
infer torch.cuda.OutOfMemoryError: CUDA out of memory #10

@OldSixOne

Description

Running on a 48 GB A6000 reports OOM:

python inference-sde.py --config ./configs/inference/inferece_sde.yaml --video_root ./configs/inference/case-5/source_images --pose_root ./configs/inference/case-5/target_aligned_poses --ref_pose_root ./configs/inference/case-5/source_poses --source_mask_root ./configs/inference/case-5/source_masks --target_mask_root ./configs/inference/case-5/predicted_masks --cfg 7.08

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 494.00 MiB (GPU 0; 44.34 GiB total capacity; 42.77 GiB already allocated; 338.81 MiB free; 43.66 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

What should I do?
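
The error message itself suggests one low-effort mitigation: setting max_split_size_mb through PYTORCH_CUDA_ALLOC_CONF to reduce allocator fragmentation. Below is a minimal sketch of how that is typically wired up, not project-specific advice; the value 128 is an illustrative choice, and the environment variable must be in place before the script makes its first CUDA allocation.

```python
# Minimal sketch (assumption: illustrative, not advice from this repo).
# PYTORCH_CUDA_ALLOC_CONF is read when the CUDA caching allocator
# initializes, so set it before any CUDA work is done.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # illustrative value

import torch

# Inference should also run without autograd bookkeeping, which
# otherwise keeps activations alive; torch.inference_mode() disables it.
with torch.inference_mode():
    pass  # the model forward pass would go here
```

Equivalently, the variable can be exported in the shell (PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128) before launching inference-sde.py, which avoids editing the script. If fragmentation is not the cause and the model genuinely needs more than 48 GB at this resolution, the usual next steps are reducing the batch size, frame count, or resolution in the config.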
