
Resolve multi-GPU training error and add description of dataset format and label format #195


Open

wants to merge 2 commits into main

Conversation


@shyhyawJou shyhyawJou commented Apr 27, 2025

If the rank_zero_only decorator is not removed, users will hit the error below when running multi-GPU training with TensorBoard enabled.

Traceback (most recent call last):
  File "/media/user/disk2/mateo/ramen/YOLO-mainz/yolo/lazy.py", line 17, in main
    callbacks, loggers, save_path = setup(cfg)
  File "/media/user/disk2/mateo/ramen/YOLO-mainz/yolo/utils/logging_utils.py", line 286, in setup
    loggers.append(TensorBoardLogger(log_graph="all", save_dir=save_path))
  File "/media/user/disk2/mateo/py310_venv/sushilon/lib/python3.10/site-packages/lightning/pytorch/loggers/tensorboard.py", line 96, in __init__
    super().__init__(
  File "/media/user/disk2/mateo/py310_venv/sushilon/lib/python3.10/site-packages/lightning/fabric/loggers/tensorboard.py", line 98, in __init__
    root_dir = os.fspath(root_dir)
TypeError: expected str, bytes or os.PathLike object, not NoneType
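The traceback follows from how rank_zero_only works: it turns the decorated function into a no-op on every rank except 0, so on the other ranks setup() gets save_path = None and TensorBoardLogger's os.fspath(None) raises the TypeError. A minimal sketch of that behavior (rank_zero_only and validate_log_directory are reimplemented here for illustration; this is not Lightning's or YOLO's actual code):

```python
import os
from functools import wraps

def rank_zero_only(fn):
    """Illustrative stand-in for Lightning's rank_zero_only: the wrapped
    function only runs on rank 0; every other rank silently gets None."""
    @wraps(fn)
    def wrapped(*args, **kwargs):
        if int(os.environ.get("LOCAL_RANK", 0)) == 0:
            return fn(*args, **kwargs)
        return None  # non-zero ranks never execute fn
    return wrapped

@rank_zero_only
def validate_log_directory(base_path, exp_name):
    return os.path.join(base_path, exp_name)

os.environ["LOCAL_RANK"] = "1"          # simulate a non-main process
save_path = validate_log_directory("runs", "exp1")
print(save_path)  # None -> os.fspath(None) later raises the TypeError above
```

Running the same call with LOCAL_RANK=0 returns the real path, which is why the error only shows up on the extra DDP processes.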

@shyhyawJou shyhyawJou changed the title Multi-GPU training error Resolve multi-GPU training error and add description of dataset format and label format Apr 27, 2025
@sam31046

sam31046 commented Jul 30, 2025

I hit this issue no matter what I set device to (cpu, 0, or [0,1]). I believe it's because save_path is not shared between processes: it is only defined in the main process, while the non-main processes skip the validate_log_directory call in the setup function.

I think this PR is a temporary fix; we need to refactor setup a bit. I'm not familiar with broadcast, though.
Check out this DDP discussion: Lightning-AI/pytorch-lightning#18148 @henrytsui000
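For reference, the usual DDP pattern discussed in that thread is to let rank 0 compute the value and broadcast it to the other ranks before anyone uses it. A hedged sketch using torch.distributed.broadcast_object_list (the broadcast_save_path helper, the demo path, and the single-process demo are mine, not YOLO's code):

```python
import os
import torch.distributed as dist

def broadcast_save_path(save_path):
    """Rank 0 passes the real save_path; other ranks pass None and receive
    rank 0's value, so every process ends up with the same directory."""
    if dist.is_available() and dist.is_initialized():
        obj = [save_path if dist.get_rank() == 0 else None]
        dist.broadcast_object_list(obj, src=0)
        return obj[0]
    return save_path  # single-process fallback: nothing to broadcast

# Single-process demo: a "world" of size 1 using the gloo backend.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29501")
dist.init_process_group("gloo", rank=0, world_size=1)
print(broadcast_save_path("runs/train/exp1"))  # same value on every rank
dist.destroy_process_group()
```

In setup(), this would run right after validate_log_directory, so the non-main processes get a usable save_path instead of None.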

Helpful way to see who's calling validate_log_directory:

import os

@rank_zero_only
def validate_log_directory(cfg: Config, base_path: str, exp_name: str):
    print(f"PID={os.getpid()}, LOCAL_RANK={os.environ.get('LOCAL_RANK', 'NA')}, RANK={os.environ.get('RANK', 'NA')}")
    print("[DEBUG] validate_log_directory was called under rank_zero_only; its return value is valid only on rank 0.")
