Conversation

@Berezin-Leonid Berezin-Leonid commented Dec 28, 2025

Allow CLI arguments to override checkpoint hyperparameters in LightningCLI

What does this PR do?

Currently, when LightningCLI loads a configuration from a checkpoint (using --ckpt_path), the hyperparameters stored in the checkpoint overwrite any arguments passed via the command line. This behavior prevents users from overriding specific parameters (e.g., changing the learning rate) when resuming training or fine-tuning.

This PR adjusts the configuration merging logic. Now, arguments provided explicitly via the CLI (including parameters defined in config files via --config) take precedence over the hyperparameters loaded from the checkpoint.

Example:
With this change, the following command will correctly use learning_rate=0.001 instead of the value stored in epoch=1.ckpt:

python main.py fit --ckpt_path=epoch=1.ckpt --model.learning_rate=0.001

Implementation Details

To achieve this, I modified the loading logic so that hyperparameters from the checkpoint are applied as parser defaults rather than being merged directly into the configuration object.

This leverages the standard jsonargparse priority order:

  1. Command Line Arguments (Highest priority)
  2. Config Files (e.g., --config)
  3. Defaults (now includes Checkpoint Hyperparameters)

This ensures that any value explicitly provided by the user will correctly override the value stored in the checkpoint.
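As a minimal sketch of this priority order with plain jsonargparse (independent of the actual LightningCLI internals; the argument name and values are made up for illustration):

```python
from jsonargparse import ArgumentParser

parser = ArgumentParser()
parser.add_argument("--model.learning_rate", type=float, default=0.01)

# Hyperparameters loaded from the checkpoint are applied as parser defaults
# (in this PR that happens inside LightningCLI; the dict here stands in for
# the checkpoint's stored hparams).
parser.set_defaults({"model.learning_rate": 0.05})

# An explicit command line argument still wins over the checkpoint default:
cfg = parser.parse_args(["--model.learning_rate=0.001"])
assert cfg.model.learning_rate == 0.001

# Without an explicit argument, the checkpoint value is used:
cfg = parser.parse_args([])
assert cfg.model.learning_rate == 0.05
```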

Addressing the conversation in #21255 about CLI override priority.

Before submitting
  • Was this discussed/agreed via a GitHub issue? (not for typos and docs)
  • Did you read the contributor guideline, Pull Request section?
  • Did you make sure your PR does only one thing, instead of bundling different changes together?
  • Did you make sure to update the documentation with your changes? (if necessary)
  • Did you write any new necessary tests? (not for typos and docs)
  • Did you verify new and existing tests pass locally with your changes?
  • Did you list all the breaking changes introduced by this pull request?
  • Did you update the CHANGELOG? (not for typos, docs, test updates, or minor internal changes/refactors)

PR review

Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the review guidelines. In short, see the following bullet-list:

Reviewer checklist
  • Is this pull request ready for review? (if not, please submit in draft mode)
  • Check that all items from Before submitting are resolved
  • Make sure the title is self-explanatory and the description concisely explains the PR
  • Add labels and milestones (and optionally projects) to the PR so it can be classified

📚 Documentation preview 📚: https://pytorch-lightning--21455.org.readthedocs.build/en/21455/

    Allow CLI arguments to override checkpoint hyperparameters in LightningCLI
@github-actions github-actions bot added the pl Generic label for PyTorch Lightning package label Dec 28, 2025
Leonid added 2 commits December 28, 2025 23:54
Updated `test_lightning_cli_ckpt_path_argument_hparams_subclass_mode` to match the new resolution logic.

Previously, the test expected the model class from the checkpoint to override the `--model` argument provided in the CLI. With this fix, explicit CLI arguments take precedence: the test now asserts that `BoringCkptPathModel` (provided via the CLI) is instantiated instead of `BoringCkptPathSubclass` (stored in the checkpoint).

Also updated `test_lightning_cli_ckpt_path_argument_hparams` to catch `KeyError` (`NSKeyError`) instead of expecting `SystemExit`, as the new `set_defaults` mechanism raises a precise key error on mismatch.
@mauvilsa (Contributor) left a comment:
I like the idea of setting the hparams as defaults. So it seems this wasn't as difficult as I thought. There are a few details I think should change as I comment below.

Please change the pull request description to avoid "Fixes #21255". I mean, merging this pull request should not close #21255, since it isn't an actual fix for it. It would be a fix for your specific comment, but not for the entire issue. As a side note, the fix for #21255 would be #21408.

Comment on lines 593 to 595:

    except KeyError:
        sys.stderr.write("Parsing of ckpt_path hyperparameters failed!\n")
        raise
mauvilsa:
This is changing the prior behavior, going from an exit that prints the usage, to an ugly exception with a long stack trace. The behavior should not change. Maybe it would work correctly by doing

Suggested change:

    -    except KeyError:
    -        sys.stderr.write("Parsing of ckpt_path hyperparameters failed!\n")
    -        raise
    +    except KeyError as ex:
    +        sys.stderr.write("Parsing of ckpt_path hyperparameters failed!\n")
    +        parser.error(str(ex), ex)

Berezin-Leonid (Author): Done

    except KeyError:
        sys.stderr.write("Parsing of ckpt_path hyperparameters failed!\n")
        raise
    self.parse_arguments(parser, args)
mauvilsa:
Like this, the entire parsing is done twice. Speed is not a problem, since training/predicting is normally significantly slower than parsing. However, there might be some edge case where the first parsing fails even though the second one would succeed. But I am fine with leaving it like this; if problematic edge cases turn up, we can fix them when they happen.

Comment on lines -559 to -561
    -    assert isinstance(cli.model, BoringCkptPathSubclass)
    +    assert not isinstance(cli.model, BoringCkptPathSubclass)
         assert cli.model.hidden_dim == 8
         assert cli.model.extra is True
mauvilsa:
Why did this change? Old behaviors should not change, unless there was bug. Was there a bug? If not, revert back.

Berezin-Leonid (Author):
Yes, the old behavior was effectively a bug. It violated the standard priority Checkpoint < CLI args.

Previously, explicit flags like --model=BoringCkptPathModel were ignored in favor of the checkpoint class. The updated test confirms that the CLI argument now correctly takes precedence.

This also complements PR #21408: while that PR handles parameter adaptation, this ensures the class itself can be overridden. Together, they enable a clean Training -> Inference workflow with different classes.
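For instance, a workflow along these lines should now behave as described (hypothetical checkpoint path, mirroring the test scenario):

```sh
# Train with the subclass, then predict with the base class; the explicit
# --model flag now takes precedence over the class recorded in the checkpoint.
python main.py fit --model=BoringCkptPathSubclass
python main.py predict \
    --ckpt_path="lightning_logs/version_0/checkpoints/epoch=0.ckpt" \
    --model=BoringCkptPathModel
```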

mauvilsa:
One drawback of setting hparams as defaults is that the typical left-to-right parsing priority of command line arguments is not respected. For example, a command like --model=BoringCkptPathModel --ckpt_path=... can be confusing, because --ckpt_path appears after --model, and arguments to the right usually take precedence. Since the checkpoint specifies a particular model, it would be logical for it to override the model argument. It might make sense to require that if --ckpt_path is provided, it should be the first argument after the subcommand. This would maintain the convention that arguments on the right have higher priority. However, enforcing --ckpt_path as the first argument could be difficult to implement cleanly.

Ignoring the order of command line arguments, the reason for including the --model=BoringCkptPathModel argument in this test is that the model is required. Without this argument, parsing fails due to the missing model. In fact, this is a case where the initial parsing fails, but the second parsing (after applying checkpoint hparams defaults) succeeds. Ideally, if a checkpoint is provided, specifying a model argument should not be mandatory, since the checkpoint already contains one. The model argument could be optional and used to override the checkpoint's model if given, but not required. However, implementing this might be tricky. The parser must still mark the model as required; otherwise, the output of --help would be misleading. One possible solution is to temporarily patch the subparsers during parsing to make the model argument optional, and then, if the model is not provided, call parser.error.
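A rough sketch of that last idea (purely illustrative, not part of this PR; it assumes `subparser` and `args` are in scope and relies on jsonargparse's `required_args` parser attribute):

```python
# Temporarily make "model" optional so that parsing can succeed when the
# checkpoint is expected to supply the model class.
saved_required = set(subparser.required_args)
subparser.required_args.discard("model")
try:
    cfg = subparser.parse_args(args)
finally:
    # Restore the requirement so --help keeps reporting "model" as required.
    subparser.required_args = saved_required

# If neither the CLI nor a checkpoint supplies a model, fail with the usual
# parser error (usage message + exit) instead of a long stack trace.
if cfg.get("model") is None and cfg.get("ckpt_path") is None:
    subparser.error('"model" is required when no --ckpt_path is given')
```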

    assert cli.config_init.predict.model.layer.out_features == 3

    err = StringIO()
    with mock.patch("sys.argv", ["any.py"] + cli_args), redirect_stderr(err), pytest.raises(SystemExit):
mauvilsa:
Suggested change:

    with mock.patch("sys.argv", ["any.py"] + cli_args), redirect_stderr(err), pytest.raises(SystemExit):
Revert this back, since this behavior should not change.

Berezin-Leonid (Author):
Done

Comment on lines 652 to 655:

    target_classes = 10
    assert new_cli.model.num_classes == target_classes, (
        f"Checkpoint restoration failed! Expected num_classes {target_classes}, got {new_cli.model.num_classes}"
    )
mauvilsa:
num_classes=10 is the default of the model and not overridden anywhere. Maybe this assert is unnecessary?

Berezin-Leonid (Author):
Done. Removed the redundant assertion.

- Changed error in _parse_ckpt_path and tests for it
Berezin-Leonid (Author) commented Jan 12, 2026:

I've addressed all the feedback. Ready for review!

codecov bot commented Jan 14, 2026:

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 79%. Comparing base (9df1910) to head (fb86acf).
⚠️ Report is 1 commit behind head on master.
✅ All tests successful. No failed tests found.

❗ There is a different number of reports uploaded between BASE (9df1910) and HEAD (fb86acf).

HEAD has 717 fewer uploads than BASE:

Flag               BASE (9df1910)   HEAD (fb86acf)
cpu                196              33
python             18               3
lightning_fabric   53               0
pytest             98               0
python3.12         53               9
python3.10         18               3
lightning          90               15
python3.12.7       54               9
python3.11         36               6
python3.13         17               3
pytorch2.2.2       9                3
pytest-full        98               33
pytorch2.8         18               6
pytorch_lightning  53               18
pytorch2.4.1       9                3
pytorch2.1         18               6
pytorch2.7         9                3
pytorch2.6         9                3
pytorch2.3         9                3
pytorch2.9         8                3
pytorch2.5.1       9                3
Additional details and impacted files
@@            Coverage Diff            @@
##           master   #21455     +/-   ##
=========================================
- Coverage      87%      79%     -8%     
=========================================
  Files         270      267      -3     
  Lines       24059    24008     -51     
=========================================
- Hits        20855    18959   -1896     
- Misses       3204     5049   +1845     

Comment on lines +591 to +592:

    if parser._subcommands_action is None:
        return
mauvilsa (Contributor) commented Jan 15, 2026:
How come you added this? The `if` condition on line 575 is supposed to already return when no subcommands are defined. Maybe the condition is not robust enough, since it would not work when a model has a `subcommand` parameter. Still, if there are no subcommands, why execute lines 577 up to here? It might be better to change line 575 to `if parser._subcommands_action is None or not self.config.get("subcommand"):`.
