
Commit 14e351f

Authored by hiyuh and Masaru Kimura

Fix LinearLR end_factor default. (#1439)

* Fix LinearLR end_factor default. The `torch.optim.lr_scheduler.LinearLR` `end_factor` default was 5, but it should be 1.0. Most probably it was a typo influenced by the `total_iters` default (5). See also https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LinearLR.html
* Update RELEASENOTES.md.

Co-authored-by: Masaru Kimura <[email protected]>

1 parent 5ba1402 commit 14e351f

File tree: 2 files changed (+3, -1 lines)

Diff for: RELEASENOTES.md (+2 lines)

@@ -6,6 +6,8 @@ Releases, starting with 9/2/2021, are listed with the most recent release at the
 __Bug Fixes__:

 #1426 Sequential.eval() does not put model into eval mode<br/>
+`torch.optim.lr_scheduler.LinearLR` `end_factor` default has been corrected, is now 1.0.<br/>
+
 # NuGet Version 0.105.0

 Move to libtorch 2.5.1. As with the 2.4.0 release, MacOS / Intel is no longer supported by libtorch, so TorchSharp doesn't, either.

Diff for: src/TorchSharp/Optimizers/LRScheduler.cs (+1, -1 lines)

@@ -1398,7 +1398,7 @@ public static LRScheduler SequentialLR(Optimizer optimizer, IEnumerable<LRSchedu
 /// </param>
 /// <param name="verbose">If true, prints a message to stdout for each update. Default: false.</param>
 /// <returns>A scheduler</returns>
-public static LRScheduler LinearLR(Optimizer optimizer, double start_factor = 1.0 / 3, double end_factor = 5, int total_iters = 5, int last_epoch = -1, bool verbose = false)
+public static LRScheduler LinearLR(Optimizer optimizer, double start_factor = 1.0 / 3, double end_factor = 1.0, int total_iters = 5, int last_epoch = -1, bool verbose = false)
 {
     return new impl.LinearLR(optimizer, start_factor, end_factor, total_iters, last_epoch, verbose);
 }
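To see why the old default mattered, here is a minimal, dependency-free sketch of the factor schedule that PyTorch documents for `LinearLR`: the learning rate is the base rate multiplied by a factor that moves linearly from `start_factor` to `end_factor` over `total_iters` epochs, then stays at `end_factor`. The helper name `linear_lr_factor` is hypothetical, written only to illustrate the effect of the corrected default.

```python
def linear_lr_factor(epoch, start_factor=1.0 / 3, end_factor=1.0, total_iters=5):
    """Multiplier applied to the base learning rate at a given epoch.

    Interpolates linearly from start_factor to end_factor over
    total_iters epochs, then stays constant at end_factor.
    """
    t = min(epoch, total_iters)
    return start_factor + (end_factor - start_factor) * t / total_iters

base_lr = 0.1

# With the corrected default (end_factor=1.0) the schedule warms up
# from base_lr/3 to base_lr and then holds there.
print([round(base_lr * linear_lr_factor(e), 4) for e in range(7)])

# With the old buggy default (end_factor=5) the rate would instead
# climb past base_lr and settle at 5x the base learning rate.
print([round(base_lr * linear_lr_factor(e, end_factor=5), 4) for e in range(7)])
```

With `end_factor = 5`, any caller relying on the default would silently end up training at five times the configured learning rate after the warm-up, which is why the fix aligns the TorchSharp default with PyTorch's documented default of 1.0.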

0 commit comments