
Commit be6dfed

Authored by Caozhou1995 and caozhou

[Core] Fix import error (#386)

To accommodate the scenario where the before_start field is used to switch to the actual environment during program execution, we have placed the import statements inside the function body rather than at the beginning of the file.

Co-authored-by: caozhou <[email protected]>

1 parent: e75d3d2

File tree: 2 files changed (+5 −1 lines changed)


flagscale/runner/auto_tuner/memory_model.py (+2 −1)

```diff
@@ -1,9 +1,10 @@
 from flagscale.runner.auto_tuner.utils import convert_config_to_megatron_args
-from flagscale.train.theoretical_memory_usage import report_theoretical_memory


 def default_model(strategy, config):
     """Use megatron built in memory model."""
+    from flagscale.train.theoretical_memory_usage import report_theoretical_memory
+
     args = convert_config_to_megatron_args(config, strategy)
     num_microbatches = (
         config.train.model.global_batch_size
```

run.py (+3)

```diff
@@ -8,6 +8,9 @@
 from flagscale.runner.runner_train import CloudTrainRunner, SSHTrainRunner
 from flagscale.runner.utils import is_master

+# To accommodate the scenario where the before_start field is used to switch to the actual environment during program execution,
+# we have placed the import statements inside the function body rather than at the beginning of the file.
+

 @hydra.main(version_base=None, config_name="config")
 def main(config: DictConfig) -> None:
```