Commit 96fcc17

jiangmencity authored and psychedelicious committed
chore: fix some comments
Signed-off-by: jiangmencity <[email protected]>
1 parent a4d58aa commit 96fcc17

6 files changed, +8 -8 lines changed


docs/contributing/MODEL_MANAGER.md

+3 -3
@@ -265,7 +265,7 @@ If the key is unrecognized, this call raises an
 
 #### exists(key) -> AnyModelConfig
 
-Returns True if a model with the given key exists in the databsae.
+Returns True if a model with the given key exists in the database.
 
 #### search_by_path(path) -> AnyModelConfig
 
@@ -718,7 +718,7 @@ When downloading remote models is implemented, additional
 configuration information, such as list of trigger terms, will be
 retrieved from the HuggingFace and Civitai model repositories.
 
-The probed values can be overriden by providing a dictionary in the
+The probed values can be overridden by providing a dictionary in the
 optional `config` argument passed to `import_model()`. You may provide
 overriding values for any of the model's configuration
 attributes. Here is an example of setting the
@@ -841,7 +841,7 @@ variable.
 
 #### installer.start(invoker)
 
-The `start` method is called by the API intialization routines when
+The `start` method is called by the API initialization routines when
 the API starts up. Its effect is to call `sync_to_config()` to
 synchronize the model record store database with what's currently on
 disk.
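
The hunk above documents that values probed from a model file can be overridden by passing a `config` dictionary to `import_model()`. A minimal, self-contained sketch of that idea; the attribute names and the `merge_probed_config` helper are illustrative only, not InvokeAI's actual implementation:

```python
# Illustrative sketch only: how probed attributes might be merged with a
# caller-supplied `config` override dictionary, as the documentation above
# describes. This is not InvokeAI's actual import_model() implementation.
def merge_probed_config(probed: dict, overrides: dict | None = None) -> dict:
    """Return probed model attributes with any overrides applied on top."""
    merged = dict(probed)
    merged.update(overrides or {})
    return merged


# Hypothetical probed values and an override of the model's name:
probed = {"name": "flux1-schnell", "type": "main", "format": "checkpoint"}
print(merge_probed_config(probed, {"name": "my-custom-model"}))
```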

docs/contributing/contributors.md

+1 -1
@@ -16,7 +16,7 @@ We thank [all contributors](https://github.com/invoke-ai/InvokeAI/graphs/contrib
 - @psychedelicious (Spencer Mabrito) - Web Team Leader
 - @joshistoast (Josh Corbett) - Web Development
 - @cheerio (Mary Rogers) - Lead Engineer & Web App Development
-- @ebr (Eugene Brodsky) - Cloud/DevOps/Sofware engineer; your friendly neighbourhood cluster-autoscaler
+- @ebr (Eugene Brodsky) - Cloud/DevOps/Software engineer; your friendly neighbourhood cluster-autoscaler
 - @sunija - Standalone version
 - @brandon (Brandon Rising) - Platform, Infrastructure, Backend Systems
 - @ryanjdick (Ryan Dick) - Machine Learning & Training

invokeai/backend/quantization/scripts/load_flux_model_bnb_llm_int8.py

+1 -1
@@ -20,7 +20,7 @@ def main():
         "/data/invokeai/models/.download_cache/https__huggingface.co_black-forest-labs_flux.1-schnell_resolve_main_flux1-schnell.safetensors/flux1-schnell.safetensors"
     )
 
-    with log_time("Intialize FLUX transformer on meta device"):
+    with log_time("Initialize FLUX transformer on meta device"):
         # TODO(ryand): Determine if this is a schnell model or a dev model and load the appropriate config.
         p = params["flux-schnell"]

invokeai/backend/quantization/scripts/load_flux_model_bnb_nf4.py

+1 -1
@@ -33,7 +33,7 @@ def main():
     )
 
     # inference_dtype = torch.bfloat16
-    with log_time("Intialize FLUX transformer on meta device"):
+    with log_time("Initialize FLUX transformer on meta device"):
         # TODO(ryand): Determine if this is a schnell model or a dev model and load the appropriate config.
         p = params["flux-schnell"]

invokeai/backend/quantization/scripts/quantize_t5_xxl_bnb_llm_int8.py

+1 -1
@@ -27,7 +27,7 @@ def main():
     """
     model_path = Path("/data/misc/text_encoder_2")
 
-    with log_time("Intialize T5 on meta device"):
+    with log_time("Initialize T5 on meta device"):
         model_config = AutoConfig.from_pretrained(model_path)
         with accelerate.init_empty_weights():
             model = AutoModelForTextEncoding.from_config(model_config)
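
The three quantization scripts touched above all wrap model construction in a `log_time(...)` context manager and build the model on the meta device via `accelerate.init_empty_weights()`. A self-contained sketch of that pattern follows; the `log_time` helper here is an illustrative stand-in (the repository defines its own), and the model path is hypothetical:

```python
import time
from contextlib import contextmanager

import accelerate
from transformers import AutoConfig, AutoModelForTextEncoding


@contextmanager
def log_time(name: str):
    """Illustrative timing helper; the repository's log_time may differ."""
    start = time.time()
    try:
        yield
    finally:
        print(f"'{name}' completed in {time.time() - start:.2f}s")


with log_time("Initialize T5 on meta device"):
    # Hypothetical model directory; substitute a real T5 text-encoder path.
    model_config = AutoConfig.from_pretrained("/data/misc/text_encoder_2")
    with accelerate.init_empty_weights():
        # Parameters are created on the meta device, so no weight memory is
        # allocated until a state dict is loaded later.
        model = AutoModelForTextEncoding.from_config(model_config)
```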

scripts/get_external_contributions.py

+1 -1
@@ -46,7 +46,7 @@ def fetch_commits_between_tags(
     commit_info: list[CommitInfo] = []
     headers = {"Authorization": f"token {token}"} if token else None
 
-    # Get the total number of pages w/ an intial request - a bit hacky but it works...
+    # Get the total number of pages w/ an initial request - a bit hacky but it works...
     response = requests.get(
         f"https://api.github.com/repos/{org_name}/{repo_name}/compare/{from_ref}...{to_ref}?page=1&per_page=100",
         headers=headers,
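
The comment fixed above refers to an initial request whose only purpose is to discover how many pages of results exist. One common way to do that with `requests` is to read the pagination `Link` header that the GitHub API returns; the sketch below assumes the compare endpoint paginates its commits and uses an illustrative tag range, so it is not the repository's exact approach:

```python
import re

import requests

# Illustrative initial request: fetch page 1 and derive the total page count
# from the `Link` header GitHub attaches to paginated responses.
response = requests.get(
    "https://api.github.com/repos/invoke-ai/InvokeAI/compare/v5.0.0...v5.1.0",
    params={"page": 1, "per_page": 100},
)
response.raise_for_status()

# requests parses the Link header into response.links, e.g. {"last": {"url": ...}}.
last_url = response.links.get("last", {}).get("url", "")
match = re.search(r"[?&]page=(\d+)", last_url)
total_pages = int(match.group(1)) if match else 1
print(f"total pages: {total_pages}")
```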
