Commit 580316f

Fix instructions for changing demo precision
1 parent ea0e81a · commit 580316f

1 file changed: +2 −2 lines

README.md

Lines changed: 2 additions & 2 deletions
@@ -101,8 +101,8 @@ python demo.py --cfg-path eval_configs/minigpt4_llama2_eval.yaml --gpu-id 0
 To save GPU memory, LLMs loads as 8 bit by default, with a beam search width of 1.
 This configuration requires about 23G GPU memory for 13B LLM and 11.5G GPU memory for 7B LLM.
 For more powerful GPUs, you can run the model
-in 16 bit by setting low_resource to False in the config file
-[minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml) and use a larger beam search width.
+in 16 bit by setting `low_resource` to `False` in the relevant config file
+(line 6 of either [minigpt4_eval.yaml](eval_configs/minigpt4_eval.yaml#6) if using Vicuna or [minigpt4_llama2_eval.yaml](eval_configs/minigpt4_llama2_eval.yaml#6) if using Llama 2) and use a larger beam search width.
 
 Thanks [@WangRongsheng](https://github.com/WangRongsheng), you can also run our code on [Colab](https://colab.research.google.com/drive/1OK4kYsZphwt5DXchKkzMBjYF6jnkqh4R?usp=sharing)
 
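For reference, the `low_resource` flag the new text points at sits near the top of each eval config. Below is a minimal sketch of the relevant block, assuming the layout of `eval_configs/minigpt4_eval.yaml` around the time of this commit; the surrounding keys are illustrative and may not match the file verbatim:

```yaml
model:
  arch: mini_gpt4
  model_type: pretrain_vicuna0
  max_txt_len: 160
  end_sym: "###"
  low_resource: False  # line 6: False loads the LLM in 16 bit; the default True loads it in 8 bit to save GPU memory
```

Note that loading in 16 bit roughly doubles the weight memory relative to the 8-bit default, so budget GPU memory accordingly before flipping this flag.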