
run-locally-issues #9

Open

Aliktk opened this issue Dec 16, 2024 · 0 comments

Comments


Aliktk commented Dec 16, 2024

@Asma-Alkhaldi I need help running the repo locally.

I am trying to set it up and have installed everything.
When cloning Llama-2-7b-chat-hf, does it matter whether it goes inside the repo or outside? Or should I only download the checkpoints and point to them in the config, e.g. llama_model: "/miniGPT-Med/llama-2-7b-chat-hf"? For reference, the sketch below is roughly what I am editing.
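Both the file name and the absolute path here are my guesses at the layout, not something from the docs:

```yaml
# my assumption: the model config that demo_v2.py loads via eval_configs/minigptv2_eval.yaml
model:
  # guessed absolute path to the cloned Hugging Face checkpoint folder
  llama_model: "D:/All Projects/MiniGPT-Med/llama-2-7b-chat-hf"
```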

Also, do you have any tips for this?
Can I run it without a GPU? I have a limited setup for now.

Thank you for your help

I tried without cloning Llama-2-7b first:

(miniGPT-Med) D:\All Projects\MiniGPT-Med>python demo_v2.py --cfg-path eval_configs/minigptv2_eval.yaml --gpu-id 0

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
CUDA SETUP: Required library version not found: libsbitsandbytes_cpu.so. Maybe you need to compile it from source?
CUDA SETUP: Defaulting to libbitsandbytes_cpu.so...
argument of type 'WindowsPath' is not iterable
C:\Users\Nawaz\anaconda3\envs\miniGPT-Med\lib\site-packages\bitsandbytes\cextension.py:31: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
Initializing Chat
Traceback (most recent call last):
  File "D:\All Projects\MiniGPT-Med\demo_v2.py", line 63, in <module>
    model = model_cls.from_config(model_config).to(device)
  File "D:\All Projects\MiniGPT-Med\minigpt4\models\minigpt_v2.py", line 114, in from_config
    model = cls(
  File "D:\All Projects\MiniGPT-Med\minigpt4\models\minigpt_v2.py", line 46, in __init__
    super().__init__(
  File "D:\All Projects\MiniGPT-Med\minigpt4\models\minigpt_base.py", line 41, in __init__
    self.llama_model, self.llama_tokenizer = self.init_llm(
  File "D:\All Projects\MiniGPT-Med\minigpt4\models\base_model.py", line 174, in init_llm
    llama_tokenizer = LlamaTokenizer.from_pretrained(llama_model_path, use_fast=False)
  File "C:\Users\Nawaz\anaconda3\envs\miniGPT-Med\lib\site-packages\transformers\tokenization_utils_base.py", line 1784, in from_pretrained   
    resolved_vocab_files[file_id] = cached_file(
  File "C:\Users\Nawaz\anaconda3\envs\miniGPT-Med\lib\site-packages\transformers\utils\hub.py", line 417, in cached_file
    resolved_file = hf_hub_download(
  File "C:\Users\Nawaz\anaconda3\envs\miniGPT-Med\lib\site-packages\huggingface_hub\utils\_validators.py", line 110, in _inner_fn
    validate_repo_id(arg_value)
  File "C:\Users\Nawaz\anaconda3\envs\miniGPT-Med\lib\site-packages\huggingface_hub\utils\_validators.py", line 158, in validate_repo_id      
    raise HFValidationError(
huggingface_hub.utils._validators.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/miniGPT-Med/llama-2-7b-chat-hf'. Use `repo_type` argument if needed.
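
If I read the last line right, LlamaTokenizer.from_pretrained cannot find "/miniGPT-Med/llama-2-7b-chat-hf" on disk, so transformers falls back to treating the string as a Hugging Face Hub repo id, and validation rejects it because of the leading slash. A quick check along these lines should show whether the path even resolves (both paths are just my guesses at where the checkpoint might be):

```python
from pathlib import Path

# hypothetical checkpoint locations -- adjust to wherever llama-2-7b-chat-hf was cloned
candidates = [
    Path("/miniGPT-Med/llama-2-7b-chat-hf"),
    Path(r"D:\All Projects\MiniGPT-Med\llama-2-7b-chat-hf"),
]
for p in candidates:
    # from_pretrained() needs the folder to exist and contain the tokenizer files
    print(p, "exists:", p.exists(), "tokenizer.model:", (p / "tokenizer.model").exists())
```

So presumably llama_model needs to be either an absolute path that actually exists, or a plain hub id such as meta-llama/Llama-2-7b-chat-hf?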