
Python library conflict #55

Open
errorduplicator opened this issue Jan 23, 2025 · 12 comments

Comments

@errorduplicator

Relevant version information

torch 2.5.1
transformers 4.48.1
Python 3.12.8 | packaged by Anaconda, Inc.
Windows 10 1709 x64

Using Hugging Face

# Use a pipeline as a high-level helper
from transformers import pipeline

messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe = pipeline("text-generation", model="deepseek-ai/DeepSeek-R1", trust_remote_code=True)
pipe(messages)

Error message

Traceback (most recent call last):
  File "my_main_script.py", line 7, in <module>
    pipe = pipeline("text-generation", model="deepseek-ai/DeepSeek-R1", trust_remote_code=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_python_path\Lib\site-packages\transformers\pipelines\__init__.py", line 940, in pipeline
    framework, model = infer_framework_load_model(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_python_path\Lib\site-packages\transformers\pipelines\base.py", line 289, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_python_path\Lib\site-packages\transformers\models\auto\auto_factory.py", line 553, in from_pretrained
    model_class = get_class_from_dynamic_module(
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_python_path\Lib\site-packages\transformers\dynamic_module_utils.py", line 553, in get_class_from_dynamic_module
    return get_class_in_module(class_name, final_module, force_reload=force_download)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "my_python_path\Lib\site-packages\transformers\dynamic_module_utils.py", line 250, in get_class_in_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 999, in exec_module
  File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
  File "my_user_dir\.cache\huggingface\modules\transformers_modules\deepseek-ai\DeepSeek-R1\4dc77d4932316bdaa6f255ee7ad2ea3733a8ca23\modeling_deepseek.py", line 44, in <module>
    from transformers.pytorch_utils import (
ImportError: cannot import name 'is_torch_greater_or_equal_than_1_13' from 'transformers.pytorch_utils' (D:\Programs\anaconda3\envs\ai\Lib\site-packages\transformers\pytorch_utils.py). Did you mean: 'is_torch_greater_or_equal_than_2_1'?

The problem comes mainly from lines 44–47 of modeling_deepseek.py:

from transformers.pytorch_utils import (
    ALL_LAYERNORM_LAYERS,
    is_torch_greater_or_equal_than_1_13,
)
@Huowuge

Huowuge commented Jan 24, 2025

Same error here: Python 3.12, Windows 11.
I downgraded transformers to 2.21 and Python to 3.10; that error disappeared, but a new one appeared: flash-attn needs to be installed, and no version installs successfully.

@zharry29

Same issue!

@psychpsych

Hey,

Check out the doc:

Hugging Face's Transformers has not been directly supported yet.

@zharry29

I got things to work using transformers==4.37.2 and flash_attn==1.0.5 (reference, reference). Hopefully support for the latest version of Transformers will be added soon.
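
If it helps anyone reproduce that setup, here is a quick sanity check (a minimal sketch; the pins are just the ones from this comment and may need adjusting for your platform):

# Minimal sketch: print the installed versions before retrying the pipeline;
# the expected pins come from the comment above.
from importlib.metadata import version

print("transformers:", version("transformers"))  # expecting 4.37.2
print("flash_attn:", version("flash_attn"))      # expecting 1.0.5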

@quanq666

Got the same issue: ImportError: cannot import name 'is_torch_greater_or_equal_than_1_13' from 'transformers.pytorch_utils'

@quanq666

Got the same issue: ImportError: cannot import name 'is_torch_greater_or_equal_than_1_13' from 'transformers.pytorch_utils'

Updated to is_torch_greater_or_equal_than_2_1 works for me.

@ruidazeng

Got the same issue: ImportError: cannot import name 'is_torch_greater_or_equal_than_1_13' from 'transformers.pytorch_utils'

Updated to is_torch_greater_or_equal_than_2_1 works for me.

Which line did you fix it in?

@ruidazeng

I got things to work using transformers==4.37.2 and flash_attn==1.0.5 (reference, reference). Hopefully support for the latest version of Transformers will be added soon.

Did you fix the "Unknown quantization type, got fp8" issue?

@Jiadalee

Got the same issue: ImportError: cannot import name 'is_torch_greater_or_equal_than_1_13' from 'transformers.pytorch_utils'

Updated to is_torch_greater_or_equal_than_2_1 works for me.

Which line did you fix it in?

I fixed it, but as you mentioned, I also got the "Unknown quantization type, got fp8" issue.

@Jiadalee

Got the same issue: ImportError: cannot import name 'is_torch_greater_or_equal_than_1_13' from 'transformers.pytorch_utils'

Updated to is_torch_greater_or_equal_than_2_1 works for me.

Please change both lines 46 and 69 from _1_13 to _2_1.
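
For reference, a sketch of that edit in the cached modeling_deepseek.py; aliasing the new helper to the old name keeps the rest of the file unchanged (this assumes torch >= 2.1, which the reported torch 2.5.1 satisfies):

# Sketch of the patched import around line 44 of modeling_deepseek.py:
# the removed 1.13 helper is replaced by the 2.1 one, and the alias keeps
# the later usage (around line 69 of that file) working without edits.
from transformers.pytorch_utils import (
    ALL_LAYERNORM_LAYERS,
    is_torch_greater_or_equal_than_2_1 as is_torch_greater_or_equal_than_1_13,
)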

@quanq666

Got the same issue: ImportError: cannot import name 'is_torch_greater_or_equal_than_1_13' from 'transformers.pytorch_utils'

Updated to is_torch_greater_or_equal_than_2_1 works for me.

Which line did you fix it in?

I fixed it, but as you mentioned, I also got the "Unknown quantization type, got fp8" issue.

Yeah, I also got the fp8 exception; I think that comes from DeepSeek's default model config.
Since I'm trying to run it on my MacBook (M1), I need to override the quantization configuration during model loading -> set device_map="auto"
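
In case it helps, a rough sketch of that workaround (hypothetical and untested; dropping quantization_config is an assumption about what "override the quantization configuration" means, and the full R1 weights are far larger than an M1 can hold):

# Rough sketch (assumption): strip the fp8 quantization_config that this
# transformers version does not recognize, then let accelerate place the
# weights via device_map="auto".
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "deepseek-ai/DeepSeek-R1"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
if hasattr(config, "quantization_config"):
    delattr(config, "quantization_config")  # drop the unsupported fp8 block

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    trust_remote_code=True,
    device_map="auto",
)

Note that device_map="auto" needs the accelerate package installed.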

@Jiadalee

Got the same issue: ImportError: cannot import name 'is_torch_greater_or_equal_than_1_13' from 'transformers.pytorch_utils'

Updated to is_torch_greater_or_equal_than_2_1 works for me.

Which line did you fix it in?

I fixed it, but as you mentioned, I also got the "Unknown quantization type, got fp8" issue.

Yeah, I also got the fp8 exception; I think that comes from DeepSeek's default model config. Since I'm trying to run it on my MacBook (M1), I need to override the quantization configuration during model loading -> set device_map="auto"

@quanq666 You mean you fixed the fp8 issue by setting device_map="auto"?
