RuntimeError: "slow_conv2d_cpu" not implemented for 'Half' #121

Open
algfwq opened this issue Feb 1, 2025 · 4 comments

Comments

algfwq commented Feb 1, 2025

I am using the CPU, and I get the following error.

Full output:
(janus) D:\janus\Janus>python demo/app_januspro.py
Python version is above 3.10, patching the collections module.
D:\miniconda4\envs\janus\lib\site-packages\torchvision\datapoints\__init__.py:12: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: pytorch/vision#6753, and you can also check out pytorch/vision#7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
warnings.warn(_BETA_TRANSFORMS_WARNING)
D:\miniconda4\envs\janus\lib\site-packages\torchvision\transforms\v2\__init__.py:54: UserWarning: The torchvision.datapoints and torchvision.transforms.v2 namespaces are still Beta. While we do not expect major breaking changes, some APIs may still change according to user feedback. Please submit any feedback you may have in this issue: pytorch/vision#6753, and you can also check out pytorch/vision#7319 to learn more about the APIs that we suspect might involve future changes. You can silence this warning by calling torchvision.disable_beta_transforms_warning().
warnings.warn(_BETA_TRANSFORMS_WARNING)
D:\miniconda4\envs\janus\lib\site-packages\transformers\models\auto\image_processing_auto.py:590: FutureWarning: The image_processor_class argument is deprecated and will be removed in v4.42. Please use slow_image_processor_class, or fast_image_processor_class instead
warnings.warn(
D:\miniconda4\envs\janus\lib\site-packages\torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
return self.fget.__get__(instance, owner)()
Using a slow image processor as use_fast is unset and a slow processor was saved with this model. use_fast=True will be the default behavior in v4.48, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with use_fast=False.
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama_fast.LlamaTokenizerFast'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565 - if you loaded a llama tokenizer from a GGUF file you can ignore this message.
Some kwargs in processor config are unused and will not have any effect: add_special_token, num_image_tokens, sft_format, ignore_id, image_tag, mask_prompt.
Running on local URL: http://127.0.0.1:7860
IMPORTANT: You are using gradio version 3.48.0, however version 4.44.1 is available, please upgrade.

Running on public URL: https://602c81a57fc72a5900.gradio.live

This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Traceback (most recent call last):
File "D:\miniconda4\envs\janus\lib\site-packages\gradio\routes.py", line 534, in predict
output = await route_utils.call_process_api(
File "D:\miniconda4\envs\janus\lib\site-packages\gradio\route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "D:\miniconda4\envs\janus\lib\site-packages\gradio\blocks.py", line 1550, in process_api
result = await self.call_function(
File "D:\miniconda4\envs\janus\lib\site-packages\gradio\blocks.py", line 1185, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\miniconda4\envs\janus\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "D:\miniconda4\envs\janus\lib\site-packages\anyio_backends_asyncio.py", line 2461, in run_sync_in_worker_thread
return await future
File "D:\miniconda4\envs\janus\lib\site-packages\anyio_backends_asyncio.py", line 962, in run
result = context.run(func, *args)
File "D:\miniconda4\envs\janus\lib\site-packages\gradio\utils.py", line 661, in wrapper
response = f(*args, **kwargs)
File "D:\miniconda4\envs\janus\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "D:\janus\Janus\demo\app_januspro.py", line 62, in multimodal_understanding
inputs_embeds = vl_gpt.prepare_inputs_embeds(**prepare_inputs)
File "D:\janus\Janus\janus\models\modeling_vlm.py", line 246, in prepare_inputs_embeds
images_embeds = self.aligner(self.vision_model(images))
File "D:\miniconda4\envs\janus\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\janus\Janus\janus\models\clip_encoder.py", line 120, in forward
image_forward_outs = self.vision_tower(images, **self.forward_kwargs)
File "D:\miniconda4\envs\janus\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\janus\Janus\janus\models\siglip_vit.py", line 586, in forward
x = self.forward_features(x)
File "D:\janus\Janus\janus\models\siglip_vit.py", line 563, in forward_features
x = self.patch_embed(x)
File "D:\miniconda4\envs\janus\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\miniconda4\envs\janus\lib\site-packages\timm\layers\patch_embed.py", line 131, in forward
x = self.proj(x)
File "D:\miniconda4\envs\janus\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "D:\miniconda4\envs\janus\lib\site-packages\torch\nn\modules\conv.py", line 463, in forward
return self._conv_forward(input, self.weight, self.bias)
File "D:\miniconda4\envs\janus\lib\site-packages\torch\nn\modules\conv.py", line 459, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: "slow_conv2d_cpu" not implemented for 'Half'

@Vaderpucong

You need to install torch with CUDA support:
pip install torch==2.0.1+cu117 --index-url https://download.pytorch.org/whl/cu117
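
A quick way to confirm the CUDA build is active afterwards (this only assumes torch imports):

import torch

print(torch.__version__)          # e.g. 2.0.1+cu117 for the CUDA wheel
print(torch.cuda.is_available())  # True once the wheel matches your driver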

algfwq (Author) commented Feb 2, 2025

> You need to install torch with CUDA support:
> pip install torch==2.0.1+cu117 --index-url https://download.pytorch.org/whl/cu117

No, that is not necessary. I have solved the problem by changing "float16" to "float32".
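
A minimal sketch of that change, assuming the demo casts vl_gpt right after loading (the next comment shows the surrounding code):

# was: vl_gpt = vl_gpt.to(torch.float16)  # Half conv2d is unsupported on CPU
vl_gpt = vl_gpt.to(torch.float32)  # float32 kernels exist for every op the demo uses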

LIRUILONGS commented Feb 2, 2025

First, confirm whether you are running on the CPU or the GPU:

cuda_device = 'cuda' if torch.cuda.is_available() else 'cpu'
print("Device in use:", cuda_device)

If you run on the GPU, install a CUDA build of torch:

PS C:\Users\Administrator\Documents\GitHub\Janus> pip uninstall torch                                                              
WARNING: Skipping torch as it is not installed.
PS C:\Users\Administrator\Documents\GitHub\Janus> pip install torch==2.2.2+cu118 --index-url https://download.pytorch.org/whl/cu118
Looking in indexes: https://download.pytorch.org/whl/cu118
Collecting torch==2.2.2+cu118
  Downloading https://download.pytorch.org/whl/cu118/torch-2.2.2%2Bcu118-cp310-cp310-win_amd64.whl (2704.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.7/2.7 GB 10.1 MB/s eta 0:00:00

If you have no GPU and run on the CPU only, you need to change these two places:

if torch.cuda.is_available():
    vl_gpt = vl_gpt.to(torch.bfloat16).cuda()
else:
    # was: vl_gpt = vl_gpt.to(torch.float16)  -- Half conv2d has no CPU kernel
    vl_gpt = vl_gpt.to(torch.float32)

pil_images = [Image.fromarray(image)]
prepare_inputs = vl_chat_processor(
    conversations=conversation, images=pil_images, force_batchify=True
# was: ).to(cuda_device, dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float16)
).to(cuda_device, dtype=torch.bfloat16 if torch.cuda.is_available() else torch.float32)
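
After these edits, a quick sanity check (assuming vl_gpt is the loaded model) confirms the cast took effect:

print(next(vl_gpt.parameters()).dtype)   # expect torch.float32 when running on CPU
print(next(vl_gpt.parameters()).device)  # expect cpu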

bieyl commented Feb 3, 2025

@algfwq When I run python demo/app_januspro.py, the URL interface keeps loading. Did you run into this?
