We need an upgrade to the LMI image to incorporate the latest vLLM v0.7.2 so we can deploy a Qwen2.5-VL model. Is there any ongoing effort to make this happen?
I also have a general question: Are the build process and Dockerfiles for these images open-sourced? If so, where can I find them, and how can I contribute?
If they are not open-sourced, what is the recommended way to build on top of the existing LMI image to use the latest vLLM on my end?
I’ve tried upgrading vLLM and the Transformers library on top of the existing image.
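A sketch of the kind of upgrade I attempted, extending an existing LMI image; the base image tag and pinned versions below are illustrative guesses on my part, not an official recommendation:

```dockerfile
# Hypothetical extension of an existing LMI image.
# The base tag is an example; substitute the actual LMI image you deploy.
FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/djl-inference:0.31.0-lmi13.0.0-cu124

# Upgrade to a vLLM release with Qwen2.5-VL support, plus a Transformers
# version that includes the Qwen2.5-VL model classes.
RUN pip install --no-cache-dir "vllm==0.7.2" "transformers>=4.49.0"
```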
Inference runs successfully, but the Qwen2.5-VL model responds to text only; it does not process or understand the image inputs. Something still seems off.
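For reference, this is the shape of the request I'm using to exercise image understanding, the OpenAI-style multimodal message format that vLLM's chat API accepts (the image URL and prompt here are placeholders):

```python
import json

# Hypothetical multimodal chat payload: text plus an image_url content part.
# The image URL is a placeholder, not a real asset.
payload = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/sample.png"},
                },
            ],
        }
    ],
    "max_tokens": 128,
}

# Serialized request body as it would be sent to the endpoint.
body = json.dumps(payload)
```

If a request shaped like this still comes back as if only the text part were seen, my suspicion is that the image content parts are being dropped somewhere between the serving layer and vLLM.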
Any guidance would be appreciated.
Thanks!