
Commit f405483

Authored by echarlaix, helena-intel, pcuenca, ezelanza, and nikita-savelyevv
Add openvino VLM blog post (#3071)
* add openvino VLM blog post
* Update openvino-vlm.md Co-authored-by: Helena Kloosterman <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Update openvino-vlm.md
* Update openvino-vlm.md Co-authored-by: Pedro Cuenca <[email protected]>
* Add benchmark (#3076)
* rephrase
* apply comment
* add author
* fix typo
* rephrase intro
* rephrase
* rephrase
* typo
* typo
* typo
* remove smolvlm image
* remove vlm section
* rephrase
* rephrase
* rephrase
* typo
* add space
* Update openvino-vlm.md Co-authored-by: Eze Lanza (Eze) <[email protected]>
* fix benchmark table
* move prefill and decoder column closer
* add pytorch model
* remove first_generate latency
* fix table
* update metrics
* apply comment
* fix typo
* remove redundant introduction first paragraph
* add post training quantzation doc links
* highlight dynamic quantization in note
* fix
* update benchmark section
* rephrase
* add links to model
* update static quantization config
* Update openvino-vlm.md Co-authored-by: Nikita Savelyev <[email protected]>
* Update openvino-vlm.md Co-authored-by: Nikita Savelyev <[email protected]>
* fix title
* Update openvino-vlm.md Co-authored-by: Eze Lanza (Eze) <[email protected]>
* Update openvino-vlm.md Co-authored-by: Eze Lanza (Eze) <[email protected]>
* add as note
* add as note
* add comment
* Update openvino-vlm.md Co-authored-by: Eze Lanza (Eze) <[email protected]>
* fix link
* add speedup
* fix date

---------

Co-authored-by: Helena Kloosterman <[email protected]>
Co-authored-by: Pedro Cuenca <[email protected]>
Co-authored-by: Eze Lanza (Eze) <[email protected]>
Co-authored-by: Nikita Savelyev <[email protected]>
1 parent ebb5e8a commit f405483

File tree

2 files changed: +208 −0 lines changed


_blog.yml

Lines changed: 11 additions & 0 deletions
@@ -6787,3 +6787,14 @@
   tags:
     - coreml
     - apple
+
+- local: openvino-vlm
+  title: "Get your VLM running in 3 simple steps on Intel CPUs"
+  author: ezelanza
+  thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png
+  date: Oct 13, 2025
+  tags:
+    - intel
+    - optimum
+    - quantization
+    - inference

openvino-vlm.md

Lines changed: 197 additions & 0 deletions
@@ -0,0 +1,197 @@
---
title: "Get your VLM running in 3 simple steps on Intel CPUs"
thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png
authors:
- user: ezelanza
  guest: true
  org: Intel
- user: helenai
  guest: true
  org: Intel
- user: nikita-savelyev-intel
  guest: true
  org: Intel
- user: echarlaix
- user: IlyasMoutawwakil
---
# Get your VLM running in 3 simple steps on Intel CPUs

With the growing capability of large language models (LLMs), a new class of models has emerged: [Vision Language Models (VLMs)](https://huggingface.co/blog/vlms-2025). These models can analyze images and videos to describe scenes, create captions, and answer questions about visual content.

Running AI models on your own device can be challenging, since these models are often computationally demanding. But it also offers significant benefits: improved privacy, because your data stays on your machine, and better speed and reliability, because you're not dependent on an internet connection or external servers. This is where tools like [Optimum Intel](https://huggingface.co/docs/optimum-intel/en/index) and [OpenVINO](https://docs.openvino.ai/2025/index.html) come in, along with a small, efficient model like [SmolVLM](https://huggingface.co/blog/smolvlm). In this blog post, we'll walk you through three easy steps to get a VLM running locally, with no expensive hardware or GPUs required (though you can run all the code samples from this blog post on Intel GPUs).

## Deploy your model with Optimum

Small models like SmolVLM are built for low resource consumption, but they can be optimized further. In this blog post, we will see how to optimize your model to lower memory usage and speed up inference, making it more efficient to deploy on devices with limited resources.

To follow this tutorial, you need to install Optimum Intel with its OpenVINO extra, which you can do with:

```bash
pip install optimum-intel[openvino] transformers==4.52.*
```

## Step 1: Convert your model

First, you will need to convert your model to the [OpenVINO IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html). There are multiple ways to do this:

1. You can use the [Optimum CLI](https://huggingface.co/docs/optimum-intel/en/openvino/export#using-the-cli):

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct smolvlm_ov/
```

2. Or you can convert it [on the fly](https://huggingface.co/docs/optimum-intel/en/openvino/export#when-loading-your-model) when loading your model:

```python
from optimum.intel import OVModelForVisualCausalLM

model_id = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"
model = OVModelForVisualCausalLM.from_pretrained(model_id)
model.save_pretrained("smolvlm_ov")
```

## Step 2: Quantization

Now it’s time to optimize your model. Quantization reduces the precision of the model's weights and/or activations, leading to smaller, faster models. Essentially, it's a way to map values from a high-precision data type, such as 32-bit floating-point numbers (FP32), to a lower-precision format, typically 8-bit integers (INT8). While this process offers several key benefits, it can also result in some loss of accuracy.

<p align="center">
  <img src="https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/quantization.png" alt="Quantization" width="700"/>
</p>
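
To make this concrete, here is a small, self-contained sketch of the core idea, mapping a handful of FP32 values to 8-bit integers with a single scale and zero point (a toy illustration, not what OpenVINO does internally):

```python
import numpy as np

# Asymmetric 8-bit quantization of one small tensor.
weights = np.array([-1.7, -0.3, 0.0, 0.8, 2.4], dtype=np.float32)

scale = (weights.max() - weights.min()) / 255.0
zero_point = int(np.round(-weights.min() / scale))

quantized = np.clip(np.round(weights / scale) + zero_point, 0, 255).astype(np.uint8)
dequantized = (quantized.astype(np.float32) - zero_point) * scale

print(quantized)    # [  0  87 106 156 255]
print(dequantized)  # close to the original values, with a small rounding error
```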

Optimum supports two main post-training quantization methods:

- [Weight Only Quantization (WOQ)](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#weight-only-quantization)
- [Static Quantization](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#full-quantization)

Let’s explore each of them.

### Option 1: Weight Only Quantization

Weight-only quantization means that only the weights are quantized while activations remain in their original precision. As a result, the model becomes smaller and more memory-efficient, improving loading times. But since activations are not quantized, inference speed gains are limited. Weight-only quantization is a simple first step since it usually doesn’t result in significant accuracy degradation.

> [!NOTE]
> Since OpenVINO 2024.3, if the model's weights have been quantized, the corresponding activations will also be quantized at runtime, leading to additional speedup depending on the device.

To run it, you will need to create a quantization configuration, `OVWeightQuantizationConfig`, as follows:

```python
from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig

q_config = OVWeightQuantizationConfig(bits=8)
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_int8")
```

or equivalently using the CLI:

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct --weight-format int8 smolvlm_int8/
```
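
Weight-only quantization mainly shrinks the model on disk and in memory. As a quick sanity check, you can compare the size of the exported folders, assuming you kept the `smolvlm_ov` and `smolvlm_int8` directories from the steps above:

```python
from pathlib import Path

def dir_size_mb(path: str) -> float:
    # Sum the size of every file in an exported model directory.
    return sum(f.stat().st_size for f in Path(path).rglob("*") if f.is_file()) / 1e6

print(f"smolvlm_ov  : {dir_size_mb('smolvlm_ov'):.1f} MB")
print(f"smolvlm_int8: {dir_size_mb('smolvlm_int8'):.1f} MB")
```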

### Option 2: Static Quantization

With static quantization, both weights and activations are quantized before inference. To get the best estimate of the activation quantization parameters, we perform a calibration step, during which a small representative dataset is fed through the model. In our case, we will use 50 samples of the [contextual dataset](https://huggingface.co/datasets/ucla-contextual/contextual_test), applying static quantization to the vision encoder and weight-only quantization to the rest of the model. Experiments show that applying static quantization to the vision encoder provides a noticeable performance improvement without significant accuracy degradation. Since the vision encoder is called only once per generation, the overall gain from quantizing this component is lower than the gain achieved by optimizing more frequently used components like the language model. Nevertheless, this approach can be beneficial in certain scenarios, for example when short answers are needed, especially with multiple images as input.

```python
from optimum.intel import OVModelForVisualCausalLM, OVPipelineQuantizationConfig, OVQuantizationConfig, OVWeightQuantizationConfig

# Calibration data described above: 50 samples of the contextual dataset
dataset = "contextual"
num_samples = 50

q_config = OVPipelineQuantizationConfig(
    quantization_configs={
        "lm_model": OVWeightQuantizationConfig(bits=8),
        "text_embeddings_model": OVWeightQuantizationConfig(bits=8),
        "vision_embeddings_model": OVQuantizationConfig(bits=8),
    },
    dataset=dataset,
    num_samples=num_samples,
)
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_static_int8")
```

Quantizing activations adds small errors that can build up and affect accuracy, so careful testing afterward is important. More information and examples can be found in [our documentation](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#pipeline-quantization).

## Step 3: Run inference

You can now run inference with your quantized model:

```python
generated_ids = q_model.generate(**inputs, max_new_tokens=100)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])
```
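
Here, `processor` and `inputs` are the standard `transformers` processor and processed inputs for the model. A minimal sketch of one way to build them, assuming the [flower image](https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/flower.png) used in the benchmark below and an example question:

```python
import requests
from PIL import Image
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(model_id)

# Example input image (the flower sample used in the benchmark below).
url = "https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/flower.png"
image = Image.open(requests.get(url, stream=True).raw)

# Build a chat-style prompt containing one image and one question.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What can you see in this image?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
```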

If you have a recent Intel laptop, Intel AI PC, or Intel discrete GPU, you can load the model on GPU by adding `device="gpu"` when loading your model:

```python
model = OVModelForVisualCausalLM.from_pretrained(model_id, device="gpu")
```
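
If you are not sure which devices OpenVINO detects on your machine, you can list them with the `openvino` runtime installed earlier:

```python
import openvino as ov

# Lists the devices OpenVINO can target on this machine, e.g. ['CPU', 'GPU'].
print(ov.Core().available_devices)
```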

We also created a [space](https://huggingface.co/spaces/echarlaix/vision-langage-openvino) so you can play with the [original model](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino) and its quantized variants, obtained by respectively applying [weight-only quantization](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino-8bit-woq-data-free) and [mixed quantization](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino-8bit-mixed). This demo runs on 4th Generation Intel Xeon (Sapphire Rapids) processors.

<p align="center">
  <img src="https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/chat1.png" alt="HF Space" width="500"/>
</p>

To reproduce our results, check out our [notebook](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb).

## Evaluation and Conclusion

We ran a benchmark to compare the performance of the [PyTorch](https://huggingface.co/HuggingFaceTB/SmolVLM2-256M-Video-Instruct), [OpenVINO](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino), and [OpenVINO 8-bit WOQ](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino-8bit-woq-data-free) versions of the original model. The goal was to evaluate the impact of weight-only quantization on latency and throughput on Intel CPU hardware. For this test, we used [a single image](https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/flower.png) as input.

We measured the following metrics to evaluate the model's performance:
- Time To First Token (TTFT): Time it takes to generate the first output token.
- Time Per Output Token (TPOT): Time it takes to generate each subsequent output token.
- End-to-End Latency: Total time it takes to generate all output tokens.
- Decoding Throughput: Number of tokens per second the model generates during the decoding phase.
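
One rough way to approximate these metrics yourself is simple wall-clock timing, reusing `q_model` and `inputs` from the steps above (a sketch that ignores warm-up and assumes generation does not stop early at an end-of-sequence token):

```python
import time

# Approximate TTFT by timing a single-token generation.
start = time.perf_counter()
q_model.generate(**inputs, max_new_tokens=1)
ttft = time.perf_counter() - start

# Approximate TPOT and decoding throughput from a longer generation.
n_tokens = 100
start = time.perf_counter()
q_model.generate(**inputs, max_new_tokens=n_tokens)
total = time.perf_counter() - start

tpot = (total - ttft) / (n_tokens - 1)
print(f"TTFT ~ {ttft:.3f}s, TPOT ~ {tpot:.3f}s, decoding ~ {1 / tpot:.1f} tokens/s")
```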

Here are the results on Intel CPU:

| Configuration     | Time To First Token (TTFT, s) | Time Per Output Token (TPOT, s) | End-to-End Latency (s) | Decoding Throughput (tokens/s) |
|-------------------|-------------------------------|---------------------------------|------------------------|--------------------------------|
| pytorch           | 5.150                         | 1.385                           | 25.927                 | 0.722                          |
| openvino          | 0.420                         | 0.021                           | 0.738                  | 47.237                         |
| openvino-8bit-woq | 0.247                         | 0.016                           | 0.482                  | 63.928                         |

This benchmark demonstrates how small, optimized multimodal models like [SmolVLM2-256M](https://huggingface.co/HuggingFaceTB/SmolVLM2-256M-Video-Instruct) perform on Intel CPUs across different configurations. In our tests, the PyTorch version shows high latency, with a time to first token (TTFT) of over 5s and a decoding throughput of ~0.7 tokens/s. Simply converting the model with Optimum and running it on OpenVINO drastically reduces the TTFT to 0.42s (~x12 speedup) and raises throughput to ~47 tokens/s (~x65). Applying 8-bit weight-only quantization further reduces TTFT (x1.7) and increases throughput (x1.4), while also reducing model size and improving efficiency.

> [!NOTE]
> **Platform configuration** for the performance claims above:
>
> **System Board:** MSI B860M GAMING PLUS WIFI (MS-7E42)
> **CPU:** Intel® Core™ Ultra 7 265K
> **Sockets/Physical Cores:** 1/20 (20 threads)
> **HyperThreading/Turbo Settings:** Disabled
> **Memory:** 64 GB DDR5 @ 6400 MHz
> **TDP:** 665W
> **BIOS:** American Megatrends International, LLC. 2.A10
> **BIOS Release Date:** 28.11.2024
> **OS:** Ubuntu 24.10
> **Kernel:** 6.11.0-25-generic
> **OpenVINO Version:** 2025.2.0
> **torch:** 2.8.0
> **torchvision:** 0.23.0+cpu
> **optimum-intel:** 1.25.2
> **transformers:** 4.53.3
> **Benchmark Date:** 15.05.2025
> **Benchmarked by:** Intel Corporation
>
> Performance may vary by use, configuration, and other factors. See the platform configuration above.

## Useful Links & Resources

- [Notebook](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb)
- [Try our Space](https://huggingface.co/spaces/echarlaix/vision-langage-openvino)
- [Watch the webinar recording](https://web.cvent.com/event/d550a2a7-04f2-4a28-b641-3af228e318ca/regProcessStep1?utm_campaign=speakers4&utm_medium=organic&utm_source=Community)
- [Optimum Intel Documentation](https://huggingface.co/docs/optimum-intel/en/openvino/inference)

> [!NOTE]
> ## Notices & Disclaimers
> Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
> Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.
> © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.
