Merged
Changes from all commits
63 commits
15d3828
add openvino VLM blog post
echarlaix Sep 12, 2025
9cc1717
Update openvino-vlm.md
echarlaix Sep 16, 2025
15f6f5f
Update openvino-vlm.md
echarlaix Sep 16, 2025
d6033a8
Update openvino-vlm.md
echarlaix Sep 16, 2025
cfbcca1
Update openvino-vlm.md
echarlaix Sep 16, 2025
6441337
Update openvino-vlm.md
echarlaix Sep 16, 2025
a6ee9d9
Update openvino-vlm.md
echarlaix Sep 16, 2025
c527b8b
Update openvino-vlm.md
echarlaix Sep 16, 2025
042ae0f
Update openvino-vlm.md
echarlaix Sep 16, 2025
e69c8ea
Update openvino-vlm.md
echarlaix Sep 16, 2025
69f09cc
Update openvino-vlm.md
echarlaix Sep 16, 2025
be7aef2
Update openvino-vlm.md
echarlaix Sep 16, 2025
d2523c0
Update openvino-vlm.md
echarlaix Sep 16, 2025
d17cf36
Add benchmark (#3076)
ezelanza Sep 16, 2025
cfda70f
rephrase
echarlaix Sep 16, 2025
47c9baf
apply comment
echarlaix Sep 16, 2025
6ae3d81
add author
echarlaix Sep 16, 2025
e3e410e
fix typo
echarlaix Sep 18, 2025
bcd87da
rephrase intro
echarlaix Sep 18, 2025
18ae0ce
rephrase
echarlaix Sep 18, 2025
da03f28
rephrase
echarlaix Sep 18, 2025
05e2a60
typo
echarlaix Sep 18, 2025
7e76c89
typo
echarlaix Sep 18, 2025
9214619
typo
echarlaix Sep 18, 2025
9250d8b
remove smolvlm image
echarlaix Sep 18, 2025
137beb3
remove vlm section
echarlaix Sep 18, 2025
3fd228f
rephrase
echarlaix Sep 18, 2025
6badb73
rephrase
echarlaix Sep 18, 2025
78a3fd6
rephrase
echarlaix Sep 18, 2025
bb6296d
typo
echarlaix Sep 18, 2025
481fddb
add space
echarlaix Sep 18, 2025
80a6000
Update openvino-vlm.md
echarlaix Oct 2, 2025
7e37d5d
fix benchmark table
echarlaix Oct 2, 2025
fb2de47
move prefill and decoder column closer
echarlaix Oct 3, 2025
23c40bc
add pytorch model
echarlaix Oct 3, 2025
77ea0cd
remove first_generate latency
echarlaix Oct 3, 2025
d027179
fix table
echarlaix Oct 3, 2025
8fc6928
update metrics
echarlaix Oct 3, 2025
57f4a34
apply comment
echarlaix Oct 7, 2025
27ff34a
fix typo
echarlaix Oct 7, 2025
3779587
remove redundant introduction first paragraph
echarlaix Oct 7, 2025
34c1612
add post training quantzation doc links
echarlaix Oct 7, 2025
078bf4e
highlight dynamic quantization in note
echarlaix Oct 7, 2025
f026273
fix
echarlaix Oct 7, 2025
7b7eb57
update benchmark section
echarlaix Oct 7, 2025
43c2a52
rephrase
echarlaix Oct 7, 2025
e79375c
add links to model
echarlaix Oct 7, 2025
da05b4e
update static quantization config
echarlaix Oct 8, 2025
4a6c6b6
Update openvino-vlm.md
echarlaix Oct 9, 2025
60aff81
Update openvino-vlm.md
echarlaix Oct 10, 2025
f2d302a
merge main
echarlaix Oct 10, 2025
5cd869c
fix title
echarlaix Oct 10, 2025
107947c
Update openvino-vlm.md
echarlaix Oct 10, 2025
caeb255
Update openvino-vlm.md
echarlaix Oct 10, 2025
29fbb32
add as note
echarlaix Oct 10, 2025
cf81b66
add as note
echarlaix Oct 10, 2025
efb7cbf
add comment
echarlaix Oct 10, 2025
2713e55
Update openvino-vlm.md
echarlaix Oct 13, 2025
b6c88dc
fix link
echarlaix Oct 13, 2025
069af77
add speedup
echarlaix Oct 13, 2025
96a7b76
fix date
echarlaix Oct 13, 2025
a87e919
typo
echarlaix Oct 13, 2025
ee29381
Update openvino-vlm.md
echarlaix Oct 13, 2025
11 changes: 11 additions & 0 deletions _blog.yml
@@ -6787,3 +6787,14 @@
tags:
- coreml
- apple

- local: openvino-vlm
title: "Get your VLM running in 3 simple steps on Intel CPUs"
author: ezelanza
thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png
date: Oct 13, 2025
tags:
- intel
- optimum
- quantization
- inference
197 changes: 197 additions & 0 deletions openvino-vlm.md
@@ -0,0 +1,197 @@
---
title: "Get your VLM running in 3 simple steps on Intel CPUs"
thumbnail: /blog/assets/optimum_intel/intel_thumbnail.png
authors:
- user: ezelanza
guest: true
org: Intel
- user: helenai
guest: true
org: Intel
- user: nikita-savelyev-intel
guest: true
org: Intel
- user: echarlaix
- user: IlyasMoutawwakil
---

# Get your VLM running in 3 simple steps on Intel CPUs

With the growing capability of large language models (LLMs), a new class of models has emerged: [Vision Language Models (VLMs)](https://huggingface.co/blog/vlms-2025). These models can analyze images and videos to describe scenes, create captions, and answer questions about visual content.

Running AI models on your own device can be challenging, since these models are often computationally demanding. But it also offers significant benefits, including improved privacy, since your data stays on your machine, and better speed and reliability, because you're not dependent on an internet connection or external servers. This is where tools like [Optimum Intel](https://huggingface.co/docs/optimum-intel/en/index) and [OpenVINO](https://docs.openvino.ai/2025/index.html) come in, along with a small, efficient model like [SmolVLM](https://huggingface.co/blog/smolvlm). In this blog post, we'll walk you through three easy steps to get a VLM running locally, with no expensive hardware or GPUs required (though all the code samples in this post also run on Intel GPUs).


## Deploy your model with Optimum

Small models like SmolVLM are built for low-resource consumption, but they can be further optimized. In this blog post, we will see how to optimize your model to lower memory usage and speed up inference, making it more efficient to deploy on devices with limited resources.

To follow this tutorial, you need to install Optimum Intel with its OpenVINO backend, which you can do with:

```bash
pip install optimum-intel[openvino] transformers==4.52.*
```

## Step 1: Convert your model

First, you will need to convert your model to the [OpenVINO IR](https://docs.openvino.ai/2025/documentation/openvino-ir-format.html) format. There are two ways to do it:

1. You can use the [Optimum CLI](https://huggingface.co/docs/optimum-intel/en/openvino/export#using-the-cli)

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct smolvlm_ov/
```

2. Or you can convert it [on the fly](https://huggingface.co/docs/optimum-intel/en/openvino/export#when-loading-your-model) when loading your model:

```python
from optimum.intel import OVModelForVisualCausalLM

model_id = "HuggingFaceTB/SmolVLM2-256M-Video-Instruct"
model = OVModelForVisualCausalLM.from_pretrained(model_id)
model.save_pretrained("smolvlm_ov")
```
Review comment (Member): Do we want to establish a reference baseline about speed/memory at this point?

## Step 2: Quantization

Now it's time to optimize your model. Quantization reduces the precision of the model's weights and/or activations, leading to smaller, faster models. Essentially, it's a way to map values from a high-precision data type, such as 32-bit floating-point numbers (FP32), to a lower-precision format, typically 8-bit integers (INT8). While this process offers several key benefits, it can also result in some loss of accuracy.

<p align="center">
<img src="https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/quantization.png" alt="Quantization" width="700"/>
</p>
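
To build intuition, here is a minimal sketch of INT8 affine quantization. The tensor values, scale, and zero-point handling below are purely illustrative and not the exact scheme OpenVINO applies:

```python
import numpy as np

# Toy example: map FP32 values to INT8 using a scale and zero-point
x = np.array([-1.2, 0.0, 0.5, 2.3], dtype=np.float32)

scale = (x.max() - x.min()) / 255.0            # spread the observed range over 256 integer levels
zero_point = np.round(-x.min() / scale) - 128  # integer offset so x.min() maps to -128

x_int8 = np.clip(np.round(x / scale) + zero_point, -128, 127).astype(np.int8)
x_dequant = (x_int8.astype(np.float32) - zero_point) * scale  # approximate reconstruction

print(x_int8)     # e.g. [-128  -41   -5  127]
print(x_dequant)  # values close to the original FP32 tensor
```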

Optimum supports two main post-training quantization methods:

- [Weight Only Quantization (WOQ)](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#weight-only-quantization)
- [Static Quantization](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#full-quantization)

Let’s explore each of them.

### Option 1: Weight Only Quantization

Weight-only quantization means that only the weights are quantized while activations remain in their original precision. As a result, the model becomes smaller and more memory-efficient, improving loading times. But since activations are not quantized, inference speed gains are limited. Weight-only quantization is a simple first step since it usually doesn't result in significant accuracy degradation.

> [!NOTE]
> Since OpenVINO 2024.3, if the model's weights have been quantized, the corresponding activations will also be dynamically quantized at runtime, leading to additional speedups depending on the device.

To apply it, you will need to create a quantization configuration, `OVWeightQuantizationConfig`, as follows:

```python
from optimum.intel import OVModelForVisualCausalLM, OVWeightQuantizationConfig

q_config = OVWeightQuantizationConfig(bits=8)
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_int8")
```

or equivalently using the CLI:

```bash
optimum-cli export openvino -m HuggingFaceTB/SmolVLM2-256M-Video-Instruct --weight-format int8 smolvlm_int8/
```
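
Once saved, the quantized model can be reloaded later from its local directory without re-running quantization. A minimal sketch:

```python
from optimum.intel import OVModelForVisualCausalLM

# Load the previously saved INT8 model; no quantization step is rerun
q_model = OVModelForVisualCausalLM.from_pretrained("smolvlm_int8")
```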

### Option 2: Static Quantization

With static quantization, both weights and activations are quantized before inference. To get a good estimate of the activation quantization parameters, we perform a calibration step, during which a small representative dataset is fed through the model. In our case, we will use 50 samples of the [contextual dataset](https://huggingface.co/datasets/ucla-contextual/contextual_test) and apply static quantization to the vision encoder, while weight-only quantization is applied to the rest of the model. Experiments show that applying static quantization to the vision encoder provides a noticeable performance improvement without significant accuracy degradation. Since the vision encoder is called only once per generation, the overall gain from statically quantizing this component is smaller than what can be achieved by optimizing more frequently used components like the language model. Nevertheless, this approach can be beneficial in certain scenarios, for example when short answers are needed, especially with multiple images as input.

```python
from optimum.intel import OVModelForVisualCausalLM, OVPipelineQuantizationConfig, OVQuantizationConfig, OVWeightQuantizationConfig

q_config = OVPipelineQuantizationConfig(
    quantization_configs={
        "lm_model": OVWeightQuantizationConfig(bits=8),
        "text_embeddings_model": OVWeightQuantizationConfig(bits=8),
        "vision_embeddings_model": OVQuantizationConfig(bits=8),
    },
    dataset="contextual",  # calibration dataset mentioned above
    num_samples=50,        # number of calibration samples
)
q_model = OVModelForVisualCausalLM.from_pretrained(model_id, quantization_config=q_config)
q_model.save_pretrained("smolvlm_static_int8")
```

Quantizing activations adds small errors that can build up and affect accuracy, so careful testing afterward is important. More information and examples can be found in [our documentation](https://huggingface.co/docs/optimum-intel/en/openvino/optimization#pipeline-quantization).
Review comment (Member): Also using a dataset as close to our task as possible, right?

## Step 3: Run inference
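
Before generating, the prompt and image need to be turned into model inputs with the model's processor. Here is a minimal sketch; the image URL and question are placeholders you would replace with your own:

```python
from transformers import AutoProcessor
from transformers.image_utils import load_image

processor = AutoProcessor.from_pretrained(model_id)
image = load_image("https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/flower.png")

messages = [
    {"role": "user", "content": [{"type": "image"}, {"type": "text", "text": "Describe this image."}]},
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt")
```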

You can now run inference with your quantized model:

```python
generated_ids = q_model.generate(**inputs, max_new_tokens=100)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])
```
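
To sanity-check accuracy after quantization, a quick qualitative comparison between the converted model from Step 1 and its quantized variant can help. A minimal sketch, reusing `inputs` and `processor` from above:

```python
ref_ids = model.generate(**inputs, max_new_tokens=100)
quant_ids = q_model.generate(**inputs, max_new_tokens=100)

print("converted:", processor.batch_decode(ref_ids, skip_special_tokens=True)[0])
print("quantized:", processor.batch_decode(quant_ids, skip_special_tokens=True)[0])
```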

If you have a recent Intel laptop, an Intel AI PC, or an Intel discrete GPU, you can run the model on GPU by adding `device="gpu"` when loading it:

```python
model = OVModelForVisualCausalLM.from_pretrained(model_id, device="gpu")
```

We also created a [space](https://huggingface.co/spaces/echarlaix/vision-langage-openvino) so you can play with the [original model](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino) and its quantized variants obtained by respectively applying [weight-only quantization](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino-8bit-woq-data-free) and [mixed quantization](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino-8bit-mixed). This demo runs on 4th Generation Intel Xeon (Sapphire Rapids) processors.


<p align="center">
<img src="https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/chat1.png" alt=" HF Space" width="500"/>
</p>

To reproduce our results, check out our [notebook](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb).

## Evaluation and Conclusion

We ran a benchmark to compare the performance of the [PyTorch](https://huggingface.co/HuggingFaceTB/SmolVLM2-256M-Video-Instruct), [OpenVINO](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino), and [OpenVINO 8-bit WOQ](https://huggingface.co/echarlaix/SmolVLM2-256M-Video-Instruct-openvino-8bit-woq-data-free) versions of the original model. The goal was to evaluate the impact of weight-only quantization on latency and throughput on Intel CPU hardware. For this test, we used [a single image](https://huggingface.co/datasets/OpenVINO/documentation/resolve/main/blog/openvino_vlm/flower.png) as input.

We measured the following metrics to evaluate the model's performance:
- Time To First Token (TTFT): Time it takes to generate the first output token.
- Time Per Output Token (TPOT): Time it takes to generate each subsequent output token.
- End-to-End Latency: Total time it takes to generate all output tokens.
- Decoding Throughput: Number of tokens per second the model generates during the decoding phase.
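
As a rough illustration (not the exact benchmarking harness used for the numbers below), such metrics could be approximated by timing `generate` calls:

```python
import time

# TTFT: prefill plus the first decoding step
start = time.perf_counter()
q_model.generate(**inputs, max_new_tokens=1)
ttft = time.perf_counter() - start

# End-to-end latency for a fixed number of new tokens
n_tokens = 100
start = time.perf_counter()
q_model.generate(**inputs, min_new_tokens=n_tokens, max_new_tokens=n_tokens)
e2e_latency = time.perf_counter() - start

tpot = (e2e_latency - ttft) / (n_tokens - 1)                 # average time per subsequent token
decoding_throughput = (n_tokens - 1) / (e2e_latency - ttft)  # tokens/s during decoding
print(ttft, tpot, e2e_latency, decoding_throughput)
```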

Here are the results on an Intel CPU (see the platform configuration in the note below):
Review comment (@echarlaix, Oct 3, 2025): TODO: add details on intel cpu used for the benchmark cc @ezelanza (+ let's also add conclusion once benchmark validated)

| Configuration     | Time To First Token (TTFT, s) | Time Per Output Token (TPOT, s) | End-to-End Latency (s) | Decoding Throughput (tokens/s) |
|-------------------|-------------------------------|---------------------------------|------------------------|--------------------------------|
| pytorch           | 5.150                         | 1.385                           | 25.927                 | 0.722                          |
| openvino          | 0.420                         | 0.021                           | 0.738                  | 47.237                         |
| openvino-8bit-woq | 0.247                         | 0.016                           | 0.482                  | 63.928                         |


This benchmark demonstrates how small, optimized multimodal models like [SmolVLM2-256M](https://huggingface.co/HuggingFaceTB/SmolVLM2-256M-Video-Instruct) perform on Intel CPUs across different configurations. In our tests, the PyTorch version shows high latency, with a time to first token (TTFT) of over 5s and a decoding throughput of ~0.7 tokens/s. Simply converting the model with Optimum and running it on OpenVINO drastically reduces the TTFT to 0.42s (~12x speedup) and raises throughput to ~47 tokens/s (~65x). Applying 8-bit weight-only quantization further reduces TTFT (1.7x) and increases throughput (1.4x), while also reducing model size and improving efficiency.

> [!NOTE]
> **Platform configuration**
> Platform Configuration for performance claims above:
>
> **System Board:** MSI B860M GAMING PLUS WIFI (MS-7E42)
> **CPU:** Intel® Core™ Ultra 7 265K
> **Sockets/Physical Cores:** 1/20 (20 threads)
> **HyperThreading/Turbo Settings:** Disabled
> **Memory:** 64 GB DDR5 @ 6400 MHz
> **TDP:** 665W
> **BIOS:** American Megatrends International, LLC. 2.A10
> **BIOS Release Date:** 28.11.2024
> **OS:** Ubuntu 24.10
> **Kernel:** 6.11.0-25-generic
> **OpenVINO Version:** 2025.2.0
> **torch:** 2.8.0
> **torchvision:** 0.23.0+cpu
> **optimum-intel:** 1.25.2
> **transformers:** 4.53.3
> **Benchmark Date:** 15.05.2025
> **Benchmarked by:** Intel Corporation
> Performance may vary by use, configuration, and other factors. See the platform configuration above.


## Useful Links & Resources

- [Notebook](https://github.com/huggingface/optimum-intel/blob/main/notebooks/openvino/vision_language_quantization.ipynb)
- [Try our Space](https://huggingface.co/spaces/echarlaix/vision-langage-openvino)
- [Watch the webinar recording](https://web.cvent.com/event/d550a2a7-04f2-4a28-b641-3af228e318ca/regProcessStep1?utm_campaign=speakers4&utm_medium=organic&utm_source=Community)
- [Optimum Intel Documentation](https://huggingface.co/docs/optimum-intel/en/openvino/inference)


> [!NOTE]
> ## Notices & Disclaimers
> Performance varies by use, configuration, and other factors. Learn more on the Performance Index site.
> Performance results are based on testing as of dates shown in configurations and may not reflect all publicly available updates. See backup for configuration details. No product or component can be absolutely secure. Your costs and results may vary. Intel technologies may require enabled hardware, software or service activation.
> © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries.