Update to MiniCPM-o 2.6

yiranyyu
2025-01-14 15:33:44 +08:00
parent b75a362dd6
commit 53c0174797
123 changed files with 16848 additions and 2952 deletions


@@ -0,0 +1,333 @@
## MiniCPM-Llama3-V 2.5
> Archive at: 2025-01-13
**MiniCPM-Llama3-V 2.5** is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.0. Notable features of MiniCPM-Llama3-V 2.5 include:
- 🔥 **Leading Performance.**
MiniCPM-Llama3-V 2.5 has achieved an average score of 65.1 on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4V-1106, Gemini Pro, Claude 3 and Qwen-VL-Max** and greatly outperforms other Llama 3-based MLLMs.
- 💪 **Strong OCR Capabilities.**
MiniCPM-Llama3-V 2.5 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), achieving a **700+ score on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V-0409, Qwen-VL-Max and Gemini Pro**. Based on recent user feedback, MiniCPM-Llama3-V 2.5 has now enhanced full-text OCR extraction, table-to-markdown conversion, and other high-utility capabilities, and has further strengthened its instruction-following and complex reasoning abilities, enhancing multimodal interaction experiences.
- 🏆 **Trustworthy Behavior.**
Leveraging the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) method (the newest technique in the [RLHF-V](https://github.com/RLHF-V) [CVPR'24] series), MiniCPM-Llama3-V 2.5 exhibits more trustworthy behavior. It achieves a **10.3%** hallucination rate on Object HalBench, lower than GPT-4V-1106 (13.6%), the best level within the open-source community. [Data released](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset).
- 🌏 **Multilingual Support.**
Thanks to the strong multilingual capabilities of Llama 3 and the cross-lingual generalization technique from [VisCPM](https://github.com/OpenBMB/VisCPM), MiniCPM-Llama3-V 2.5 extends its bilingual (Chinese-English) multimodal capabilities to **over 30 languages including German, French, Spanish, Italian, Korean, etc.** [All Supported Languages](./assets/minicpm-llama-v-2-5_languages.md).
- 🚀 **Efficient Deployment.**
MiniCPM-Llama3-V 2.5 systematically employs **model quantization, CPU optimizations, NPU optimizations and compilation optimizations**, achieving high-efficiency deployment on end-side devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time. After systematic optimization, MiniCPM-Llama3-V 2.5 has realized a **150x acceleration in end-side MLLM image encoding** and a **3x speedup in language decoding**.
- 💫 **Easy Usage.**
MiniCPM-Llama3-V 2.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md) and [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5) support for efficient CPU inference on local devices, (2) [GGUF](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) format quantized models in 16 sizes, (3) efficient [LoRA](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#lora-finetuning) fine-tuning with only 2 V100 GPUs, (4) [streaming output](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage), (5) quick local WebUI demo setup with [Gradio](https://github.com/OpenBMB/MiniCPM-V/blob/main/web_demo_2.5.py) and [Streamlit](https://github.com/OpenBMB/MiniCPM-V/blob/main/web_demo_streamlit-2_5.py), and (6) interactive demos on [HuggingFace Spaces](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5).
### Evaluation <!-- omit in toc -->
<div align="center">
<img src=../assets/MiniCPM-Llama3-V-2.5-peformance.png width=66% />
</div>
<details>
<summary>Click to view results on TextVQA, DocVQA, OCRBench, OpenCompass, MME, MMBench, MMMU, MathVista, LLaVA Bench, RealWorld QA, Object HalBench. </summary>
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>OCRBench</th>
<th>TextVQA val</th>
<th>DocVQA test</th>
<th>Open-Compass</th>
<th>MME</th>
<th>MMB test (en)</th>
<th>MMB test (cn)</th>
<th>MMMU val</th>
<th>Math-Vista</th>
<th>LLaVA Bench</th>
<th>RealWorld QA</th>
<th>Object HalBench</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="14" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Gemini Pro</td>
<td>-</td>
<td>680</td>
<td>74.6</td>
<td>88.1</td>
<td>62.9</td>
<td>2148.9</td>
<td>73.6</td>
<td>74.3</td>
<td>48.9</td>
<td>45.8</td>
<td>79.9</td>
<td>60.4</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4V (2023.11.06)</td>
<td>-</td>
<td>645</td>
<td>78.0</td>
<td>88.4</td>
<td>63.5</td>
<td>1771.5</td>
<td>77.0</td>
<td>74.4</td>
<td>53.8</td>
<td>47.8</td>
<td>93.1</td>
<td>63.0</td>
<td>86.4</td>
</tr>
<tr>
<td colspan="14" align="left"><strong>Open-source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Mini-Gemini</td>
<td>2.2B</td>
<td>-</td>
<td>56.2</td>
<td>34.2*</td>
<td>-</td>
<td>1653.0</td>
<td>-</td>
<td>-</td>
<td>31.7</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen-VL-Chat</td>
<td>9.6B</td>
<td>488</td>
<td>61.5</td>
<td>62.6</td>
<td>51.6</td>
<td>1860.0</td>
<td>61.8</td>
<td>56.3</td>
<td>37.0</td>
<td>33.8</td>
<td>67.7</td>
<td>49.3</td>
<td>56.2</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">DeepSeek-VL-7B</td>
<td>7.3B</td>
<td>435</td>
<td>64.7*</td>
<td>47.0*</td>
<td>54.6</td>
<td>1765.4</td>
<td>73.8</td>
<td>71.4</td>
<td>38.3</td>
<td>36.8</td>
<td>77.8</td>
<td>54.2</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Yi-VL-34B</td>
<td>34B</td>
<td>290</td>
<td>43.4*</td>
<td>16.9*</td>
<td>52.2</td>
<td><strong>2050.2</strong></td>
<td>72.4</td>
<td>70.7</td>
<td>45.1</td>
<td>30.7</td>
<td>62.3</td>
<td>54.8</td>
<td>79.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">CogVLM-Chat</td>
<td>17.4B</td>
<td>590</td>
<td>70.4</td>
<td>33.3*</td>
<td>54.2</td>
<td>1736.6</td>
<td>65.8</td>
<td>55.9</td>
<td>37.3</td>
<td>34.7</td>
<td>73.9</td>
<td>60.3</td>
<td>73.6</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">TextMonkey</td>
<td>9.7B</td>
<td>558</td>
<td>64.3</td>
<td>66.7</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Idefics2</td>
<td>8.0B</td>
<td>-</td>
<td>73.0</td>
<td>74.0</td>
<td>57.2</td>
<td>1847.6</td>
<td>75.7</td>
<td>68.6</td>
<td>45.2</td>
<td>52.2</td>
<td>49.1</td>
<td>60.7</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Bunny-LLama-3-8B</td>
<td>8.4B</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>54.3</td>
<td>1920.3</td>
<td>77.0</td>
<td>73.9</td>
<td>41.3</td>
<td>31.5</td>
<td>61.2</td>
<td>58.8</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-NeXT Llama-3-8B</td>
<td>8.4B</td>
<td>-</td>
<td>-</td>
<td>78.2</td>
<td>-</td>
<td>1971.5</td>
<td>-</td>
<td>-</td>
<td>41.7</td>
<td>37.5</td>
<td>80.1</td>
<td>60.0</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Phi-3-vision-128k-instruct</td>
<td>4.2B</td>
<td>639*</td>
<td>70.9</td>
<td>-</td>
<td>-</td>
<td>1537.5*</td>
<td>-</td>
<td>-</td>
<td>40.4</td>
<td>44.5</td>
<td>64.2*</td>
<td>58.8*</td>
<td>-</td>
</tr>
<tr style="background-color: #e6f2ff;">
<td nowrap="nowrap" align="left">MiniCPM-V 1.0</td>
<td>2.8B</td>
<td>366</td>
<td>60.6</td>
<td>38.2</td>
<td>47.5</td>
<td>1650.2</td>
<td>64.1</td>
<td>62.6</td>
<td>38.3</td>
<td>28.9</td>
<td>51.3</td>
<td>51.2</td>
<td>78.4</td>
</tr>
<tr style="background-color: #e6f2ff;">
<td nowrap="nowrap" align="left">MiniCPM-V 2.0</td>
<td>2.8B</td>
<td>605</td>
<td>74.1</td>
<td>71.9</td>
<td>54.5</td>
<td>1808.6</td>
<td>69.1</td>
<td>66.5</td>
<td>38.2</td>
<td>38.7</td>
<td>69.2</td>
<td>55.8</td>
<td>85.5</td>
</tr>
<tr style="background-color: #e6f2ff;">
<td nowrap="nowrap" align="left">MiniCPM-Llama3-V 2.5</td>
<td>8.5B</td>
<td><strong>725</strong></td>
<td><strong>76.6</strong></td>
<td><strong>84.8</strong></td>
<td><strong>65.1</strong></td>
<td>2024.6</td>
<td><strong>77.2</strong></td>
<td><strong>74.2</strong></td>
<td><strong>45.8</strong></td>
<td><strong>54.3</strong></td>
<td><strong>86.7</strong></td>
<td><strong>63.5</strong></td>
<td><strong>89.7</strong></td>
</tr>
</tbody>
</table>
</div>
* We evaluate the officially released checkpoint by ourselves.
</details>
<div align="center">
<img src="../assets/llavabench_compare_3.png" width="100%" />
<br>
Evaluation results of multilingual LLaVA Bench
</div>
### Examples <!-- omit in toc -->
<table align="center" >
<p align="center" >
<img src="../assets/minicpmv-llama3-v2.5/cases_all.png" />
</p>
</table>
### Model Zoo
| Model | Device | Memory | &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; Description | Download |
|:-----------|:--:|:-----------:|:-------------------|:---------------:|
| MiniCPM-Llama3-V 2.5 | GPU | 19 GB | Strong end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) &nbsp;&nbsp; [<img src="../assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
| MiniCPM-Llama3-V 2.5 gguf | CPU | 6 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) &nbsp;&nbsp;[<img src="../assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) |
| MiniCPM-Llama3-V 2.5 int4 | GPU | 8 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) &nbsp;&nbsp; [<img src="../assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
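This archived page does not carry an inference snippet for MiniCPM-Llama3-V 2.5, so a minimal sketch is given below. It follows the usage pattern shown for the other models in this commit; the exact `chat()` arguments and the local image path are assumptions to verify against the Hugging Face model card (for the int4 variant, swap in the repo id `openbmb/MiniCPM-Llama3-V-2_5-int4`).

```python
# Minimal sketch (an assumption, not part of the archived doc): single-round chat with
# MiniCPM-Llama3-V 2.5 via Transformers. Verify arguments against the model card.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True,
                                  torch_dtype=torch.float16)
model = model.to(device='cuda')
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)
model.eval()

image = Image.open('./assets/airplane.jpeg').convert('RGB')  # hypothetical local image
question = 'What is in the image?'
msgs = [{'role': 'user', 'content': question}]  # 2.5 takes the image via the `image` argument

answer = model.chat(image=image, msgs=msgs, tokenizer=tokenizer,
                    sampling=True, temperature=0.7)
print(answer)
```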

docs/minicpm_v1.md (new file, 214 lines added)

@@ -0,0 +1,214 @@
## MiniCPM-V 1.0
> Archive at: 2024-05-19
MiniCPM-V 1.0 is an efficient version with promising performance for deployment. The model is built based on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of MiniCPM-V 1.0 include:
- ⚡️ **High Efficiency.**
MiniCPM-V 1.0 can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. In terms of visual encoding, we compress the image representations into 64 tokens via a perceiver resampler, which is significantly fewer than other LMMs based on MLP architecture (typically > 512 tokens). This allows MiniCPM-V 1.0 to operate with **much less memory cost and higher speed during inference**.
- 🔥 **Promising Performance.**
MiniCPM-V 1.0 achieves **state-of-the-art performance** on multiple benchmarks (including MMMU, MME, and MMBench) among models of comparable size, surpassing existing LMMs built on Phi-2. It even **achieves comparable or better performance than the 9.6B Qwen-VL-Chat**.
- 🙌 **Bilingual Support.**
MiniCPM-V 1.0 is **the first end-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from the ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038).
### Evaluation
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th nowrap="nowrap" >Visual Tokens</th>
<th>MME</th>
<th nowrap="nowrap" >MMB dev (en)</th>
<th nowrap="nowrap" >MMB dev (zh)</th>
<th nowrap="nowrap" >MMMU val</th>
<th nowrap="nowrap" >CMMMU val</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td align="left">LLaVA-Phi</td>
<td align="right">3B</td>
<td>576</td>
<td>1335</td>
<td>59.8</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MobileVLM</td>
<td align="right">3B</td>
<td>144</td>
<td>1289</td>
<td>59.6</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Imp-v1</td>
<td align="right">3B</td>
<td>576</td>
<td>1434</td>
<td>66.5</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen-VL-Chat</td>
<td align="right" >9.6B</td>
<td>256</td>
<td>1487</td>
<td>60.6 </td>
<td>56.7 </td>
<td>35.9 </td>
<td>30.7 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >CogVLM</td>
<td align="right">17.4B </td>
<td>1225</td>
<td>1438 </td>
<td>63.7 </td>
<td>53.8 </td>
<td>32.1 </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>MiniCPM-V 1.0</b></td>
<td align="right">3B </td>
<td>64</td>
<td>1452 </td>
<td>67.9 </td>
<td>65.3 </td>
<td>37.2 </td>
<td>32.1 </td>
</tr>
</tbody>
</table>
</div>
### Examples
We deploy MiniCPM-V 1.0 on end devices. The demo video is a raw screen recording on a OnePlus 9R without editing.
<table align="center">
<p align="center">
<img src="assets/gif_cases/蛇_cn.gif" width=36%/>
<img src="assets/gif_cases/Mushroom_en.gif" width=36%/>
</p>
</table>
## Install
1. Clone this repository and navigate to the source folder
```bash
git clone https://github.com/OpenBMB/OmniLMM.git
cd OmniLMM
```
2. Create conda environment
```Shell
conda create -n OmniLMM python=3.10 -y
conda activate OmniLMM
```
3. Install dependencies
```shell
pip install -r requirements.txt
```
## Inference
### Model Zoo
| Model | Description | Download Link |
|:----------------------|:-------------------|:---------------:|
| MiniCPM-V 1.0 | The efficient version for end device deployment. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V/files) |
### Multi-turn Conversation
Please refer to the following code to run `MiniCPM-V 1.0`.
<div align="center">
<img src="assets/worldmap_ck.jpg" width="500px">
</div>
```python
import json

from chat import OmniLMMChat, img2base64
chat_model = OmniLMMChat('openbmb/MiniCPM-V')
im_64 = img2base64('./assets/worldmap_ck.jpg')
# First round chat
msgs = [{"role": "user", "content": "What is interesting about this image?"}]
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
print(answer)
# Second round chat
# pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": answer})
msgs.append({"role": "user", "content": "Where is China in the image"})
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
print(answer)
```
### Inference on Mac
<details>
<summary>Click to view example, MiniCPM-V 1.0 can run on Mac with MPS (Apple silicon or AMD GPUs). </summary>
```python
# test.py
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True, torch_dtype=torch.bfloat16)
model = model.to(device='mps', dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V', trust_remote_code=True)
model.eval()
image = Image.open('./assets/worldmap_ck.jpg').convert('RGB')
question = 'What is interesting about this image?'
msgs = [{'role': 'user', 'content': question}]
answer, context, _ = model.chat(
image=image,
msgs=msgs,
context=None,
tokenizer=tokenizer,
sampling=True
)
print(answer)
```
Run with command:
```shell
PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
```
</details>
### Deployment on Mobile Phone
Currently, MiniCPM-V 1.0 can be deployed on mobile phones running Android and HarmonyOS. 🚀 Try it out [here](https://github.com/OpenBMB/mlc-MiniCPM).

docs/minicpm_v2.md (new file, 294 lines added)

@@ -0,0 +1,294 @@
## MiniCPM-V 2.0
> Archive at: 2025-01-13
**MiniCPM-V 2.0** is an efficient version with promising performance for deployment. The model is built based on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of MiniCPM-V 2.0 include:
- 🔥 **State-of-the-art Performance.**
MiniCPM-V 2.0 achieves **state-of-the-art performance** on multiple benchmarks (including OCRBench, TextVQA, MME, MMB, MathVista, etc) among models under 7B parameters. It even **outperforms strong Qwen-VL-Chat 9.6B, CogVLM-Chat 17.4B, and Yi-VL 34B on OpenCompass, a comprehensive evaluation over 11 popular benchmarks**. Notably, MiniCPM-V 2.0 shows **strong OCR capability**, achieving **comparable performance to Gemini Pro in scene-text understanding**, and **state-of-the-art performance on OCRBench** among open-source models.
- 🏆 **Trustworthy Behavior.**
LMMs are known for suffering from hallucination, often generating text not factually grounded in images. MiniCPM-V 2.0 is **the first end-side LMM aligned via multimodal RLHF for trustworthy behavior** (using the recent [RLHF-V](https://rlhf-v.github.io/) [CVPR'24] series technique). This allows the model to **match GPT-4V in preventing hallucinations** on Object HalBench.
- 🌟 **High-Resolution Images at Any Aspect Ratio.**
MiniCPM-V 2.0 can accept **1.8 million pixels (e.g., 1344x1344) images at any aspect ratio**. This enables better perception of fine-grained visual information such as small objects and optical characters, which is achieved via a recent technique from [LLaVA-UHD](https://arxiv.org/pdf/2403.11703.pdf).
- ⚡️ **High Efficiency.**
MiniCPM-V 2.0 can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into much fewer tokens via a perceiver resampler. This allows MiniCPM-V 2.0 to operate with **favorable memory cost and speed during inference even when dealing with high-resolution images**.
- 🙌 **Bilingual Support.**
MiniCPM-V 2.0 **supports strong bilingual multimodal capabilities in both English and Chinese**. This is enabled by generalizing multimodal capabilities across languages, a technique from [VisCPM](https://arxiv.org/abs/2308.12038) [ICLR'24].
### Evaluation <!-- omit in toc -->
<div align="center">
<img src=../assets/minicpmv-2-peformance.png width=66% />
</div>
<details>
<summary>Click to view results on TextVQA, DocVQA, OCRBench, OpenCompass, MME, MMBench, MMMU, MathVista, LLaVA Bench, Object HalBench. </summary>
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>TextVQA val</th>
<th>DocVQA test</th>
<th>OCRBench</th>
<th>OpenCompass</th>
<th nowrap="nowrap" >MME</th>
<th>MMB dev(en)</th>
<th>MMB dev(zh)</th>
<th>MMMU val</th>
<th>MathVista</th>
<th>LLaVA Bench</th>
<th nowrap="nowrap">Object HalBench</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="12" align="left"><strong>Proprietary models</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Gemini Pro Vision</td>
<td>- </td>
<td>74.6</td>
<td>88.1</td>
<td>680</td>
<td>63.8</td>
<td>2148.9</td>
<td>75.2</td>
<td>74.0</td>
<td>48.9</td>
<td>45.8</td>
<td>79.9</td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4V</td>
<td>- </td>
<td>78.0</td>
<td>88.4</td>
<td>645</td>
<td>63.2</td>
<td>1771.5</td>
<td>75.1</td>
<td>75.0</td>
<td>53.8</td>
<td>47.8</td>
<td>93.1</td>
<td>86.4 / 92.7</td>
</tr>
<tr>
<td colspan="12" align="left"><strong>Open-source models 6B~34B</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Yi-VL-6B</td>
<td align="right" >6.7B</td>
<td>45.5*</td>
<td>17.1*</td>
<td>290</td>
<td>49.3</td>
<td>1915.1 </td>
<td>68.6 </td>
<td>68.3 </td>
<td>40.3 </td>
<td>28.8 </td>
<td>51.9 </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen-VL-Chat</td>
<td align="right" >9.6B</td>
<td>61.5</td>
<td>62.6</td>
<td>488 </td>
<td>52.1 </td>
<td>1860.0 </td>
<td>60.6 </td>
<td>56.7 </td>
<td>37.0 </td>
<td>33.8 </td>
<td>67.7 </td>
<td>56.2 / 80.0</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Yi-VL-34B</td>
<td align="right" >34B</td>
<td>43.4*</td>
<td>16.9*</td>
<td>290</td>
<td>52.6 </td>
<td>2050.2</td>
<td>71.1</td>
<td>71.4</td>
<td>45.1</td>
<td>30.7</td>
<td>62.3</td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >DeepSeek-VL-7B</td>
<td align="right" >7.3B</td>
<td>64.7*</td>
<td>47.0* </td>
<td>435</td>
<td>55.6 </td>
<td>1765.4 </td>
<td>74.1 </td>
<td>72.8 </td>
<td>38.3 </td>
<td>36.8</td>
<td>77.8 </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >TextMonkey</td>
<td align="right" >9.7B</td>
<td>64.3</td>
<td>66.7 </td>
<td>558</td>
<td>- </td>
<td>- </td>
<td>- </td>
<td>- </td>
<td>- </td>
<td>-</td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >CogVLM-Chat</td>
<td align="right" >17.4B</td>
<td>70.4</td>
<td>33.3*</td>
<td>590 </td>
<td>52.5 </td>
<td>1736.6 </td>
<td>63.7 </td>
<td>53.8 </td>
<td>37.3 </td>
<td>34.7 </td>
<td>73.9 </td>
<td>73.6 / 87.4 </td>
</tr>
<tr>
<td colspan="12" align="left"><strong>Open-source models 1B~3B </strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >DeepSeek-VL-1.3B</td>
<td align="right" >1.7B</td>
<td>58.4*</td>
<td>37.9*</td>
<td>413</td>
<td>46.0 </td>
<td>1531.6 </td>
<td>64.0 </td>
<td>61.2 </td>
<td>33.8 </td>
<td>29.4 </td>
<td>51.1 </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >MobileVLM V2</td>
<td align="right" >3.1B</td>
<td>57.5</td>
<td>19.4*</td>
<td>-</td>
<td>-</td>
<td>1440.5(P) </td>
<td>63.2 </td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Mini-Gemini</td>
<td align="right" >2.2B</td>
<td>56.2</td>
<td>34.2*</td>
<td>-</td>
<td>-</td>
<td>1653.0 </td>
<td>59.8 </td>
<td>- </td>
<td>31.7 </td>
<td>-</td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >MiniCPM-V</td>
<td align="right" >2.8B </td>
<td>60.6</td>
<td>38.2 </td>
<td>366</td>
<td>47.6</td>
<td>1650.2 </td>
<td>67.9 </td>
<td>65.3 </td>
<td><strong>38.3</strong></td>
<td>28.9</td>
<td>51.3 </td>
<td>78.4 / 88.5 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><strong>MiniCPM-V 2.0</strong></td>
<td align="right" >2.8B </td>
<td><strong>74.1</strong></td>
<td><strong>71.9</strong> </td>
<td><strong>605</strong></td>
<td><strong>55.0</strong></td>
<td><strong>1808.6</strong> </td>
<td><strong>69.6</strong> </td>
<td><strong>68.1</strong> </td>
<td>38.2 </td>
<td><strong>38.7</strong></td>
<td><strong>69.2</strong> </td>
<td><strong>85.5 / 92.2 </strong></td>
</tr>
</tbody>
</table>
</div>
* We evaluate the officially released checkpoint by ourselves.
</details>
### Examples <!-- omit in toc -->
<table align="center">
<p align="center">
<img src="../assets/minicpmv2-cases_2.png" width=95%/>
</p>
</table>
We deploy MiniCPM-V 2.0 on end devices. The demo video is a raw screen recording on a Xiaomi 14 Pro without editing.
<table align="center">
<p align="center">
<img src="../assets/gif_cases/station.gif" width=36%/>
<img src="../assets/gif_cases/london_car.gif" width=36%/>
</p>
</table>
### Model Zoo
| Model | Device | Memory | &emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp;&emsp; Description | Download |
|:-----------|:--:|:-----------:|:-------------------|:---------------:|
| MiniCPM-V 2.0 | GPU | 8 GB | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) &nbsp;&nbsp; [<img src="../assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
| MiniCPM-V 1.0 | GPU | 7 GB | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) &nbsp;&nbsp; [<img src="../assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
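For completeness, a minimal GPU inference sketch for MiniCPM-V 2.0 follows. It mirrors the MiniCPM-V 1.0 Mac example earlier in this commit; the exact `chat()` signature and return values are assumptions to check against the `openbmb/MiniCPM-V-2` model card.

```python
# Minimal sketch (an assumption): MiniCPM-V 2.0 inference on a CUDA GPU, mirroring the
# MiniCPM-V 1.0 example above. Check the model card for the exact chat() signature.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True,
                                  torch_dtype=torch.bfloat16)
model = model.to(device='cuda', dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)
model.eval()

image = Image.open('./assets/worldmap_ck.jpg').convert('RGB')
msgs = [{'role': 'user', 'content': 'What is interesting about this image?'}]

answer, context, _ = model.chat(image=image, msgs=msgs, context=None,
                                tokenizer=tokenizer, sampling=True, temperature=0.7)
print(answer)
```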

docs/minicpm_v2dot6.md (new file, 945 lines added)

@@ -0,0 +1,945 @@
## MiniCPM-V 2.6
> Archive at: 2025-01-13
**MiniCPM-V 2.6** is the latest and most capable model in the MiniCPM-V series. The model is built on SigLip-400M and Qwen2-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-Llama3-V 2.5, and introduces new features for multi-image and video understanding. Notable features of MiniCPM-V 2.6 include:
- 🔥 **Leading Performance.**
MiniCPM-V 2.6 achieves an average score of 65.2 on the latest version of OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o mini, GPT-4V, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding.
- 🖼️ **Multi Image Understanding and In-context Learning.** MiniCPM-V 2.6 can also perform **conversation and reasoning over multiple images**. It achieves **state-of-the-art performance** on popular multi-image benchmarks such as Mantis-Eval, BLINK, Mathverse mv and Sciverse mv, and also shows promising in-context learning capability.
- 🎬 **Video Understanding.** MiniCPM-V 2.6 can also **accept video inputs**, performing conversation and providing dense captions for spatial-temporal information. It outperforms **GPT-4V, Claude 3.5 Sonnet and LLaVA-NeXT-Video-34B** on Video-MME with/without subtitles.
- 💪 **Strong OCR Capability and Others.**
MiniCPM-V 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V, and Gemini 1.5 Pro**.
Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, with significantly lower hallucination rates than GPT-4o and GPT-4V on Object HalBench, and supports **multilingual capabilities** in English, Chinese, German, French, Italian, Korean, etc.
- 🚀 **Superior Efficiency.**
In addition to its friendly size, MiniCPM-V 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-V 2.6 can efficiently support **real-time video understanding** on end-side devices such as iPad.
- 💫 **Easy Usage.**
MiniCPM-V 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpmv-main/examples/llava/README-minicpmv2.6.md) and [ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#inference-with-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with [Gradio](#chat-with-our-demo-on-gradio), and (6) online web [demo](http://120.92.209.146:8887/).
### Evaluation <!-- omit in toc -->
<div align="center">
<img src=../assets/radar_final.png width=66% />
</div>
<details>
<summary>Click to view single image results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench, Object HalBench. </summary>
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>Token Density<sup>+</sup></th>
<th>OpenCompass</th>
<th>MME</th>
<th>MMVet</th>
<th>OCRBench</th>
<th>MMMU val</th>
<th>MathVista mini</th>
<th>MMB1.1 test</th>
<th>AI2D</th>
<th>TextVQA val</th>
<th>DocVQA test</th>
<th>HallusionBench</th>
<th>Object HalBench</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="15" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4o</td>
<td>-</td>
<td>1088</td>
<td>69.9</td>
<td>2328.7</td>
<td>69.1</td>
<td>736</td>
<td>69.2</td>
<td>61.3</td>
<td>82.2</td>
<td>84.6</td>
<td>-</td>
<td>92.8</td>
<td>55.0</td>
<td>17.6</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Claude 3.5 Sonnet</td>
<td>-</td>
<td>750</td>
<td>67.9</td>
<td>1920.0</td>
<td>66.0</td>
<td>788</td>
<td>65.9</td>
<td>61.6</td>
<td>78.5</td>
<td>80.2</td>
<td>-</td>
<td>95.2</td>
<td>49.9</td>
<td>13.8</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Gemini 1.5 Pro</td>
<td>-</td>
<td>-</td>
<td>64.4</td>
<td>2110.6</td>
<td>64.0</td>
<td>754</td>
<td>60.6</td>
<td>57.7</td>
<td>73.9</td>
<td>79.1</td>
<td>73.5</td>
<td>86.5</td>
<td>45.6</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4o mini</td>
<td>-</td>
<td>1088</td>
<td>64.1</td>
<td>2003.4</td>
<td>66.9</td>
<td>785</td>
<td>60.0</td>
<td>52.4</td>
<td>76.0</td>
<td>77.8</td>
<td>-</td>
<td>-</td>
<td>46.1</td>
<td>12.4</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4V</td>
<td>-</td>
<td>1088</td>
<td>63.5</td>
<td>2070.2</td>
<td>67.5</td>
<td>656</td>
<td>61.7</td>
<td>54.7</td>
<td>79.8</td>
<td>78.6</td>
<td>78.0</td>
<td>87.2</td>
<td>43.9</td>
<td>14.2</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Step-1V</td>
<td>-</td>
<td>-</td>
<td>59.5</td>
<td>2206.4</td>
<td>63.3</td>
<td>625</td>
<td>49.9</td>
<td>44.8</td>
<td>78.0</td>
<td>79.2</td>
<td>71.6</td>
<td>-</td>
<td>48.4</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen-VL-Max</td>
<td>-</td>
<td>784</td>
<td>58.3</td>
<td>2281.7</td>
<td>61.8</td>
<td>684</td>
<td>52.0</td>
<td>43.4</td>
<td>74.6</td>
<td>75.7</td>
<td>79.5</td>
<td>93.1</td>
<td>41.2</td>
<td>13.4</td>
</tr>
<tr>
<td colspan="15" align="left"><strong>Open-source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-NeXT-Yi-34B</td>
<td>34B</td>
<td>157</td>
<td>55.0</td>
<td>2006.5</td>
<td>50.7</td>
<td>574</td>
<td>48.8</td>
<td>40.4</td>
<td>77.8</td>
<td>78.9</td>
<td>69.3</td>
<td>-</td>
<td>34.8</td>
<td>12.6</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Mini-Gemini-HD-34B</td>
<td>34B</td>
<td>157</td>
<td>-</td>
<td>2141.0</td>
<td>59.3</td>
<td>518</td>
<td>48.0</td>
<td>43.3</td>
<td>-</td>
<td>80.5</td>
<td>74.1</td>
<td>78.9</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Cambrian-34B</td>
<td>34B</td>
<td>1820</td>
<td>58.3</td>
<td>2049.9</td>
<td>53.2</td>
<td>591</td>
<td>50.4</td>
<td>50.3</td>
<td>77.8</td>
<td>79.5</td>
<td>76.7</td>
<td>75.5</td>
<td>41.6</td>
<td>14.7</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GLM-4V-9B</td>
<td>13B</td>
<td>784</td>
<td>59.1</td>
<td>2018.8</td>
<td>58.0</td>
<td>776</td>
<td>46.9</td>
<td>51.1</td>
<td>67.9</td>
<td>71.2</td>
<td>-</td>
<td>-</td>
<td>45.0</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternVL2-8B</td>
<td>8B</td>
<td>706</td>
<td>64.1</td>
<td>2215.1</td>
<td>54.3</td>
<td>794</td>
<td><strong>51.2</strong></td>
<td>58.3</td>
<td><strong>79.4</strong></td>
<td><strong>83.6</strong></td>
<td>77.4</td>
<td><strong>91.6</strong></td>
<td>45.0</td>
<td>21.3</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MiniCPM-Llama-V 2.5</td>
<td>8B</td>
<td>1882</td>
<td>58.8</td>
<td>2024.6</td>
<td>52.8</td>
<td>725</td>
<td>45.8</td>
<td>54.3</td>
<td>72.0</td>
<td>78.4</td>
<td>76.6</td>
<td>84.8</td>
<td>42.4</td>
<td>10.3</td>
</tr>
<tr style="background-color: #e6f2ff;">
<td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
<td>8B</td>
<td><strong>2822</strong></td>
<td><strong>65.2</strong></td>
<td><strong>2348.4</strong>*</td>
<td><strong>60.0</strong></td>
<td><strong>852</strong>*</td>
<td>49.8*</td>
<td><strong>60.6</strong></td>
<td>78.0</td>
<td>82.1</td>
<td><strong>80.1</strong></td>
<td>90.8</td>
<td><strong>48.1</strong>*</td>
<td><strong>8.2</strong></td>
</tr>
</tbody>
</table>
</div>
* We evaluate this benchmark using chain-of-thought prompting. Specifically, for MME, we used this technique only for the Cognition set.
<sup>+</sup> Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens. A short worked example follows this table.
Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation.
</details>
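As a quick sanity check on the Token Density definition above, the sketch below recomputes the MiniCPM-V 2.6 entry from the figures stated in the feature list (a 1.8M-pixel image, e.g. 1344x1344, encoded into 640 visual tokens).

```python
# Worked example of the Token Density definition (illustration only).
# MiniCPM-V 2.6 encodes a maximum-resolution 1.8M-pixel image (e.g., 1344x1344)
# into 640 visual tokens, as stated in the feature list above.
max_pixels = 1344 * 1344          # 1,806,336 pixels at maximum resolution
visual_tokens = 640               # visual tokens produced for that image
token_density = max_pixels / visual_tokens
print(round(token_density))       # 2822, matching the MiniCPM-V 2.6 entry in the table
```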
<details>
<summary>Click to view multi-image results on Mantis Eval, BLINK, Mathverse mv, Sciverse mv, MIRB.</summary>
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>Mantis Eval</th>
<th>BLINK val</th>
<th>Mathverse mv</th>
<th>Sciverse mv</th>
<th>MIRB</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="7" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4V</td>
<td>-</td>
<td>62.7</td>
<td>54.6</td>
<td>60.3</td>
<td>66.9</td>
<td>53.1</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-NeXT-Interleave-14B</td>
<td>14B</td>
<td>66.4</td>
<td>52.6</td>
<td>32.7</td>
<td>30.2</td>
<td>-</td>
</tr>
<tr>
<td colspan="7" align="left"><strong>Open-source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Emu2-Chat</td>
<td>37B</td>
<td>37.8</td>
<td>36.2</td>
<td>-</td>
<td>27.2</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">CogVLM</td>
<td>17B</td>
<td>45.2</td>
<td>41.1</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">VPG-C</td>
<td>7B</td>
<td>52.4</td>
<td>43.1</td>
<td>24.3</td>
<td>23.1</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">VILA 8B</td>
<td>8B</td>
<td>51.2</td>
<td>39.3</td>
<td>-</td>
<td>36.5</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternLM-XComposer-2.5</td>
<td>8B</td>
<td>53.1*</td>
<td>48.9</td>
<td>32.1*</td>
<td>-</td>
<td>42.5</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternVL2-8B</td>
<td>8B</td>
<td>59.0*</td>
<td>50.9</td>
<td>30.5*</td>
<td>34.4*</td>
<td><strong>56.9*</strong></td>
</tr>
<tr style="background-color: #e6f2ff;">
<td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
<td>8B</td>
<td><strong>69.1</strong></td>
<td><strong>53.0</strong></td>
<td><strong>84.9</strong></td>
<td><strong>74.9</strong></td>
<td>53.8</td>
</tr>
</tbody>
</table>
</div>
* We evaluate the officially released checkpoint by ourselves.
</details>
<details>
<summary>Click to view video results on Video-MME and Video-ChatGPT.</summary>
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th colspan="2">Video-MME</th>
<th colspan="5">Video-ChatGPT</th>
</tr>
<tr>
<th align="left"></th>
<th></th>
<th>w/o subs</th>
<th>w subs</th>
<th>Correctness</th>
<th>Detail</th>
<th>Context</th>
<th>Temporal</th>
<th>Consistency</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="9" align="left"><strong>Proprietary</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Claude 3.5 Sonnet</td>
<td>-</td>
<td>60.0</td>
<td>62.9</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4V</td>
<td>-</td>
<td>59.9</td>
<td>63.3</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td colspan="9" align="left"><strong>Open-source</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-NeXT-7B</td>
<td>7B</td>
<td>-</td>
<td>-</td>
<td>3.39</td>
<td>3.29</td>
<td>3.92</td>
<td>2.60</td>
<td>3.12</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-NeXT-34B</td>
<td>34B</td>
<td>-</td>
<td>-</td>
<td>3.29</td>
<td>3.23</td>
<td>3.83</td>
<td>2.51</td>
<td>3.47</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">CogVLM2-Video</td>
<td>12B</td>
<td>-</td>
<td>-</td>
<td>3.49</td>
<td><strong>3.46</strong></td>
<td>3.23</td>
<td><strong>2.98</strong></td>
<td><strong>3.64</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LongVA</td>
<td>7B</td>
<td>52.4</td>
<td>54.3</td>
<td>3.05</td>
<td>3.09</td>
<td>3.77</td>
<td>2.44</td>
<td><strong>3.64</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternVL2-8B</td>
<td>8B</td>
<td>54.0</td>
<td>56.9</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">InternLM-XComposer-2.5</td>
<td>8B</td>
<td>55.8</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left">LLaVA-NeXT-Video</td>
<td>32B</td>
<td>60.2</td>
<td>63.0</td>
<td>3.48</td>
<td>3.37</td>
<td><strong>3.95</strong></td>
<td>2.64</td>
<td>3.28</td>
</tr>
<tr style="background-color: #e6f2ff;">
<td nowrap="nowrap" align="left">MiniCPM-V 2.6</td>
<td>8B</td>
<td><strong>60.9</strong></td>
<td><strong>63.6</strong></td>
<td><strong>3.59</strong></td>
<td>3.28</td>
<td>3.93</td>
<td>2.73</td>
<td>3.62</td>
</tr>
</tbody>
</table>
</div>
</details>
<details>
<summary>Click to view few-shot results on TextVQA, VizWiz, VQAv2, OK-VQA.</summary>
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>Shot</th>
<th>TextVQA val</th>
<th>VizWiz test-dev</th>
<th>VQAv2 test-dev</th>
<th>OK-VQA val</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td align="left" nowrap="nowrap" rowspan="3">Flamingo</td>
<td rowspan="3">80B</td>
<td>0*</td>
<td>35.0</td>
<td>31.6</td>
<td>56.3</td>
<td>40.6</td>
</tr>
<tr>
<td>4</td>
<td>36.5</td>
<td>39.6</td>
<td>63.1</td>
<td><strong>57.4</strong></td>
</tr>
<tr>
<td>8</td>
<td>37.3</td>
<td>44.8</td>
<td>65.6</td>
<td>57.5</td>
</tr>
<tr>
<td align="left" nowrap="nowrap" rowspan="3">IDEFICS</td>
<td rowspan="3">80B</td>
<td>0*</td>
<td>30.9</td>
<td>36.0</td>
<td>60.0</td>
<td>45.2</td>
</tr>
<tr>
<td>4</td>
<td>34.3</td>
<td>40.4</td>
<td>63.6</td>
<td>52.4</td>
</tr>
<tr>
<td>8</td>
<td>35.7</td>
<td>46.1</td>
<td>64.8</td>
<td>55.1</td>
</tr>
<tr>
<td align="left" nowrap="nowrap" rowspan="3">OmniCorpus</td>
<td rowspan="3">7B</td>
<td>0*</td>
<td>43.0</td>
<td>49.8</td>
<td>63.2</td>
<td>45.5</td>
</tr>
<tr>
<td>4</td>
<td>45.4</td>
<td>51.3</td>
<td>64.5</td>
<td>46.5</td>
</tr>
<tr>
<td>8</td>
<td>45.6</td>
<td>52.2</td>
<td>64.7</td>
<td>46.6</td>
</tr>
<tr>
<td align="left" nowrap="nowrap" rowspan="3">Emu2</td>
<td rowspan="3">37B</td>
<td>0</td>
<td>26.4</td>
<td>40.4</td>
<td>33.5</td>
<td>26.7</td>
</tr>
<tr>
<td>4</td>
<td>48.2</td>
<td>54.6</td>
<td>67.0</td>
<td>53.2</td>
</tr>
<tr>
<td>8</td>
<td>49.3</td>
<td>54.7</td>
<td>67.8</td>
<td>54.1</td>
</tr>
<tr>
<td align="left" nowrap="nowrap" rowspan="2">MM1</td>
<td rowspan="2">30B</td>
<td>0</td>
<td>26.2</td>
<td>40.4</td>
<td>48.9</td>
<td>26.7</td>
</tr>
<tr>
<td>8</td>
<td>49.3</td>
<td>54.7</td>
<td><strong>70.9</strong></td>
<td>54.1</td>
</tr>
<tr style="background-color: #e6f2ff;">
<td align="left" nowrap="nowrap" rowspan="3">MiniCPM-V 2.6<sup>+</sup></td>
<td rowspan="3">8B</td>
<td>0</td>
<td>43.9</td>
<td>33.8</td>
<td>45.4</td>
<td>23.9</td>
</tr>
<tr style="background-color: #e6f2ff;">
<td>4</td>
<td>63.6</td>
<td>60.5</td>
<td>65.5</td>
<td>50.1</td>
</tr>
<tr style="background-color: #e6f2ff;">
<td>8</td>
<td><strong>64.6</strong></td>
<td><strong>63.4</strong></td>
<td>68.2</td>
<td>51.4</td>
</tr>
</tbody>
</table>
</div>
* denotes zero image shot and two additional text shots following Flamingo.
<sup>+</sup> We evaluate the pretraining ckpt without SFT.
</details>
### Examples <!-- omit in toc -->
<div style="display: flex; flex-direction: column; align-items: center;">
<img src="../assets/minicpmv2_6/multi_img-bike.png" alt="Bike" style="margin-bottom: 5px;">
<img src="../assets/minicpmv2_6/multi_img-menu.png" alt="Menu" style="margin-bottom: 5px;">
<img src="../assets/minicpmv2_6/multi_img-code.png" alt="Code" style="margin-bottom: 5px;">
<img src="../assets/minicpmv2_6/ICL-Mem.png" alt="Mem" style="margin-bottom: 5px;">
<img src="../assets/minicpmv2_6/multiling-medal.png" alt="medal" style="margin-bottom: 10px;">
</div>
<details>
<summary>Click to view more cases.</summary>
<div style="display: flex; flex-direction: column; align-items: center;">
<img src="../assets/minicpmv2_6/ICL-elec.png" alt="elec" style="margin-bottom: 5px;">
<img src="../assets/minicpmv2_6/multiling-olympic.png" alt="Menu" style="margin-bottom: 10px;">
</div>
</details>
We deploy MiniCPM-V 2.6 on end devices. The demo video is a raw screen recording on an iPad Pro without editing.
<table align="center">
<p align="center">
<img src="../assets/gif_cases/ai.gif" width=32%/>
&nbsp;&nbsp;&nbsp;&nbsp;
<img src="../assets/gif_cases/beer.gif" width=32%/>
</p>
</table>
<table align="center">
<p align="center">
<img src="../assets/gif_cases/ticket.gif" width=32%/>
&nbsp;&nbsp;&nbsp;&nbsp;
<img src="../assets/gif_cases/wfh.gif" width=32%/>
</p>
</table>
<table align="center">
<p align="center">
<video src="https://github.com/user-attachments/assets/21f4b818-ede1-4822-920e-91281725c830" width="360" /> </video>
<!-- <video src="https://github.com/user-attachments/assets/c835f757-206b-4d9c-8e36-70d67b453628" width="360" /> </video> -->
</p>
</table>
### Multi-turn Conversation
<div align="center">
<img src="../assets/airplane.jpeg" width="500px">
</div>
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
torch.manual_seed(0)
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
image = Image.open('./assets/airplane.jpeg').convert('RGB')
# First round chat
question = "Tell me the model of this aircraft."
msgs = [{'role': 'user', 'content': [image, question]}]
answer = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
# Second round chat
# pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": [answer]})
msgs.append({"role": "user", "content": ["Introduce something about Airbus A380."]})
answer = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
You could get the following output:
```
"The aircraft in the image is an Airbus A380, which can be identified by its large size, double-deck structure, and the distinctive shape of its wings and engines. The A380 is a wide-body aircraft known for being the world's largest passenger airliner, designed for long-haul flights. It has four engines, which are characteristic of large commercial aircraft. The registration number on the aircraft can also provide specific information about the model if looked up in an aviation database."
"The Airbus A380 is a double-deck, wide-body, four-engine jet airliner made by Airbus. It is the world's largest passenger airliner and is known for its long-haul capabilities. The aircraft was developed to improve efficiency and comfort for passengers traveling over long distances. It has two full-length passenger decks, which can accommodate more passengers than a typical single-aisle airplane. The A380 has been operated by airlines such as Lufthansa, Singapore Airlines, and Emirates, among others. It is widely recognized for its unique design and significant impact on the aviation industry."
```
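If you prefer token-by-token output, a streaming variant is sketched below. It assumes the `stream=True` option described on the MiniCPM-V 2.6 model card, under which `model.chat()` returns a generator of text chunks; verify against the card before relying on it.

```python
# Streaming sketch (an assumption based on the MiniCPM-V 2.6 model card, not this doc):
# with sampling=True and stream=True, model.chat() yields text chunks as they are generated.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
                                  attn_implementation='sdpa', torch_dtype=torch.bfloat16)
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)

image = Image.open('./assets/airplane.jpeg').convert('RGB')
msgs = [{'role': 'user', 'content': [image, "Tell me the model of this aircraft."]}]

res = model.chat(image=None, msgs=msgs, tokenizer=tokenizer,
                 sampling=True, stream=True)

generated_text = ""
for new_text in res:              # print chunks as soon as they arrive
    generated_text += new_text
    print(new_text, flush=True, end='')
```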
#### Multi-image Understanding
<details>
<summary> Click to view Python example of MiniCPM-V 2.6 multi-image understanding </summary>
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
image1 = Image.open('image1.jpg').convert('RGB')
image2 = Image.open('image2.jpg').convert('RGB')
question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'
msgs = [{'role': 'user', 'content': [image1, image2, question]}]
answer = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
</details>
#### Few-shot In-Context-Learning
<details>
<summary> Click to view Python example of MiniCPM-V 2.6 few-shot in-context-learning example </summary>
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
question = "production date"
image1 = Image.open('example1.jpg').convert('RGB')
answer1 = "2023.08.04"
image2 = Image.open('example2.jpg').convert('RGB')
answer2 = "2007.04.24"
image_test = Image.open('test.jpg').convert('RGB')
msgs = [
{'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
{'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
{'role': 'user', 'content': [image_test, question]}
]
answer = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer
)
print(answer)
```
</details>
#### Video Understanding
<details>
<summary> Click to view Python example of MiniCPM-V 2.6 video understanding </summary>
```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
from decord import VideoReader, cpu # pip install decord
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
model = model.eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
MAX_NUM_FRAMES = 64  # if CUDA OOM, set a smaller number
def encode_video(video_path):
    def uniform_sample(l, n):
        gap = len(l) / n
        idxs = [int(i * gap + gap / 2) for i in range(n)]
        return [l[i] for i in idxs]

    vr = VideoReader(video_path, ctx=cpu(0))
    sample_fps = round(vr.get_avg_fps() / 1)  # frame stride for ~1 FPS sampling
    frame_idx = [i for i in range(0, len(vr), sample_fps)]
    if len(frame_idx) > MAX_NUM_FRAMES:
        frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
    frames = vr.get_batch(frame_idx).asnumpy()
    frames = [Image.fromarray(v.astype('uint8')) for v in frames]
    print('num frames:', len(frames))
    return frames
video_path="video_test.mp4"
frames = encode_video(video_path)
question = "Describe the video"
msgs = [
{'role': 'user', 'content': frames + [question]},
]
# Set decode params for video
params = {}
params["use_image_id"] = False
params["max_slice_nums"] = 2 # 如果cuda OOM且视频分辨率大于448*448可设为1
answer = model.chat(
image=None,
msgs=msgs,
tokenizer=tokenizer,
**params
)
print(answer)
```
</details>

docs/omnilmm.md (new file, 183 lines added)

@@ -0,0 +1,183 @@
## OmniLMM-12B
> OmniLMM-12B was released in the early days of this project. We recommend using our [most recently released models](./README_zh.md) for more efficient inference and stronger performance.
> Archive at: 2024-05-19
**OmniLMM-12B** is the most capable version in this series. The model is built on EVA02-5B and Zephyr-7B-β, connected with a perceiver resampler, and trained on multimodal data in a curriculum fashion. The model has three notable features:
- 🔥 **Strong Performance.**
OmniLMM-12B achieves **leading performance** among models of comparable size on multiple benchmarks (including MME, MMBench, SEED-Bench, etc.), and possesses rich multimodal world knowledge.
- 🏆 **Trustworthy Behavior.**
Hallucination is a well-known problem of multimodal LLMs: models often generate text that is not factually grounded in images, e.g., confidently describing objects that do not exist in the image. OmniLMM-12B is **the first capable open-source multimodal LLM aligned via multimodal RLHF for trustworthy behavior** (using the [RLHF-V](https://rlhf-v.github.io/) [CVPR'24] series technique). It achieves **the best level among open-source models** on the [MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench) hallucination benchmark and **outperforms GPT-4V** on [Object HalBench](https://arxiv.org/abs/2312.00849).
- 🕹 **Real-time Multimodal Interaction.**
We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The model accepts video streams from the camera and uses tools to handle speech input and output. While still preliminary, we find it can **replicate some of the fun cases in the Gemini demo video without any video editing**.
### Evaluation <!-- omit in toc -->
<div align="center">
<img src=assets/radar_omnilmm12b.png width=66% />
</div>
<details>
<summary>Click to view detailed results on MME, MMBench, MMMU, MMHal-Bench, Object HalBench, SeedBench-I, LLaVA Bench, MathVista.</summary>
<table>
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>MME</th>
<th nowrap="nowrap">MMB dev (en)</th>
<th nowrap="nowrap" >MMMU val</th>
<th nowrap="nowrap" >MMHal-Bench</th>
<th nowrap="nowrap" >Object HalBench</th>
<th nowrap="nowrap" >SeedBench-I</th>
<th>MathVista</th>
<th nowrap="nowrap" >LLaVA Bench</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td align="left">GPT-4V†</td>
<td>-</td>
<td>1771.5</td>
<td>75.1 </td>
<td>56.8</td>
<td>3.53 / 70.8</td>
<td>86.4 / 92.7</td>
<td>71.6 </td>
<td>47.8 </td>
<td>93.1 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen-VL-Plus†</td>
<td>-</td>
<td>2183.4</td>
<td>66.2 </td>
<td>45.2</td>
<td>- </td>
<td>- </td>
<td>65.7 </td>
<td>36.0 </td>
<td>73.7 </td>
</tr>
<tr>
<td align="left">Yi-VL 6B</td>
<td align="right">6.7B </td>
<td>1915.1 </td>
<td>68.6 </td>
<td>40.3 </td>
<td>- </td>
<td>- </td>
<td>67.5 </td>
<td>28.8 </td>
<td>51.9 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen-VL-Chat</td>
<td align="right">9.6B</td>
<td>1860.0</td>
<td>60.6 </td>
<td>35.9</td>
<td>2.93 / 59.4</td>
<td>56.2 / 80.0</td>
<td>64.8 </td>
<td>33.8 </td>
<td>67.7 </td>
</tr>
<tr>
<td align="left" >CogVLM-Chat</td>
<td align="right">17.4B</td>
<td>1736.6</td>
<td>63.7 </td>
<td>32.1 </td>
<td>2.68 / 52.1 </td>
<td>73.6 / 87.4 </td>
<td>68.8 </td>
<td>34.7 </td>
<td>73.9 </td>
</tr>
<tr>
<td align="left" >LLaVA 1.5</td>
<td align="right">13.6B </td>
<td>1808.4 </td>
<td>68.2 </td>
<td>36.4 </td>
<td>2.71 / 51.0 </td>
<td>53.7 / 77.4 </td>
<td>68.1 </td>
<td>26.4 </td>
<td>64.6 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>OmniLMM-12B</b></td>
<td align="right">11.6B </td>
<td>1935.8 </td>
<td>71.6 </td>
<td>40.7 </td>
<td>3.45 / 68.8 </td>
<td>90.3 / 95.5 </td>
<td>71.1 </td>
<td>34.9 </td>
<td>72.0 </td>
</tr>
</tbody>
</table>
<small>†: Proprietary models</small>
<br>
</details>
### Examples <!-- omit in toc -->
<table align="center" >
<p align="center" >
<img src="assets/omnilmm-12b-examples_2.png" />
</p>
</table>
We combine OmniLMM-12B and ChatGPT-3.5 (text-only) to build a **real-time multimodal interactive assistant**. OmniLMM-12B converts video frames into image descriptions, which are fed to ChatGPT-3.5 to generate responses to user instructions. The demo video is unedited.
<div align="center" >
<video controls src="https://github.com/OpenBMB/OmniLMM/assets/157115220/8fec13bf-bb47-4bf8-8f8c-d0b716a964ec" type="video/mp4" width=80%/>
</div>
## Online Demo
You are welcome to try our online web demos: [OmniLMM-12B](http://120.92.209.146:8081), [MiniCPM-V 2.0](http://120.92.209.146:80).
## Install
1. Clone this repository and navigate to the source folder
```bash
git clone https://github.com/OpenBMB/MiniCPM-V.git
cd MiniCPM-V
```
2. Create a conda environment
```Shell
conda create -n MiniCPMV python=3.10 -y
conda activate MiniCPMV
```
3. Install dependencies
```shell
pip install -r requirements.txt
```
## Inference
### Model Zoo
| Model | Description | Download Link |
|:----------------------|:-------------------|:---------------:|
| OmniLMM-12B | The most capable version. | [🤗](https://huggingface.co/openbmb/OmniLMM-12B) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/OmniLMM-12B/files) |

docs/omnilmm_en.md (new file, 155 lines added)

@@ -0,0 +1,155 @@
## OmniLMM-12B
> OmniLMM-12B was released in the early days of this project. We recommend using our [recently released models](./README.md) for better performance and efficiency.
> Archive at: 2024-05-19
**OmniLMM-12B** is the most capable version. The model is built based on EVA02-5B and Zephyr-7B-β, connected with a perceiver resampler layer, and trained on multimodal data in a curriculum fashion. The model has three notable features:
- 🔥 **Strong Performance.**
OmniLMM-12B achieves **leading performance** among models of comparable size, surpassing established LMMs on multiple benchmarks (including MME, MMBench, SEED-Bench, etc.). The model also possesses rich multimodal world knowledge.
- 🏆 **Trustworthy Behavior.**
LMMs are known for suffering from hallucination, often generating text that is not factually grounded in images (e.g., confidently describing non-existent objects in images). OmniLMM-12B is **the first state-of-the-art open-source LMM aligned via multimodal RLHF for trustworthy behavior** (using the recent [RLHF-V](https://rlhf-v.github.io/) technique). It **ranks #1** among open-source models on [MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), and **outperforms GPT-4V** on [Object HalBench](https://arxiv.org/abs/2312.00849).
- 🕹 **Real-time Multimodal Interaction.**
We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and speech streams from the microphone and emits speech output. While still preliminary, we find the model can **replicate some of the fun cases shown in the Gemini demo video, without any video editing**.
### Evaluation <!-- omit in toc -->
<div align="center">
<img src=assets/radar_omnilmm12b.png width=66% />
</div>
<details>
<summary>Click to view results on MME, MMBench, MMMU, MMHal-Bench, Object HalBench, SeedBench-I, LLaVA Bench, MathVista.</summary>
<table>
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>MME</th>
<th nowrap="nowrap">MMB dev (en)</th>
<th nowrap="nowrap" >MMMU val</th>
<th nowrap="nowrap" >MMHal-Bench</th>
<th nowrap="nowrap" >Object HalBench</th>
<th nowrap="nowrap" >SeedBench-I</th>
<th>MathVista</th>
<th nowrap="nowrap" >LLaVA Bench</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td align="left">GPT-4V†</td>
<td>-</td>
<td>1771.5</td>
<td>75.1 </td>
<td>56.8</td>
<td>3.53 / 70.8</td>
<td>86.4 / 92.7</td>
<td>71.6 </td>
<td>47.8 </td>
<td>93.1 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Qwen-VL-Plus†</td>
<td>-</td>
<td>2183.4</td>
<td>66.2 </td>
<td>45.2</td>
<td>- </td>
<td>- </td>
<td>65.7 </td>
<td>36.0 </td>
<td>73.7 </td>
</tr>
<tr>
<td align="left">Yi-VL 6B</td>
<td align="right">6.7B </td>
<td>1915.1 </td>
<td>68.6 </td>
<td>40.3 </td>
<td>- </td>
<td>- </td>
<td>67.5 </td>
<td>28.8 </td>
<td>51.9 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen-VL-Chat</td>
<td align="right">9.6B</td>
<td>1860.0</td>
<td>60.6 </td>
<td>35.9</td>
<td>2.93 / 59.4</td>
<td>56.2 / 80.0</td>
<td>64.8 </td>
<td>33.8 </td>
<td>67.7 </td>
</tr>
<tr>
<td align="left" >CogVLM-Chat</td>
<td align="right">17.4B</td>
<td>1736.6</td>
<td>63.7 </td>
<td>32.1 </td>
<td>2.68 / 52.1 </td>
<td>73.6 / 87.4 </td>
<td>68.8 </td>
<td>34.7 </td>
<td>73.9 </td>
</tr>
<tr>
<td align="left" >LLaVA 1.5</td>
<td align="right">13.6B </td>
<td>1808.4 </td>
<td>68.2 </td>
<td>36.4 </td>
<td>2.71 / 51.0 </td>
<td>53.7 / 77.4 </td>
<td>68.1 </td>
<td>26.4 </td>
<td>64.6 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>OmniLMM-12B</b></td>
<td align="right">11.6B </td>
<td>1935.8 </td>
<td>71.6 </td>
<td>40.7 </td>
<td>3.45 / 68.8 </td>
<td>90.3 / 95.5 </td>
<td>71.1 </td>
<td>34.9 </td>
<td>72.0 </td>
</tr>
</tbody>
</table>
<small>†: Proprietary models</small>
<br>
</details>
### Examples <!-- omit in toc -->
<table align="center" >
<p align="center" >
<img src="assets/omnilmm-12b-examples_2.png" />
</p>
</table>
We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. Video frames are described in text using OmniLMM-12B, and ChatGPT-3.5 (text-only) is employed to generate responses according to the descriptions and user prompts. The demo video is a raw recording without editing.
<div align="center" >
<video controls src="https://github.com/OpenBMB/OmniLMM/assets/157115220/485a8f52-fb4d-4eca-8fee-506347efcfc6" type="video/mp4" width=80%/>
</div>
### Model Zoo
| Model | Description | Download Link |
|:----------------------|:-------------------|:---------------:|
| OmniLMM-12B | The most capable version with leading performance. | [🤗](https://huggingface.co/openbmb/OmniLMM-12B) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/OmniLMM-12B/files) |
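Although this page does not include an inference snippet, OmniLMM-12B is expected to work with the same `chat.py` wrapper shown for MiniCPM-V 1.0 earlier in this commit; the sketch below assumes that interface (`OmniLMMChat`, `img2base64`) and a local test image.

```python
# Minimal sketch, assuming the chat.py wrapper shown for MiniCPM-V 1.0 above also
# accepts the OmniLMM-12B checkpoint; confirm against the repository README.
import json

from chat import OmniLMMChat, img2base64

chat_model = OmniLMMChat('openbmb/OmniLMM-12B')
im_64 = img2base64('./assets/worldmap_ck.jpg')  # any local image

msgs = [{"role": "user", "content": "What is interesting about this image?"}]
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
print(answer)
```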