feat(minicpm-v): update MiniCPM-V 2.0

wangchongyi
2024-04-11 12:55:59 +08:00
parent ef46843319
commit 7979f99dd4
13 changed files with 908 additions and 343 deletions

README.md

<div align="center">
<!-- <h1 style="color: #33A6B8; font-family: Helvetica"> OmniLMM </h1> -->
<img src="./assets/title-2.png" width="200em" ></img>
<img src="./assets/minicpmv-omnilmm.png" width="400em" ></img>
**Large multi-modal models for strong performance and efficient deployment**
<p align="center">
MiniCPM-V 2.0 <a href="https://huggingface.co/openbmb/MiniCPM-V-2.0/">🤗</a> <a href="http://120.92.209.146:80/">🤖</a> |
OmniLMM-12B <a href="https://huggingface.co/openbmb/OmniLMM-12B/">🤗</a> <a href="http://120.92.209.146:8081">🤖</a>
</p>
</div>
**MiniCPM-V** and **OmniLMM** are a family of open-source large multimodal models (LMMs) adept at vision & language modeling. The models process image and text inputs and deliver high-quality text outputs. We release the following featured versions, targeted at **strong performance and efficient deployment**:
- **MiniCPM-V 2.8B**: State-of-the-art end-side large multimodal models. Our latest MiniCPM-V 2.0 can accept 1.8 million pixels (e.g., 1344x1344) images at any aspect ratio, and is adept at OCR capability. It achieves comparable performance with Gemini Pro in understanding scene-text and matches GPT-4V in preventing hallucinations.
- **OmniLMM-12B**: The most capable version, with leading performance among comparable-sized models on multiple benchmarks. It also achieves state-of-the-art performance in trustworthy behaviors, with even less hallucination than GPT-4V.
- **OmniLMM-3B** (i.e., MiniCPM-V 1.0): A frontier end-device model for multimodal conversation with promising performance.
[中文文档](./README_zh.md)
## Contents <!-- omit in toc -->
- [MiniCPM-V 2.8B](#minicpm-v-28b)
- [OmniLMM-12B](#omnilmm-12b)
- [Evaluation](#evaluation)
- [Examples](#examples)
- [OmniLMM-3B](#omnilmm-3b)
- [Evaluation](#evaluation-1)
- [Examples](#examples-1)
- [Demo](#demo)
- [Install](#install)
- [Inference](#inference)
- [Inference on Mac](#inference-on-mac)
- [Deployment on Mobile Phone](#deployment-on-mobile-phone)
- [TODO](#todo)
- [Model License](#model-license)
- [Statement](#statement)
- [Institutions](#institutions)
## MiniCPM-V 2.8B
**MiniCPM-V 2.8B** is an efficient version with promising performance for deployment. The model is built on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Our latest version, MiniCPM-V 2.0, has several notable features.
- 🔥 **State-of-the-art Performance.**
MiniCPM-V 2.0 achieves **state-of-the-art performance** on multiple benchmarks (including OCRBench, TextVQA, MME, MMB, MathVista, etc) among models under 7B parameters. It even **outperforms strong Qwen-VL-Chat 9.6B, CogVLM-Chat 17.4B, and Yi-VL 34B on OpenCompass, a comprehensive evaluation over 11 popular benchmarks**. Notably, MiniCPM-V 2.0 shows **strong OCR capability**, achieving **comparable performance to Gemini Pro in scene-text understanding**, and **state-of-the-art performance on OCRBench** among open-source models.
- 🏆 **Trustworthy Behavior.**
LMMs are known for suffering from hallucination, often generating text not factually grounded in images. MiniCPM-V 2.0 is **the first end-side LMM aligned via multimodal RLHF for trustworthy behavior** (using the recent [RLHF-V](https://rlhf-v.github.io/) [CVPR'24] series technique). This allows the model to **match GPT-4V in preventing hallucinations** on Object HalBench.
- 🌟 **High-Resolution Images at Any Aspect Ratio.**
MiniCPM-V 2.0 can accept images of **up to 1.8 million pixels (e.g., 1344x1344) at any aspect ratio**. This enables better perception of fine-grained visual information such as small objects and optical characters, and is achieved via a recent technique from [LLaVA-UHD](https://arxiv.org/pdf/2403.11703.pdf).
- ⚡️ **High Efficiency.**
MiniCPM-V 2.0 can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into much fewer tokens via a perceiver resampler (see the sketch right after this list). This allows MiniCPM-V 2.0 to operate with **favorable memory cost and speed during inference, even when dealing with high-resolution images**.
- 🙌 **Bilingual Support.**
MiniCPM-V 2.0 **supports strong bilingual multimodal capabilities in both English and Chinese**. This is enabled by generalizing multimodal capabilities across languages, a technique from [VisCPM](https://arxiv.org/abs/2308.12038) [ICLR'24].
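
To make the token compression above concrete, here is a minimal, self-contained PyTorch sketch of a perceiver-style resampler: a small set of learned query tokens cross-attends to the vision encoder's patch features, so the language model always receives a fixed, small number of visual tokens regardless of image resolution. The class name, dimensions (1152-d features, 64 queries), and single-layer layout are illustrative assumptions and do not reproduce the actual MiniCPM-V module.

```python
import torch
import torch.nn as nn

class PerceiverResampler(nn.Module):
    """Compress a variable number of image-patch features into a fixed,
    small set of visual tokens via cross-attention with learned queries.
    Illustrative sketch only; sizes are assumptions, not MiniCPM-V's."""

    def __init__(self, dim=1152, num_queries=64, num_heads=8):
        super().__init__()
        # Learned query tokens: these keep the output length fixed at num_queries.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln_q = nn.LayerNorm(dim)
        self.ln_kv = nn.LayerNorm(dim)
        self.proj = nn.Linear(dim, dim)  # map into the LLM embedding space

    def forward(self, patch_features):
        # patch_features: (batch, num_patches, dim) from the vision encoder
        b = patch_features.size(0)
        q = self.ln_q(self.queries).unsqueeze(0).expand(b, -1, -1)
        kv = self.ln_kv(patch_features)
        out, _ = self.attn(q, kv, kv)  # (batch, num_queries, dim)
        return self.proj(out)

# A 1344x1344 image split into 14x14 patches yields 9216 patch features;
# the resampler collapses them to 64 visual tokens.
features = torch.randn(1, 9216, 1152)
print(PerceiverResampler()(features).shape)  # torch.Size([1, 64, 1152])
```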
### Evaluation <!-- omit in toc -->
<div align="center">
<img src=assets/minicpmv-2-peformance.png width=66% />
</div>
<details>
<summary>Click to view results on TextVQA, DocVQA, OCRBench, OpenCompass, MME, MMBench, MMMU, MathVista, LLaVA Bench, Object HalBench. </summary>
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>TextVQA val</th>
<th>DocVQA test</th>
<th>OCRBench</th>
<th>OpenCompass</th>
<th nowrap="nowrap" >MME</th>
<th>MMB dev(en)</th>
<th>MMB dev(zh)</th>
<th>MMMU val</th>
<th>MathVista</th>
<th>LLaVA Bench</th>
<th nowrap="nowrap">Object HalBench</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td colspan="12" align="left"><strong>Proprietary models</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left">Gemini Pro Vision</td>
<td>- </td>
<td>74.6</td>
<td>88.1</td>
<td>680</td>
<td>63.8</td>
<td>2148.9</td>
<td>75.2</td>
<td>74.0</td>
<td>48.9</td>
<td>45.8</td>
<td>79.9</td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left">GPT-4V</td>
<td>- </td>
<td>78.0</td>
<td>88.4</td>
<td>645</td>
<td>63.2</td>
<td>1771.5</td>
<td>75.1</td>
<td>75.0</td>
<td>53.8</td>
<td>47.8</td>
<td>93.1</td>
<td>86.4 / 92.7</td>
</tr>
<tr>
<td colspan="12" align="left"><strong>Open-source models 6B~34B</strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Yi-VL-6B</td>
<td align="right" >6.7B</td>
<td>45.5*</td>
<td>17.1*</td>
<td>290</td>
<td>49.3</td>
<td>1915.1 </td>
<td>68.6 </td>
<td>68.3 </td>
<td>40.3 </td>
<td>28.8 </td>
<td>51.9 </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen-VL-Chat</td>
<td align="right" >9.6B</td>
<td>61.5</td>
<td>62.6</td>
<td>488 </td>
<td>52.1 </td>
<td>1860.0 </td>
<td>60.6 </td>
<td>56.7 </td>
<td>37.0 </td>
<td>33.8 </td>
<td>67.7 </td>
<td>56.2 / 80.0</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Yi-VL-34B</td>
<td align="right" >34B</td>
<td>43.4*</td>
<td>16.9*</td>
<td>290</td>
<td>52.6 </td>
<td>2050.2</td>
<td>71.1</td>
<td>71.4</td>
<td>45.1</td>
<td>30.7</td>
<td>62.3</td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >DeepSeek-VL-7B</td>
<td align="right" >7.3B</td>
<td>64.7*</td>
<td>47.0* </td>
<td>435</td>
<td>55.6 </td>
<td>1765.4 </td>
<td>74.1 </td>
<td>72.8 </td>
<td>38.3 </td>
<td>36.8</td>
<td>77.8 </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >TextMonkey</td>
<td align="right" >9.7B</td>
<td>64.3</td>
<td>66.7 </td>
<td>558</td>
<td>- </td>
<td>- </td>
<td>- </td>
<td>- </td>
<td>- </td>
<td>-</td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >CogVLM-Chat</td>
<td align="right" >17.4B</td>
<td>70.4</td>
<td>33.3*</td>
<td>590 </td>
<td>52.5 </td>
<td>1736.6 </td>
<td>63.7 </td>
<td>53.8 </td>
<td>37.3 </td>
<td>34.7 </td>
<td>73.9 </td>
<td>73.6 / 87.4 </td>
</tr>
<tr>
<td colspan="12" align="left"><strong>Open-source models 1B~3B </strong></td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >DeepSeek-VL-1.3B</td>
<td align="right" >1.7B</td>
<td>58.4*</td>
<td>37.9*</td>
<td>413</td>
<td>46.0 </td>
<td>1531.6 </td>
<td>64.0 </td>
<td>61.2 </td>
<td>33.8 </td>
<td>29.4 </td>
<td>51.1 </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >MobileVLM V2</td>
<td align="right" >3.1B</td>
<td>57.5</td>
<td>19.4*</td>
<td>-</td>
<td>-</td>
<td>1440.5(P) </td>
<td>63.2 </td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Mini-Gemini</td>
<td align="right" >2.2B</td>
<td>56.2</td>
<td>34.2*</td>
<td>-</td>
<td>-</td>
<td>1653.0 </td>
<td>59.8 </td>
<td>- </td>
<td>31.7 </td>
<td>-</td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >MiniCPM-V</td>
<td align="right" >2.8B </td>
<td>60.6</td>
<td>38.2 </td>
<td>366</td>
<td>47.6</td>
<td>1650.2 </td>
<td>67.9 </td>
<td>65.3 </td>
<td><strong>38.3</strong></td>
<td>28.9</td>
<td>51.3 </td>
<td>78.4 / 88.5 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><strong>MiniCPM-V 2.0</strong></td>
<td align="right" >2.8B </td>
<td><strong>74.1</strong></td>
<td><strong>71.9</strong> </td>
<td><strong>605</strong></td>
<td><strong>55.0</strong></td>
<td><strong>1808.6</strong> </td>
<td><strong>69.6</strong> </td>
<td><strong>68.1</strong> </td>
<td>38.2 </td>
<td><strong>38.7</strong></td>
<td><strong>69.2</strong> </td>
<td><strong>85.5 / 92.2 </strong></td>
</tr>
</tbody>
</table>
</div>
* We evaluate the officially released checkpoint by ourselves.
</details>
### Examples <!-- omit in toc -->
<table align="center">
<p align="center">
<img src="assets/minicpmv2-cases_2.png" width=95%/>
</p>
</table>
We deploy MiniCPM-V 2.0 on end devices. The demo video is an unedited screen recording on a Xiaomi 14 Pro.
<table align="center">
<p align="center">
<img src="assets/gif_cases/station.gif" width=36%/>
<img src="assets/gif_cases/english_menu.gif" width=36%/>
</p>
</table>
### MiniCPM-V 1.0 <!-- omit in toc -->
Please see the info about MiniCPM-V 1.0 [here](./minicpm_v1.md).
## OmniLMM-12B
**OmniLMM-12B** is the most capable version. The model is built based on EVA02-5B and Zephyr-7B-β, connected with a perceiver resampler layer, and trained on multimodal data in a curriculum fashion. The model has three notable features:
We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and speech streams from the microphone, and emits speech output. While still preliminary, we find the model can **replicate some of the fun cases shown in the Gemini demo video, without any video editing**.
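
The assistant's code is not yet released (see the TODO list below), so the following is only a rough sketch, under broad assumptions, of the loop described above: capture a frame, have the LMM describe it, and hand the description to a text-only model. It reuses the repo's `OmniLMMChat` wrapper shown later in this README and OpenCV for frame capture; the speech input and output of the real demo are omitted.

```python
# Illustrative sketch only: a minimal frame -> LMM -> text-LLM loop.
# Speech-to-text and text-to-speech from the real demo are omitted.
import base64
import json
import cv2  # pip install opencv-python
from chat import OmniLMMChat  # repo helper shown in the Multi-turn Conversation section

chat_model = OmniLMMChat('openbmb/OmniLMM-12B')
camera = cv2.VideoCapture(0)

def describe_frame(frame):
    # Encode the current camera frame and ask the LMM what it sees.
    ok, jpg = cv2.imencode('.jpg', frame)
    im_64 = base64.b64encode(jpg.tobytes()).decode('utf-8')
    msgs = [{"role": "user", "content": "Describe what is in front of the camera."}]
    return chat_model.chat({"image": im_64, "question": json.dumps(msgs)})

while True:
    ok, frame = camera.read()
    if not ok:
        break
    description = describe_frame(frame)
    # A text-only model (e.g., GPT-3.5) would turn this description plus the
    # user's spoken question into the assistant's reply; here we just print it.
    print(description)
```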
### Evaluation <!-- omit in toc -->
<div align="center">
<img src=assets/radar_omnilmm12b.png width=66% />
</div>
<details>
<summary>Click to view results on MME, MMBench, MMMU, MMHal-Bench, Object HalBench, SeedBench, LLaVA Bench, MathVista. </summary>
<table>
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th>MME</th>
<th nowrap="nowrap">MMB dev (en)</th>
<th nowrap="nowrap">MMMU val</th>
<th nowrap="nowrap">MMHal-Bench</th>
<th nowrap="nowrap" >Object HalBench</th>
<th nowrap="nowrap" >SeedBench-I</th>
<th>MathVista</th>
<th nowrap="nowrap" >LLaVA Bench W</th>
<th nowrap="nowrap" >LLaVA Bench</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td align="left">GPT-4V†</td>
<td>-</td>
<td>1771.5</td>
<td>75.1 </td>
<td>56.8</td>
<td>3.53 / 70.8</td>
<tr>
<td nowrap="nowrap" align="left">Qwen-VL-Plus†</td>
<td>-</td>
<td>2183.4</td>
<td>66.2 </td>
<td>45.2</td>
<td>- </td>
<tr>
<td align="left">Yi-VL 6B</td>
<td align="right">6.7B </td>
<td>1915.1 </td>
<td>68.6 </td>
<td>40.3 </td>
<td>- </td>
<td>- </td>
<td>67.5 </td>
<td>28.8 </td>
<td>51.9 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen-VL-Chat</td>
<td align="right">9.6B</td>
<td>1860.0</td>
<td>60.6 </td>
<td>35.9</td>
<td>2.93 / 59.4</td>
<td>67.7 </td>
</tr>
<tr>
<td align="left" >CogVLM</td>
<td align="left" >CogVLM-Chat</td>
<td align="right">17.4B</td>
<td>1736.6</td>
<td>63.7 </td>
<td>32.1 </td>
<td>2.68 / 52.1 </td>
<tr>
<td align="left" >LLaVA 1.5</td>
<td align="right">13.6B </td>
<td>1808.4 </td>
<td>68.2 </td>
<td>36.4 </td>
<td>2.71 / 51.0 </td>
<tr>
<td nowrap="nowrap" align="left" ><b>OmniLMM-12B</b></td>
<td align="right">11.6B </td>
<td>1935.8 </td>
<td>71.6 </td>
<td>40.7 </td>
<td>3.45 / 68.8 </td>
<br>
</details>
### Examples <!-- omit in toc -->
<table align="center" >
<p align="center" >
<video controls src="https://github.com/OpenBMB/OmniLMM/assets/157115220/485a8f52-fb4d-4eca-8fee-506347efcfc6" type="video/mp4" width=80%/>
</div>
## OmniLMM-3B
**OmniLMM-3B** (i.e., MiniCPM-V) is an efficient version with promising performance for deployment. The model is built based on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of OmniLMM-3B include:
- ⚡️ **High Efficiency.**
OmniLMM-3B can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. In terms of visual encoding, we compress the image representations into 64 tokens via a perceiver resampler, which is significantly fewer than other LMMs based on MLP architecture (typically > 512 tokens). This allows OmniLMM-3B to operate with **much less memory cost and higher speed during inference**.
- 🔥 **Promising Performance.**
OmniLMM-3B achieves **state-of-the-art performance** on multiple benchmarks (including MMMU, MME, and MMBench) among models with comparable sizes, surpassing existing LMMs built on Phi-2. It even **achieves comparable or better performance than the 9.6B Qwen-VL-Chat**.
- 🙌 **Bilingual Support.**
OmniLMM-3B is **the first end-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from the ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038).
### Evaluation
<div align="center">
<table style="margin: 0px auto;">
<thead>
<tr>
<th align="left">Model</th>
<th>Size</th>
<th nowrap="nowrap" >Visual Tokens</th>
<th>MME</th>
<th nowrap="nowrap" >MMB dev (en)</th>
<th nowrap="nowrap" >MMB dev (zh)</th>
<th nowrap="nowrap" >MMMU val</th>
<th nowrap="nowrap" >CMMMU val</th>
</tr>
</thead>
<tbody align="center">
<tr>
<td align="left">LLaVA-Phi</td>
<td align="right">3B</td>
<td>576</td>
<td>1335</td>
<td>59.8</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left">MobileVLM</td>
<td align="right">3B</td>
<td>144</td>
<td>1289</td>
<td>59.6</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Imp-v1</td>
<td align="right">3B</td>
<td>576</td>
<td>1434</td>
<td>66.5</td>
<td>- </td>
<td>- </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >Qwen-VL-Chat</td>
<td align="right" >9.6B</td>
<td>256</td>
<td>1487</td>
<td>60.6 </td>
<td>56.7 </td>
<td>35.9 </td>
<td>30.7 </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" >CogVLM</td>
<td align="right">17.4B </td>
<td>1225</td>
<td>1438 </td>
<td>63.7 </td>
<td>53.8 </td>
<td>32.1 </td>
<td>- </td>
</tr>
<tr>
<td nowrap="nowrap" align="left" ><b>OmniLMM-3B</b></td>
<td align="right">3B </td>
<td>64</td>
<td>1452 </td>
<td>67.9 </td>
<td>65.3 </td>
<td>37.2 </td>
<td>32.1 </td>
</tr>
</tbody>
</table>
</div>
### Examples
We deploy OmniLMM-3B on end devices. The demo video is an unedited screen recording on a OnePlus 9R.
<table align="center">
<p align="center">
<img src="assets/gif_cases/蛇_cn.gif" width=36%/>
<img src="assets/gif_cases/Mushroom_en.gif" width=36%/>
</p>
</table>
## Demo
Click here to try out the Demo of [MiniCPM-V 2.0](http://120.92.209.146:80/) and [OmniLMM-12B](http://120.92.209.146:8081).
## Install
1. Clone this repository and navigate to the source folder
```bash
git clone https://github.com/OpenBMB/MiniCPM-V.git
cd MiniCPM-V
```
2. Create conda environment
```Shell
conda create -n MiniCPM-V python=3.10 -y
conda activate MiniCPM-V
```
3. Install dependencies: `pip install -r requirements.txt`
### Model Zoo
| Model | Description | Download Link |
|:----------------------|:-------------------|:---------------:|
| MiniCPM-V 2.0 | The latest version for state-of-the-art end-side capabilities with high efficiency. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2.0) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2.0/files) |
| MiniCPM-V | The first version of MiniCPM-V. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V/files) |
| OmniLMM-12B | The most capable version with leading performance. | [🤗](https://huggingface.co/openbmb/OmniLMM-12B) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/OmniLMM-12B/files) |
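
If you prefer to fetch weights ahead of time instead of letting `transformers` download them on first use, the Hugging Face repos linked in the table can be downloaded with `huggingface_hub`. A small sketch (the repo id and local directory below are just examples; pick any model from the table):

```python
from huggingface_hub import snapshot_download

# Download one of the checkpoints listed above to a local folder.
snapshot_download(repo_id="openbmb/OmniLMM-12B", local_dir="checkpoints/OmniLMM-12B")
```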
### Multi-turn Conversation
Please refer to the following code to run `MiniCPM-V` and `OmniLMM`.
<div align="center">
<img src="assets/worldmap_ck.jpg" width="500px">
<img src="assets/hk_OCR.jpg" width="500px">
</div>
```python
import json

from chat import OmniLMMChat, img2base64

chat_model = OmniLMMChat('openbmb/OmniLMM-12B') # or 'openbmb/MiniCPM-V-2'
im_64 = img2base64('./assets/hk_OCR.jpg')
# First round chat
msgs = [{"role": "user", "content": "What is interesting about this image?"}]
msgs = [{"role": "user", "content": "Where should I go to buy a camera?"}]
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
print(answer)
# Second round chat
# pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": answer})
msgs.append({"role": "user", "content": "Where is China in the image"})
msgs.append({"role": "user", "content": "Where is this store in the image?"})
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
print(answer)
```
We can obtain the following results:
```
"The interesting aspect of this image is the shape of the chicken nuggets on the pan. The nuggets are shaped like the continents of the world, which is an unusual and creative way to present the food. It adds a fun and playful element to the meal, making it more visually appealing and engaging."
"In the image, China is located on the right side of the pan. It is one of the nuggets shaped like the continents of the world, and its placement on the right side of the pan is consistent with its geographical location in the real world"
```
"You should go to the Canon store for a camera."
"The Canon store is located on the right side of the image."
```
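
The `img2base64` helper imported above comes from the repo's `chat.py`. As a stand-alone reference, an equivalent version might look like the following, assuming it simply base64-encodes the raw image file, which is the format the chat wrapper expects:

```python
import base64

def img2base64(path: str) -> str:
    # Assumption: the repo helper just base64-encodes the raw file bytes.
    with open(path, 'rb') as f:
        return base64.b64encode(f.read()).decode('utf-8')
```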
### Inference on Mac
<details>
<summary>Click to view an example of running MiniCPM-V 2.0 on 💻 Mac with MPS (Apple silicon or AMD GPUs). </summary>
```python
# test.py
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True, torch_dtype=torch.bfloat16)
model = model.to(device='mps', dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)
model.eval()
image = Image.open('./assets/hk_OCR.jpg').convert('RGB')
question = 'Where is this photo taken?'
msgs = [{'role': 'user', 'content': question}]
answer, context, _ = model.chat(
    image=image,
    msgs=msgs,
    context=None,
    tokenizer=tokenizer,
    sampling=True
)
print(answer)
```

Run with the command:

`PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py`

</details>
### Deployment on Mobile Phone
Currently MiniCPM-V 2.0 can be deployed on mobile phones with Android and Harmony operating systems. 🚀 Try it out [here](https://github.com/OpenBMB/mlc-MiniCPM).
## TODO
- [ ] Local Web-UI deployment
- [ ] Code release for real-time interactive assistant
## Model License <!-- omit in toc -->
The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) license.
The usage of MiniCPM-V's and OmniLMM's parameters is subject to "[General Model License Agreement - Source Notes - Publicity Restrictions - Commercial License](https://github.com/OpenBMB/General-Model-License/blob/main/通用模型许可协议-来源说明-宣传限制-商业授权.md)"
The parameters are fully open to academic research.
Please contact cpm@modelbest.cn to obtain written authorization for commercial uses. Free commercial use is also allowed after registration.
## Statement <!-- omit in toc -->
As LMMs, OmniLMMs generate content by learning from a large amount of multimodal corpora, but they cannot comprehend, express personal opinions, or make value judgements. Anything generated by OmniLMMs does not represent the views and positions of the model developers.

We will not be liable for any problems arising from the use of the OmniLMM open-source models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, or dissemination of the models.
## Institutions <!-- omit in toc -->
This project is developed by the following institutions: