We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and speech streams from the microphone, and emits speech output. While still preliminary, we find the model can **replicate some of the fun cases shown in the Gemini demo video, without any video editing**.

### Evaluation

<table>
  <thead>
    <tr>
      <th align="left">Model</th>
      <th>Size</th>
      <th>MME</th>
      <th nowrap="nowrap">MMB dev (en)</th>
      <th nowrap="nowrap">MMMU val</th>
      <th nowrap="nowrap">MMHal-Bench</th>
      <th nowrap="nowrap">Object HalBench</th>
      <th nowrap="nowrap">SeedBench-I</th>
      <th>MathVista</th>
      <th nowrap="nowrap">LLaVA Bench W</th>
    </tr>
  </thead>
  <tbody align="center">
    <tr><td align="left">GPT-4V†</td><td>-</td><td>1409</td><td>75.1</td><td>56.8</td><td>3.53 / 70.8</td><td>86.4 / 92.7</td><td>71.6</td><td>47.8</td><td>93.1</td></tr>
    <tr><td nowrap="nowrap" align="left">Qwen-VL-Plus†</td><td>-</td><td>1681</td><td>66.2</td><td>45.2</td><td>-</td><td>-</td><td>65.7</td><td>36.0</td><td>73.7</td></tr>
    <tr><td align="left">Yi-VL 6B</td><td align="right">6.7B</td><td>-</td><td>68.2</td><td>39.1</td><td>-</td><td>-</td><td>66.1</td><td>28.0</td><td>39.9</td></tr>
    <tr><td nowrap="nowrap" align="left">Qwen-VL-Chat</td><td align="right">9.6B</td><td>1488</td><td>60.6</td><td>35.9</td><td>2.93 / 59.4</td><td>56.2 / 80.0</td><td>64.8</td><td>33.8</td><td>67.7</td></tr>
    <tr><td align="left">CogVLM</td><td align="right">17.4B</td><td>1438</td><td>63.7</td><td>32.1</td><td>2.68 / 52.1</td><td>73.6 / 87.4</td><td>68.8</td><td>34.7</td><td>73.9</td></tr>
    <tr><td align="left">LLaVA 1.5</td><td align="right">13.6B</td><td>1531</td><td>68.2</td><td>36.4</td><td>2.71 / 51.0</td><td>53.7 / 77.4</td><td>68.1</td><td>26.4</td><td>64.6</td></tr>
    <tr><td nowrap="nowrap" align="left"><b>OmniLMM-12B</b></td><td align="right">11.6B</td><td>1637</td><td>71.6</td><td>40.7</td><td>3.45 / 68.8</td><td>90.3 / 95.5</td><td>71.1</td><td>34.9</td><td>72.0</td></tr>
  </tbody>
</table>

<small>†: Proprietary models</small>

### Examples

<table align="center">
  <p align="center">
    <img src="assets/omnilmm-12b-examples_2.png" />
  </p>
</table>

We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. Video frames are described in text using OmniLMM-12B, and GPT-3.5 (text-only) generates responses from those descriptions and the user's prompts. The demo video is a raw recording without editing; a minimal sketch of this loop follows the video below.

<div align="center">
  <video controls src="https://github.com/OpenBMB/OmniLMM/assets/157115220/c1fd3562-1ab1-4534-8139-79e9137b5398" type="video/mp4" width="80%" />
</div>

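For illustration, here is a minimal Python sketch of that loop under stated assumptions: `describe_frame` is a hypothetical placeholder for OmniLMM-12B inference (not this repository's API), the legacy `openai<1.0` ChatCompletion client is assumed, and the speech input/output of the real demo is omitted. It only shows how a frame-captioning LMM and a text-only chat model can be chained.

```python
import os
import time

import cv2      # pip install opencv-python
import openai   # assumes the legacy client: pip install "openai<1.0"

openai.api_key = os.environ["OPENAI_API_KEY"]


def describe_frame(frame) -> str:
    """Hypothetical placeholder: run OmniLMM-12B to caption one frame."""
    raise NotImplementedError  # replace with real OmniLMM-12B inference


history = [{"role": "system",
            "content": "You are a live assistant. Each user message is a "
                       "text description of what the camera currently sees."}]

cap = cv2.VideoCapture(0)  # default camera stream
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # 1) OmniLMM-12B turns the current frame into a text description.
    history.append({"role": "user", "content": describe_frame(frame)})
    # 2) GPT-3.5 (text-only) responds from the descriptions and prompts.
    reply = openai.ChatCompletion.create(model="gpt-3.5-turbo",
                                         messages=history)
    answer = reply["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print(answer)   # the real demo speaks this via TTS instead
    time.sleep(1)   # throttle to roughly one frame per second
```
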
## OmniLMM-3B

**OmniLMM-3B** (i.e., MiniCPM-V) is an efficient version with promising performance for deployment. The model is built on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of OmniLMM-3B include:

- ⚡️ **High Efficiency.**

  OmniLMM-3B can be **efficiently deployed on most GPUs and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into 64 tokens via a perceiver resampler, significantly fewer than in other LMMs based on MLP projectors (typically > 512 tokens); see the sketch after this list. This allows OmniLMM-3B to run with **much lower memory cost and higher inference speed**.

- 🔥 **Promising Performance.**

  OmniLMM-3B achieves **state-of-the-art performance** among models of comparable size on multiple benchmarks (including MMMU, MME, and MMBench), surpassing existing LMMs built on Phi-2. It even **matches or outperforms the 9.6B Qwen-VL-Chat**.

- 🙌 **Bilingual Support.**

  OmniLMM-3B is **the first edge-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from our ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038).

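To make the token-compression idea concrete, here is a minimal PyTorch sketch of a perceiver resampler. This illustrates the mechanism, not the repository's implementation; the hidden size (2304) and head count (8) are assumptions. A fixed set of 64 learned queries cross-attends to the variable-length patch features, so the language model always receives exactly 64 visual tokens regardless of input resolution.

```python
import torch
import torch.nn as nn


class PerceiverResampler(nn.Module):
    """Sketch: compress a variable-length sequence of image features
    into a fixed set of 64 visual tokens via cross-attention."""

    def __init__(self, dim: int = 2304, num_queries: int = 64,
                 num_heads: int = 8):
        super().__init__()
        # 64 learned query vectors -- the only tokens the LLM will see.
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln_q = nn.LayerNorm(dim)
        self.ln_kv = nn.LayerNorm(dim)

    def forward(self, image_feats: torch.Tensor) -> torch.Tensor:
        # image_feats: (batch, n_patches, dim), n_patches often > 512
        batch = image_feats.size(0)
        q = self.ln_q(self.queries).expand(batch, -1, -1)  # (batch, 64, dim)
        kv = self.ln_kv(image_feats)
        out, _ = self.attn(q, kv, kv)  # queries attend to patch features
        return out                     # (batch, 64, dim): exactly 64 tokens


# Example: 1024 patch tokens from the vision encoder become 64 LLM tokens.
feats = torch.randn(2, 1024, 2304)
print(PerceiverResampler()(feats).shape)  # torch.Size([2, 64, 2304])
```

Because the output length is constant, the vision part of the context (and its attention cost) stays fixed no matter how many patches the encoder produces.
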
### Evaluation
<div align="center">
<table style="margin: 0px auto;">
  <thead>
    <tr>
      <th align="left">Model</th>
      <th>Size</th>
      <th>MME</th>
      <th nowrap="nowrap">MMB dev (en)</th>
      <th nowrap="nowrap">MMB dev (zh)</th>
      <th nowrap="nowrap">MMMU val</th>
      <th nowrap="nowrap">CMMMU val</th>
    </tr>
  </thead>
  <tbody align="center">
    <tr><td align="left">LLaVA-Phi</td><td align="right">3B</td><td>1335</td><td>59.8</td><td>-</td><td>-</td><td>-</td></tr>
    <tr><td nowrap="nowrap" align="left">MobileVLM</td><td align="right">3B</td><td>1289</td><td>59.6</td><td>-</td><td>-</td><td>-</td></tr>
    <tr><td nowrap="nowrap" align="left">Imp-v1</td><td align="right">3B</td><td>1434</td><td>66.5</td><td>-</td><td>-</td><td>-</td></tr>
    <tr><td align="left">Qwen-VL-Chat</td><td align="right">9.6B</td><td>1487</td><td>60.6</td><td>56.7</td><td>35.9</td><td>30.7</td></tr>
    <tr><td nowrap="nowrap" align="left">CogVLM</td><td align="right">17.4B</td><td>1438</td><td>63.7</td><td>53.8</td><td>32.1</td><td>-</td></tr>
    <tr><td nowrap="nowrap" align="left"><b>OmniLMM-3B</b></td><td align="right">3B</td><td>1452</td><td>67.3</td><td>61.9</td><td>34.7</td><td>32.1</td></tr>
  </tbody>
</table>

</div>

### Examples