## MiniCPM-o 2.6

**MiniCPM-o 2.6** is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.6, and introduces new features for real-time speech conversation and multimodal live streaming. Notable features of MiniCPM-o 2.6 include:

- 🔥 **Leading Visual Capability.**
  MiniCPM-o 2.6 achieves an average score of 70.2 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-202405, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single-image understanding. It also **outperforms GPT-4V and Claude 3.5 Sonnet** in multi-image and video understanding, and shows promising in-context learning capability.

- 🎙 **State-of-the-art Speech Capability.** MiniCPM-o 2.6 supports **bilingual real-time speech conversation with configurable voices** in English and Chinese. It **outperforms GPT-4o-realtime on audio understanding tasks** such as ASR and speech-to-text translation, and shows **state-of-the-art performance on speech conversation in both semantic and acoustic evaluations in the open-source community**. It also allows for fun features such as emotion/speed/style control, end-to-end voice cloning, and role play.

- 🎬 **Strong Multimodal Live Streaming Capability.** As a new feature, MiniCPM-o 2.6 can **accept continuous video and audio streams independent of user queries, and support real-time speech interaction**. It **outperforms GPT-4o-202408 and Claude 3.5 Sonnet and shows state-of-the-art performance in the open-source community on StreamingBench**, a comprehensive benchmark for real-time video understanding, omni-source (video & audio) understanding, and multimodal contextual understanding.

- 💪 **Strong OCR Capability and Others.**
  Advancing popular visual capabilities from the MiniCPM-V series, MiniCPM-o 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench for models under 25B, surpassing proprietary models such as GPT-4o-202405**.

  Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o and Claude 3.5 Sonnet on MMHal-Bench, and supports **multilingual capabilities** in more than 30 languages.

- 🚀 **Superior Efficiency.**
  In addition to its friendly size, MiniCPM-o 2.6 also shows **state-of-the-art token density** (i.e., the number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models.** This directly improves inference speed and reduces first-token latency, memory usage, and power consumption. As a result, MiniCPM-o 2.6 can efficiently support **multimodal live streaming** on end-side devices such as iPads.

- 💫 **Easy Usage.**
  MiniCPM-o 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) quick [local WebUI demo](#chat-with-our-demo-on-gradio), and (6) online web demo on [server](https://minicpm-omni-webdemo-us.modelbest.cn/). A minimal Transformers loading-and-chat sketch follows this list.
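
For instance, a minimal single-image chat via Transformers might look like the sketch below. This is adapted from the Hugging Face model card, not a pinned API: the `init_*` flags and the `chat` signature follow the card and may change across revisions, and a CUDA device with enough memory for the bf16 weights is assumed.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Sketch based on the openbmb/MiniCPM-o-2_6 model card; the init_* flags
# control which modality modules (vision/audio/TTS) are loaded.
model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-o-2_6', trust_remote_code=True,
    attn_implementation='sdpa', torch_dtype=torch.bfloat16,
    init_vision=True, init_audio=True, init_tts=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True)

image = Image.open('example.jpg').convert('RGB')  # any local test image
msgs = [{'role': 'user', 'content': [image, 'What is in this image?']}]
print(model.chat(msgs=msgs, tokenizer=tokenizer))
```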

**Model Architecture.**

- **End-to-end Omni-modal Architecture.** Different modality encoders/decoders are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge, with the whole model trained using only the CE loss.

- **Omni-modal Live Streaming Mechanism.** (1) We change the offline modality encoders/decoders into online ones for **streaming inputs/outputs**. (2) We devise a **time-division multiplexing (TDM) mechanism** for omni-modality streaming processing in the LLM backbone. It divides parallel omni-modality streams into sequential information within small periodic time slices (a toy illustration follows this list).

- **Configurable Speech Modeling Design.** We devise a multimodal system prompt, consisting of the traditional text system prompt and **a new audio system prompt that determines the assistant's voice**. This enables flexible voice configuration at inference time, and also facilitates end-to-end voice cloning and description-based voice creation (a hedged usage sketch follows this list).
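
The TDM idea can be pictured with a toy sketch (illustrative only, not the actual implementation): per-slice visual and audio tokens are serialized into a single sequence that the LLM backbone consumes.

```python
from typing import List

def tdm_interleave(video_slices: List[List[int]],
                   audio_slices: List[List[int]]) -> List[int]:
    """Toy illustration of time-division multiplexing: flatten parallel
    per-slice token streams into one sequence [v_0, a_0, v_1, a_1, ...],
    where slice i covers one small periodic time window of the stream."""
    sequence: List[int] = []
    for v, a in zip(video_slices, audio_slices):
        sequence.extend(v)  # visual tokens for time slice i
        sequence.extend(a)  # audio tokens for time slice i
    return sequence
```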
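
Building on the loading sketch under Easy Usage (which defines `model` and `tokenizer`), here is a hedged illustration of the audio system prompt. The helper names (`get_sys_prompt`, the `use_tts_template`/`generate_audio` flags) and the `assets/*.wav` paths follow the Hugging Face model card and are assumptions here, not a fixed interface.

```python
import librosa

# Assumed API from the model card: the reference audio supplied via the
# audio system prompt configures the assistant's voice for later turns.
ref_audio, _ = librosa.load('assets/demo.wav', sr=16000, mono=True)  # voice to adopt
sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_assistant', language='en')

question_audio, _ = librosa.load('assets/question.wav', sr=16000, mono=True)
msgs = [sys_msg, {'role': 'user', 'content': [question_audio]}]
answer = model.chat(msgs=msgs, tokenizer=tokenizer, sampling=True, max_new_tokens=128,
                    use_tts_template=True, generate_audio=True,
                    output_audio_path='answer.wav')  # spoken reply in the configured voice
```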

<div align="center">
<img src="./assets/minicpm-o-26-framework-v2.png" width="80%">
</div>

### Evaluation <!-- omit in toc -->

<div align="center">
<img src="./assets/radar.jpg" width="80%">
</div>

<details>
<summary>Click to view visual understanding results.</summary>

**Image Understanding**

| Model | Size | Token Density<sup>+</sup> | OpenCompass | OCRBench | MathVista mini | ChartQA | MMVet | MMStar | MME | MMB1.1 test | AI2D | MMMU val | HallusionBench | TextVQA val | DocVQA test | MathVerse mini | MathVision | MMHal Score |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | | | | | | | | | | | | | | |
| GPT-4o-20240513 | - | 1088 | <u>69.9</u> | 736 | 61.3 | 85.7 | <strong>69.1</strong> | 63.9 | 2328.7 | 82.2 | 84.6 | <strong>69.2</strong> | <strong>55.0</strong> | - | 92.8 | <strong>50.2</strong> | <strong>30.4</strong> | <u>3.6</u> |
| Claude3.5-Sonnet | - | 750 | 67.9 | 788 | 61.6 | <strong>90.8</strong> | 66.0 | 62.2 | 1920.0 | 78.5 | 80.2 | <u>65.9</u> | 49.9 | - | <strong>95.2</strong> | - | - | 3.4 |
| Gemini 1.5 Pro | - | - | 64.4 | 754 | 57.7 | 81.3 | 64.0 | 59.1 | 2110.6 | 73.9 | 79.1 | 60.6 | 45.6 | 73.5 | 86.5 | - | 19.2 | - |
| GPT-4o-mini-20240718 | - | 1088 | 64.1 | 785 | 52.4 | - | 66.9 | 54.8 | 2003.4 | 76.0 | 77.8 | 60.0 | 46.1 | - | - | - | - | 3.3 |
| **Open Source** | | | | | | | | | | | | | | | | | | |
| Cambrian-34B | 34B | <u>1820</u> | 58.3 | 591 | 50.3 | 75.6 | 53.2 | 54.2 | 2049.9 | 77.8 | 79.5 | 50.4 | 41.6 | 76.7 | 75.5 | - | - | - |
| GLM-4V-9B | 13B | 784 | 59.1 | 776 | 51.1 | - | 58.0 | 54.8 | 2018.8 | 67.9 | 71.2 | 46.9 | 45.0 | - | - | - | - | - |
| Pixtral-12B | 12B | 256 | 61.0 | 685 | 56.9 | 81.8 | 58.5 | 54.5 | - | 72.7 | 79.0 | 51.1 | 47.0 | 75.7 | 90.7 | - | - | - |
| VITA-1.5 | 8B | 784 | 63.3 | 741 | 66.2 | - | 52.7 | 60.2 | 2328.1 | 76.8 | 79.2 | 52.6 | 44.6 | - | - | - | - | - |
| DeepSeek-VL2-27B (4B) | 27B | 672 | 66.4 | 809 | 63.9 | 86.0 | 60.0 | 61.9 | 2253.0 | 81.2 | 83.8 | 54.0 | 45.3 | <u>84.2</u> | 93.3 | - | - | 3.0 |
| Qwen2-VL-7B | 8B | 784 | 67.1 | <u>866</u> | 58.2 | 83.0 | 62.0 | 60.7 | 2326.0 | 81.8 | 83.0 | 54.1 | 50.6 | <strong>84.3</strong> | <u>94.5</u> | 31.9 | 16.3 | 3.2 |
| LLaVA-OneVision-72B | 72B | 182 | 68.1 | 741 | 67.5 | 83.7 | 60.6 | <strong>65.8</strong> | 2261.0 | <strong>85.0</strong> | <u>85.6</u> | 56.8 | 49.0 | 80.5 | 91.3 | 39.1 | - | 3.5 |
| InternVL2.5-8B | 8B | 706 | 68.3 | 822 | <u>64.4</u> | 84.8 | 62.8 | 62.8 | 2344.0 | <u>83.6</u> | 84.5 | 56.0 | 50.1 | 79.1 | 93.0 | 39.5 | 19.7 | 3.4 |
| MiniCPM-V 2.6 | 8B | <strong>2822</strong> | 65.2 | 852* | 60.6 | 79.4 | 60.0 | 57.5 | <u>2348.4*</u> | 78.0 | 82.1 | 49.8* | 48.1* | 80.1 | 90.8 | 25.7 | 18.3 | 3.6 |
| MiniCPM-o 2.6 | 8B | <strong>2822</strong> | <strong>70.2</strong> | <strong>897*</strong> | <strong>71.9*</strong> | <u>86.9*</u> | <u>67.5</u> | <u>64.0</u> | <strong>2372.0*</strong> | 80.5 | <strong>85.8</strong> | 50.4* | <u>51.9</u> | 82.0 | 93.5 | <u>41.4*</u> | <u>23.1*</u> | <strong>3.8</strong> |

* We evaluate this benchmark using chain-of-thought prompting. Specifically, for MME, we used this technique only for the Cognition set.

<sup>+</sup> Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens.

Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation.
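
As a worked example of this definition: MiniCPM-o 2.6 encodes a maximum-resolution 1344x1344 image (1,806,336 pixels) into 640 visual tokens, giving a token density of 1806336 / 640 ≈ 2822, the value reported in the table above.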

**Multi-image and Video Understanding**

| Model | Size | BLINK val | Mantis Eval | MIRB | Video-MME (wo / w subs) |
|:---|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | |
| GPT-4o-20240513 | - | <strong>68.0</strong> | - | - | <strong>71.9/77.2</strong> |
| GPT4V | - | 54.6 | 62.7 | 53.1 | 59.9/63.3 |
| **Open-source** | | | | | |
| VITA-1.5 | 8B | 45.0 | - | - | 56.1/58.7 |
| LLaVA-NeXT-Interleave 14B | 14B | 52.6 | 66.4 | 30.2 | - |
| LLaVA-OneVision-72B | 72B | 55.4 | <strong>77.6</strong> | - | <u>66.2/69.5</u> |
| MANTIS 8B | 8B | 49.1 | 59.5 | 34.8 | - |
| Qwen2-VL-7B | 8B | 53.2 | 69.6* | <strong>67.6*</strong> | 63.3/69.0 |
| InternVL2.5-8B | 8B | 54.8 | 67.7 | 52.5 | 64.2/66.9 |
| MiniCPM-V 2.6 | 8B | 53.0 | 69.1 | 53.8 | 60.9/63.6 |
| MiniCPM-o 2.6 | 8B | <u>56.7</u> | <u>71.9</u> | <u>58.6</u> | 63.9/67.9 |

* We evaluate officially released checkpoints by ourselves.

</details>

<details>
<summary>Click to view audio understanding and speech conversation results.</summary>

**Audio Understanding**

Metrics: CER↓ for ASR (zh) on AISHELL-1, Fleurs zh, and WenetSpeech test-net; WER↓ for ASR (en) on LibriSpeech test-clean, GigaSpeech, and TED-LIUM; BLEU↑ for AST on CoVoST en2zh and CoVoST zh2en; ACC↑ for emotion recognition on MELD.

| Model | Size | AISHELL-1 | Fleurs zh | WenetSpeech test-net | LibriSpeech test-clean | GigaSpeech | TED-LIUM | CoVoST en2zh | CoVoST zh2en | MELD emotion |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | | | | | | |
| GPT-4o-Realtime | - | 7.3* | <u>5.4*</u> | 28.9* | 2.6* | 12.9* | 4.8* | 37.1* | 15.7* | 33.2* |
| Gemini 1.5 Pro | - | 4.5* | 5.9* | 14.3* | 2.9* | 10.6* | <strong>3.0*</strong> | <u>47.3*</u> | 22.6* | 48.4* |
| **Open-Source** | | | | | | | | | | |
| Qwen2-Audio-7B | 8B | - | 7.5 | - | <strong>1.6</strong> | - | - | 45.2 | <u>24.4</u> | <strong>55.3</strong> |
| Qwen2-Audio-7B-Instruct | 8B | 2.6* | 6.9* | <u>10.3*</u> | 3.1* | <u>9.7</u>* | 5.9* | 39.5* | 22.9* | 17.4* |
| VITA-1.5 | 8B | 2.16 | - | 8.4 | 3.4 | - | - | - | - | - |
| GLM-4-Voice-Base | 9B | <u>2.5</u> | - | - | 2.8 | - | - | - | - | - |
| MiniCPM-o 2.6 | 8B | <strong>1.6</strong> | <strong>4.4</strong> | <strong>6.9</strong> | <u>1.7</u> | <strong>8.7</strong> | <strong>3.0</strong> | <strong>48.2</strong> | <strong>27.2</strong> | <u>52.4</u> |

* We evaluate officially released checkpoints by ourselves.

**Speech Generation**

All tasks are SpeechQA. Metrics: ACC↑ for Speech Llama Q., Speech Web Q., and Speech Trivia QA; G-Eval (10 point)↑ for Speech AlpacaEval; the Semantic, Acoustic, and Overall ELO scores↑, UTMOS↑, and ASR-WER↓ are from AudioArena.

| Model | Size | Speech Llama Q. | Speech Web Q. | Speech Trivia QA | Speech AlpacaEval | Semantic ELO↑ | Acoustic ELO↑ | Overall ELO↑ | UTMOS↑ | ASR-WER↓ |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | | | | | | |
| GPT-4o-Realtime | - | <strong>71.7</strong> | <strong>51.6</strong> | <strong>69.7</strong> | <strong>7.4</strong> | <strong>1157</strong> | <strong>1203</strong> | <strong>1200</strong> | <strong>4.2</strong> | <strong>2.3</strong> |
| **Open-Source** | | | | | | | | | | |
| GLM-4-Voice | 9B | 50.0 | 32.0 | 36.4 | <u>5.1</u> | 999 | 1147 | 1035 | <u>4.1</u> | <u>11.7</u> |
| Llama-Omni | 8B | 45.3 | 22.9 | 10.7 | 3.9 | 960 | 878 | 897 | 3.2 | 24.3 |
| VITA-1.5 | 8B | 46.7 | 28.1 | 23.3 | 2.0 | - | - | - | - | - |
| Moshi | 7B | 43.7 | 23.8 | 16.7 | 2.4 | 871 | 808 | 875 | 2.8 | 8.2 |
| Mini-Omni | 1B | 22.0 | 12.8 | 6.9 | 2.5 | 926 | 803 | 865 | 3.4 | 10.0 |
| MiniCPM-o 2.6 | 8B | <u>61.0</u> | <u>40.0</u> | <u>40.2</u> | <u>5.1</u> | <u>1088</u> | <u>1163</u> | <u>1131</u> | <strong>4.2</strong> | 9.8 |

All results are from <a href="https://github.com/OpenBMB/UltraEval-Audio" target="_blank">AudioEvals</a>; the evaluation methods and further details can be found there.

**End-to-end Voice Cloning**

Metric: SIMO↑ on Seed-TTS test-zh and Seed-TTS test-en.

| Model | Seed-TTS test-zh | Seed-TTS test-en |
|:---|:---:|:---:|
| F5-TTS | <strong>76</strong> | <strong>67</strong> |
| CosyVoice | <u>75</u> | <u>64</u> |
| FireRedTTS | 63 | 46 |
| MiniCPM-o 2.6 | 57 | 47 |

</details>

<details>
<summary>Click to view multimodal live streaming results.</summary>

**Multimodal Live Streaming**: results on StreamingBench

| Model | Size | Real-Time Video Understanding | Omni-Source Understanding | Contextual Understanding | Overall |
|:---|:---:|:---:|:---:|:---:|:---:|
| **Proprietary** | | | | | |
| Gemini 1.5 Pro | - | <u>77.4</u> | <strong>67.8</strong> | <strong>51.1</strong> | <strong>70.3</strong> |
| GPT-4o-202408 | - | 74.5 | 51.0 | <u>48.0</u> | 64.1 |
| Claude-3.5-Sonnet | - | 74.0 | 41.4 | 37.8 | 59.7 |
| **Open-source** | | | | | |
| VILA-1.5 | 8B | 61.5 | 37.5 | 26.7 | 49.5 |
| LongVA | 7B | 63.1 | 35.9 | 30.2 | 50.7 |
| LLaVA-Next-Video-34B | 34B | 69.8 | 41.7 | 34.3 | 56.7 |
| Qwen2-VL-7B | 8B | 71.2 | 40.7 | 33.1 | 57.0 |
| InternVL2-8B | 8B | 70.1 | 42.7 | 34.1 | 57.0 |
| VITA-1.5 | 8B | 70.9 | 40.8 | 35.8 | 57.4 |
| LLaVA-OneVision-7B | 8B | 74.3 | 40.8 | 31.0 | 58.4 |
| InternLM-XC2.5-OL-7B | 8B | 75.4 | 46.2 | 33.6 | 60.8 |
| MiniCPM-V 2.6 | 8B | 72.4 | 40.2 | 33.4 | 57.7 |
| MiniCPM-o 2.6 | 8B | <strong>79.9</strong> | <u>53.4</u> | 38.5 | <u>66.0</u> |

</details>

### Examples <!-- omit in toc -->

We deploy MiniCPM-o 2.6 on end devices. The demo video is an unedited, real-speed recording of MiniCPM-o 2.6 running on an iPad Pro and in a web demo.

<div align="center">
<a href="https://www.youtube.com/watch?v=vRIMbxJzStY&t=2s"><img src="./assets/minicpmo2_6/2dot6_o_demo_video_img.png" width="70%"></a>
</div>

<br>

<div style="display: flex; flex-direction: column; align-items: center;">
  <img src="assets/minicpmo2_6/minicpmo2_6_math_intersect.png" alt="math" style="margin-bottom: 5px;">
  <img src="assets/minicpmo2_6/minicpmo2_6_diagram_train_NN.png" alt="diagram" style="margin-bottom: 5px;">
  <img src="assets/minicpmo2_6/minicpmo2_6_multi-image_bike.png" alt="bike" style="margin-bottom: 5px;">
</div>