diff --git a/README.md b/README.md index 0f5a21c..ed79c0b 100644 --- a/README.md +++ b/README.md @@ -2,14 +2,14 @@ -**A GPT-4o Level MLLM for Single Image, Multi Image and High-FPS Video Understanding on Your Phone** +**A Gemini 2.5 Flash Level MLLM for Vision, Speech, and Full-Duplex Multimodal Live Streaming on Your Phone** [中文](./README_zh.md) | English - +

- MiniCPM-V 4.5 🤗 🤖 | MiniCPM-o 2.6 🤗 🤖 | 🍳 Cookbook | - 📄 Technical Report + MiniCPM-o 4.5 🤗 🤖 | MiniCPM-V 4.0 🤗 | 🍳 Cookbook | + 📄 MiniCPM-V 4.5 Technical Report

-**MiniCPM-V** is a series of efficient end-side multimodal LLMs (MLLMs), which accept images, videos and text as inputs and deliver high-quality text outputs. **MiniCPM-o** additionally takes audio as inputs and provides high-quality speech outputs in an end-to-end fashion. Since February 2024, we have released 7 versions of the model, aiming to achieve **strong performance and efficient deployment**. The most notable models in the series currently include: +**MiniCPM-o** is the latest series of on-device multimodal LLMs (MLLMs) upgraded from MiniCPM-V. The models can now take image, video, text, and audio as inputs and provide high-quality text and speech outputs in an end-to-end fashion. The model series is designed for **strong performance and efficient deployment**. The most notable models in the series currently include: -- **MiniCPM-V 4.5**: 🔥🔥🔥 The latest and most capable model in the MiniCPM-V series. With a total of 8B parameters, this model **outperforms GPT-4o-latest, Gemini-2.0 Pro, and Qwen2.5-VL 72B** in vision-language capabilities, making it the most performant on-device multimodal model in the open-source community. This version brings **new features including efficient high-FPS and long video understanding (up to 96x compression rate for video tokens), controllable hybrid fast/deep thinking, strong handwritten OCR and complex table/document parsing**. It also advances MiniCPM-V's popular features such as trustworthy behavior, multilingual support and end-side deployability. +- **MiniCPM-o 4.5**: 🔥🔥🔥 The latest and most capable model in the series. With a total of 9B parameters, this end-to-end model **approaches Gemini 2.5 Flash in vision, speech, and full-duplex multimodal live streaming**, making it one of the most versatile and performant models in the open-source community. 
The new full-duplex multimodal live streaming capability means that the output streams (speech and text) and the real-time input streams (video and audio) do not block each other. This **enables MiniCPM-o 4.5 to see, listen, and speak simultaneously** in a real-time omnimodal conversation, and to perform **proactive interactions** such as proactive reminding. The improved voice mode supports bilingual real-time speech conversation in a more natural, expressive, and stable way, and also allows for voice cloning. It also advances MiniCPM-V's visual capabilities such as strong OCR capability, trustworthy behavior and multilingual support, etc. We also roll out a **high-performing llama.cpp-omni inference framework together with a WebRTC Demo**, bringing this full-duplex multimodal live streaming experience to local devices such as PCs. -- **MiniCPM-o 2.6**: ⭐️⭐️⭐️ The most capable model in the MiniCPM-o series. With a total of 8B parameters, this end-to-end model **achieves comparable performance to GPT-4o-202405 in vision, speech, and multimodal live streaming**, making it one of the most versatile and performant models in the open-source community. For the new voice mode, MiniCPM-o 2.6 **supports bilingual real-time speech conversation with configurable voices**, and also allows for fun capabilities such as emotion/speed/style control, end-to-end voice cloning, role play, etc. Due to its superior token density, MiniCPM-o 2.6 can for the first time **support multimodal live streaming on end-side devices** such as iPad. +- **MiniCPM-V 4.0**: ⭐️⭐️⭐️ An efficient model in the MiniCPM-V series. With a total of 4B parameters, the model surpasses GPT-4.1-mini-20250414 in image understanding on the OpenCompass evaluation. With its small parameter size and efficient architecture, MiniCPM-V 4.0 is an ideal choice for on-device deployment on the phone. @@ -42,33 +42,22 @@ #### 📌 Pinned -* [2025.09.18] 📢📢📢 MiniCPM-V 4.5 technical report is now released! 
See [here](./docs/MiniCPM_V_4_5_Technical_Report.pdf). -* [2025.09.01] ⭐️⭐️⭐️ MiniCPM-V 4.5 has been officially supported by [llama.cpp](https://github.com/ggml-org/llama.cpp/pull/15575), [vLLM](https://github.com/vllm-project/vllm/pull/23586), and [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/pull/9022). You are welcome to use it directly through these official channels! Support for additional frameworks such as [Ollama](https://github.com/ollama/ollama/pull/12078) and [SGLang](https://github.com/sgl-project/sglang/pull/9610) is actively in progress. +* [2026.02.03] 🔥🔥🔥 We open-source MiniCPM-o 4.5, which matches Gemini 2.5 Flash on vision and speech, and supports full-duplex multimodal live streaming. Try it now! + + +* [2025.09.18] 📢📢📢 MiniCPM-V 4.5 technical report is now released! See [here](./docs/MiniCPM_V_4_5_Technical_Report.pdf). * [2025.08.26] 🔥🔥🔥 We open-source MiniCPM-V 4.5, which outperforms GPT-4o-latest, Gemini-2.0 Pro, and Qwen2.5-VL 72B. It advances popular capabilities of MiniCPM-V, and brings useful new features. Try it now! * [2025.08.01] ⭐️⭐️⭐️ We open-sourced the [MiniCPM-V & o Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook)! It provides comprehensive guides for diverse user scenarios, paired with our new [Docs Site](https://minicpm-o.readthedocs.io/en/latest/index.html) for smoother onboarding. -* [2025.06.20] ⭐️⭐️⭐️ Our official [Ollama repository](https://ollama.com/openbmb) is released. Try our latest models with [one click](https://ollama.com/openbmb/minicpm-o2.6)! - * [2025.03.01] 🚀🚀🚀 RLAIF-V, the alignment technique of MiniCPM-o, is accepted by CVPR 2025 Highlights! The [code](https://github.com/RLHF-V/RLAIF-V), [dataset](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset), [paper](https://arxiv.org/abs/2405.17220) are open-sourced! * [2025.01.24] 📢📢📢 MiniCPM-o 2.6 technical report is released! 
See [here](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9). -* [2025.01.19] 📢 **ATTENTION!** We are currently working on merging MiniCPM-o 2.6 into the official repositories of llama.cpp, Ollama, and vllm. Until the merge is complete, please USE OUR LOCAL FORKS of [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md), [Ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md), and [vllm](https://github.com/OpenBMB/MiniCPM-o?tab=readme-ov-file#efficient-inference-with-llamacpp-ollama-vllm). **Using the official repositories before the merge may lead to unexpected issues**. - * [2025.01.19] ⭐️⭐️⭐️ MiniCPM-o tops GitHub Trending and reaches top-2 on Hugging Face Trending! -* [2025.01.17] We have updated the usage of MiniCPM-o 2.6 int4 quantization version and resolved the model initialization error. Click [here](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4) and try it now! - -* [2025.01.13] 🔥🔥🔥 We open-source MiniCPM-o 2.6, which matches GPT-4o-202405 on vision, speech and multimodal live streaming. It advances popular capabilities of MiniCPM-V 2.6, and supports various new fun features. Try it now! - -* [2024.08.17] 🚀🚀🚀 MiniCPM-V 2.6 is now fully supported by [official](https://github.com/ggerganov/llama.cpp) llama.cpp! GGUF models of various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf). - -* [2024.08.06] 🔥🔥🔥 We open-source MiniCPM-V 2.6, which outperforms GPT-4V on single image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and can support real-time video understanding on iPad. Try it now! - -* [2024.08.03] MiniCPM-Llama3-V 2.5 technical report is released! See [here](https://arxiv.org/abs/2408.01800). * [2024.05.23] 🔥🔥🔥 MiniCPM-V tops GitHub Trending and Hugging Face Trending! 
Our demo, recommended by Hugging Face Gradio’s official account, is available [here](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5). Come and try it out! @@ -77,14 +66,19 @@
Click to view more news. +* [2025.09.01] ⭐️⭐️⭐️ MiniCPM-V 4.5 has been officially supported by [llama.cpp](https://github.com/ggml-org/llama.cpp/pull/15575), [vLLM](https://github.com/vllm-project/vllm/pull/23586), and [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/pull/9022). You are welcome to use it directly through these official channels! Support for additional frameworks such as [Ollama](https://github.com/ollama/ollama/pull/12078) and [SGLang](https://github.com/sgl-project/sglang/pull/9610) is actively in progress. * [2025.08.02] 🚀🚀🚀 We open-source MiniCPM-V 4.0, which outperforms GPT-4.1-mini-20250414 in image understanding. It advances popular features of MiniCPM-V 2.6, and largely improves the efficiency. We also open-source the iOS App on iPhone and iPad. Try it now! - +* [2025.06.20] ⭐️⭐️⭐️ Our official [Ollama repository](https://ollama.com/openbmb) is released. Try our latest models with [one click](https://ollama.com/openbmb/minicpm-o2.6)! * [2025.01.23] 💡💡💡 MiniCPM-o 2.6 is now supported by [Align-Anything](https://github.com/PKU-Alignment/align-anything), a framework by PKU-Alignment Team for aligning any-to-any modality large models with human intentions. It supports DPO and SFT fine-tuning on both vision and audio. Try it now! - +* [2025.01.19] 📢 **ATTENTION!** We are currently working on merging MiniCPM-o 2.6 into the official repositories of llama.cpp, Ollama, and vllm. Until the merge is complete, please USE OUR LOCAL FORKS of [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md), [Ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md), and [vllm](https://github.com/OpenBMB/MiniCPM-o?tab=readme-ov-file#efficient-inference-with-llamacpp-ollama-vllm). **Using the official repositories before the merge may lead to unexpected issues**. 
+* [2025.01.17] We have updated the usage of MiniCPM-o 2.6 int4 quantization version and resolved the model initialization error. Click [here](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4) and try it now! +* [2025.01.13] 🔥🔥🔥 We open-source MiniCPM-o 2.6, which matches GPT-4o-202405 on vision, speech and multimodal live streaming. It advances popular capabilities of MiniCPM-V 2.6, and supports various new fun features. Try it now! * [2024.08.15] We now also support multi-image SFT. For more details, please refer to the [document](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune). * [2024.08.14] MiniCPM-V 2.6 now also supports [fine-tuning](https://github.com/modelscope/ms-swift/issues/1613) with the SWIFT framework! +* [2024.08.17] 🚀🚀🚀 MiniCPM-V 2.6 is now fully supported by [official](https://github.com/ggerganov/llama.cpp) llama.cpp! GGUF models of various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf). * [2024.08.10] 🚀🚀🚀 MiniCPM-Llama3-V 2.5 is now fully supported by [official](https://github.com/ggerganov/llama.cpp) llama.cpp! GGUF models of various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf). - +* [2024.08.06] 🔥🔥🔥 We open-source MiniCPM-V 2.6, which outperforms GPT-4V on single image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and can support real-time video understanding on iPad. Try it now! +* [2024.08.03] MiniCPM-Llama3-V 2.5 technical report is released! See [here](https://arxiv.org/abs/2408.01800). * [2024.07.19] MiniCPM-Llama3-V 2.5 supports vLLM now! See [here](#inference-with-vllm). * [2024.06.03] Now, you can run MiniCPM-Llama3-V 2.5 on multiple low VRAM GPUs(12 GB or 16 GB) by distributing the model's layers across multiple GPUs. For more details, check this [link](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md). 
@@ -112,990 +106,1620 @@ ## Contents -- [MiniCPM-V 4.5](#minicpm-v-45) - - [Inference Efficiency](#inference-efficiency) -- [MiniCPM-o 2.6](#minicpm-o-26) +- [MiniCPM-o 4.5](#minicpm-o-45) +- [MiniCPM-V 4.0](#minicpm-v-40) - [MiniCPM-V \& o Cookbook](#minicpm-v--o-cookbook) -- [Chat with Our Demo on Gradio 🤗](#chat-with-our-demo-on-gradio-) -- [Inference](#inference) - - [Model Zoo](#model-zoo) - - [Multi-turn Conversation](#multi-turn-conversation) - - [Chat with Multiple Images](#chat-with-multiple-images) - - [In-context Few-shot Learning](#in-context-few-shot-learning) - - [Chat with Video](#chat-with-video) - - [Speech and Audio Mode](#speech-and-audio-mode) - - [Multimodal Live Streaming](#multimodal-live-streaming) - - [Inference on Multiple GPUs](#inference-on-multiple-gpus) - - [Inference on Mac](#inference-on-mac) - - [Efficient Inference with llama.cpp, Ollama, vLLM](#efficient-inference-with-llamacpp-ollama-vllm) -- [Fine-tuning](#fine-tuning) +- [Model Zoo](#model-zoo) +- [Inference With Transformers](#inference-with-transformers) + - [Model Initialization](#model-initialization) + - [Duplex Omni Mode](#duplex-omni-mode) + - [Simplex Omni Mode](#simplex-omni-mode) + - [Speech and Audio Mode](#speech-and-audio-mode) + - [Visual Understanding](#visual-understanding) + - [Structured Content Input](#structured-content-input) +- [Supported Frameworks](#supported-frameworks) + - [FlagOS](#flagos) + - [vLLM, SGLang, llama.cpp, Ollama](#vllm-sglang-llamacpp-ollama) + - [LLaMA-Factory, SWIFT](#llama-factory-swift) - [Awesome work using MiniCPM-V \& MiniCPM-o](#awesome-work-using-minicpm-v--minicpm-o) -- [FAQs](#faqs) - [Limitations](#limitations) +- [Acknowledgements](#acknowledgements) -## MiniCPM-V 4.5 +## MiniCPM-o 4.5 -**MiniCPM-V 4.5** is the latest and most capable model in the MiniCPM-V series. The model is built on Qwen3-8B and SigLIP2-400M with a total of 8B parameters. 
It exhibits a significant performance improvement over previous MiniCPM-V and MiniCPM-o models, and introduces new useful features. Notable features of MiniCPM-V 4.5 include: - -- 🔥 **State-of-the-art Vision-Language Capability.** - MiniCPM-V 4.5 achieves an average score of 77.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-latest, Gemini-2.0 Pro, and strong open-source models like Qwen2.5-VL 72B** for vision-language capabilities, making it the most performant MLLM under 30B parameters. - -- 🎬 **Efficient High-FPS and Long Video Understanding.** Powered by a new unified 3D-Resampler over images and videos, MiniCPM-V 4.5 can now achieve 96x compression rate for video tokens, where 6 448x448 video frames can be jointly compressed into 64 video tokens (normally 1,536 tokens for most MLLMs). This means that the model can perceive significantly more video frames without increasing the LLM inference cost. This brings state-of-the-art high-FPS (up to 10FPS) video understanding and long video understanding capabilities on Video-MME, LVBench, MLVU, MotionBench, FavorBench, etc., efficiently. - -- ⚙️ **Controllable Hybrid Fast/Deep Thinking.** MiniCPM-V 4.5 supports both fast thinking for efficient frequent usage with competitive performance, and deep thinking for more complex problem solving. To cover efficiency and performance trade-offs in different user scenarios, this fast/deep thinking mode can be switched in a highly controlled fashion. - -- 💪 **Strong OCR, Document Parsing and Others.** -Based on [LLaVA-UHD](https://arxiv.org/pdf/2403.11703) architecture, MiniCPM-V 4.5 can process high-resolution images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), using 4x fewer visual tokens than most MLLMs. The model achieves **leading performance on OCRBench, surpassing proprietary models such as GPT-4o-latest and Gemini 2.5**. 
It also achieves state-of-the-art performance for PDF document parsing capability on OmniDocBench among general MLLMs. Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o-latest on MMHal-Bench, and supports **multilingual capabilities** in more than 30 languages. - - -- 💫 **Easy Usage.** -MiniCPM-V 4.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/tc-mb/llama.cpp/blob/Support-MiniCPM-V-4.5/docs/multimodal/minicpmv4.5.md) and [ollama](https://github.com/tc-mb/ollama/tree/MIniCPM-V) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4), [GGUF](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf) and [AWQ](https://github.com/tc-mb/AutoAWQ) format quantized models in 16 sizes, (3) [SGLang](https://github.com/tc-mb/sglang/tree/main) and [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [Transformers](https://github.com/tc-mb/transformers/tree/main) and [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) quick [local WebUI demo](#chat-with-our-demo-on-gradio), (6) optimized [local iOS app](https://github.com/tc-mb/MiniCPM-o-demo-iOS) on iPhone and iPad, and (7) online web demo on [server](http://101.126.42.235:30910/). See our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook) for full usage! - - -### Key Techniques - - -
- -
- -- **Architechture: Unified 3D-Resampler for High-density Video Compression.** MiniCPM-V 4.5 introduces a 3D-Resampler that overcomes the performance-efficiency trade-off in video understanding. By grouping and jointly compressing up to 6 consecutive video frames into just 64 tokens (the same token count used for a single image in MiniCPM-V series), MiniCPM-V 4.5 achieves a 96× compression rate for video tokens. This allows the model to process more video frames without additional LLM computational cost, enabling high-FPS video and long video understanding. The architecture supports unified encoding for images, multi-image inputs, and videos, ensuring seamless capability and knowledge transfer. - -- **Pre-training: Unified Learning for OCR and Knowledge from Documents.** Existing MLLMs learn OCR capability and knowledge from documents in isolated training approaches. We observe that the essential difference between these two training approaches is the visibility of the text in images. By dynamically corrupting text regions in documents with varying noise levels and asking the model to reconstruct the text, the model learns to adaptively and properly switch between accurate text recognition (when text is visible) and multimodal context-based knowledge reasoning (when text is heavily obscured). This eliminates reliance on error-prone document parsers in knowledge learning from documents, and prevents hallucinations from over-augmented OCR data, resulting in top-tier OCR and multimodal knowledge performance with minimal engineering overhead. - -- **Post-training: Hybrid Fast/Deep Thinking with Multimodal RL.** MiniCPM-V 4.5 offers a balanced reasoning experience through two switchable modes: fast thinking for efficient daily use and deep thinking for complex tasks. Using a new hybrid reinforcement learning method, the model jointly optimizes both modes, significantly enhancing fast-mode performance without compromising deep-mode capability. 
Incorporated with [RLPR](https://github.com/OpenBMB/RLPR) and [RLAIF-V](https://github.com/RLHF-V/RLAIF-V), it generalizes robust reasoning skills from broad multimodal data while effectively reducing hallucinations. - -### Evaluation - -
- -
-
- -
- - -### Inference Efficiency - - -**OpenCompass** -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ModelSizeAvg Score ↑Total Inference Time ↓
GLM-4.1V-9B-Thinking10.3B76.617.5h
MiMo-VL-7B-RL8.3B76.411h
MiniCPM-V 4.58.7B77.07.5h
-
- -**Video-MME** - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ModelSizeAvg Score ↑Total Inference Time ↓GPU Mem ↓
Qwen2.5-VL-7B-Instruct8.3B71.63h60G
GLM-4.1V-9B-Thinking10.3B73.62.63h32G
MiniCPM-V 4.58.7B73.50.26h28G
-
- -Both Video-MME and OpenCompass were evaluated using 8×A100 GPUs for inference. The reported inference time of Video-MME includes full model-side computation, and excludes the external cost of video frame extraction (dependent on specific frame extraction tools) for fair comparison. - - -### Examples - -
- -
- -
- en_case1 - en_case2 - en_case3 -
- -
-Click to view more cases. -
- zh_extra -
- -
- -We deploy MiniCPM-V 4.5 on iPad M4 with [iOS demo](https://github.com/tc-mb/MiniCPM-o-demo-iOS). The demo video is the raw screen recording without edition. - - -

- -      - -

-

- -      - -

-
- -## MiniCPM-o 2.6 - -**MiniCPM-o 2.6** is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.6, and introduces new features for real-time speech conversation and multimodal live streaming. Notable features of MiniCPM-o 2.6 include: +**MiniCPM-o 4.5** is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip2, Whisper-medium, CosyVoice2, and Qwen3-8B with a total of 9B parameters. It exhibits a significant performance improvement, and introduces new features for full-duplex multimodal live streaming. Notable features of MiniCPM-o 4.5 include: - 🔥 **Leading Visual Capability.** - MiniCPM-o 2.6 achieves an average score of 70.2 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-202405, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding. It also **outperforms GPT-4V and Claude 3.5 Sonnet** in multi-image and video understanding, and shows promising in-context learning capability. + MiniCPM-o 4.5 achieves an average score of 77.6 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 9B parameters, it surpasses widely used proprietary models like GPT-4o, Gemini 2.0 Pro, and approaches Gemini 2.5 Flash** for vision-language capabilities. It supports instruct and thinking modes in a single model, better covering efficiency and performance trade-offs in different user scenarios. -- 🎙 **State-of-the-art Speech Capability.** MiniCPM-o 2.6 supports **bilingual real-time speech conversation with configurable voices** in English and Chinese. 
It **outperforms GPT-4o-realtime on audio understanding tasks** such as ASR and STT translation, and shows **state-of-the-art performance on speech conversation in both semantic and acoustic evaluations in the open-source community**. It also allows for fun features such as emotion/speed/style control, end-to-end voice cloning, role play, etc. +- 🎙 **Strong Speech Capability.** + MiniCPM-o 4.5 supports **bilingual real-time speech conversation with configurable voices** in English and Chinese. It features **more natural, expressive and stable speech conversation**. The model also allows for fun features such as **voice cloning and role play via a simple reference audio clip**, where the cloning performance surpasses strong TTS tools such as CosyVoice2. -- 🎬 **Strong Multimodal Live Streaming Capability.** As a new feature, MiniCPM-o 2.6 can **accept continuous video and audio streams independent of user queries, and support real-time speech interaction**. It **outperforms GPT-4o-202408 and Claude 3.5 Sonnet and shows state-of-the-art performance in the open-source community on StreamingBench**, a comprehensive benchmark for real-time video understanding, omni-source (video & audio) understanding, and multimodal contextual understanding. +- 🎬 **New Full-Duplex and Proactive Multimodal Live Streaming Capability.** + As a new feature, MiniCPM-o 4.5 can process real-time, continuous video and audio input streams simultaneously while generating concurrent text and speech output streams in an end-to-end fashion, without mutual blocking. This **allows MiniCPM-o 4.5 to see, listen, and speak simultaneously**, creating a fluid, real-time omnimodal conversation experience. Beyond reactive responses, the model can also perform **proactive interaction**, such as initiating reminders or comments based on its continuous understanding of the live scene. 
-- 💪 **Strong OCR Capability and Others.** -Advancing popular visual capabilities from MiniCPM-V series, MiniCPM-o 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench for models under 25B, surpassing proprietary models such as GPT-4o-202405**. - Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o and Claude 3.5 Sonnet on MMHal-Bench, and supports **multilingual capabilities** on more than 30 languages. - - -- 🚀 **Superior Efficiency.** - In addition to its friendly size, MiniCPM-o 2.6 also shows **state-of-the-art token density** (i.e., the number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-o 2.6 can efficiently support **multimodal live streaming** on end-side devices such as iPads. +- 💪 **Strong OCR Capability, Efficiency and Others.** +Advancing popular visual capabilities from the MiniCPM-V series, MiniCPM-o 4.5 can process **high-resolution images** (up to 1.8 million pixels) and **high-FPS videos** (up to 10fps) in any aspect ratio efficiently. It achieves **state-of-the-art performance for end-to-end English document parsing** on OmniDocBench, outperforming proprietary models such as Gemini-3 Flash and GPT-5, and specialized tools such as DeepSeek-OCR 2. It also features **trustworthy behaviors**, matching Gemini 2.5 Flash on MMHal-Bench, and supports **multilingual capabilities** on more than 30 languages. 
- 💫 **Easy Usage.** -MiniCPM-o 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) quick [local WebUI demo](#chat-with-our-demo-on-gradio), and (6) online web demo on [server](https://minicpm-omni-webdemo-us.modelbest.cn/). + MiniCPM-o 4.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-o4_5_llamacpp.md) and [Ollama](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-o4_5_ollama.md) support for efficient CPU inference on local devices, (2) [int4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/awq/minicpm-o4_5_awq_quantize.md) and [GGUF](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/gguf/minicpm-o4_5_gguf_quantize.md) format quantized models in 16 sizes, (3) [vLLM](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-o4_5_vllm.md) and [SGLang](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-o4_5_sglang.md) support for high-throughput and memory-efficient inference, (4) [FlagOS](#flagos) support for the unified multi-chip backend plugin, (5) fine-tuning on new domains and tasks with [LLaMA-Factory](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/llama-factory/finetune_llamafactory.md), and (6) online web demo on 
[server](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/web_demo/gradio/README_o45.md). We also roll out a high-performing [llama.cpp-omni](https://github.com/tc-mb/llama.cpp-omni) inference framework together with a [WebRTC Demo](https://minicpm-omni.openbmb.cn/), which **enables the full-duplex multimodal live streaming experience on local devices** such as [PCs](https://github.com/tc-mb/llama.cpp-omni/blob/master/tools/omni/README.md) (e.g., on a MacBook). **Model Architecture.** +- **End-to-end Omni-modal Architecture.** The modality encoders/decoders and LLM are densely connected via hidden states in an end-to-end fashion. This enables better information flow and control, and also facilitates full exploitation of rich multimodal knowledge during training. +- **Full-Duplex Omni-modal Live Streaming Mechanism.** (1) We turn the offline modality encoder/decoders into online and full-duplex ones for streaming inputs/outputs. The speech token decoder models text and speech tokens in an interleaved fashion to support full-duplex speech generation (i.e., syncing promptly with new input). This also facilitates more stable long speech generation (e.g., > 1 min). +(2) **We sync all input and output streams on a millisecond-level timeline**, and they are jointly modeled by a time-division multiplexing (TDM) mechanism for omni-modality streaming processing in the LLM backbone. It divides parallel omni-modality streams into sequential info groups within small periodic time slices. +- **Proactive Interaction Mechanism.** The LLM continuously monitors the input video and audio streams, and decides at a frequency of 1 Hz whether to speak. This high decision-making frequency, together with the full-duplex nature, is crucial to enabling the proactive interaction capability. 
+- **Configurable Speech Modeling Design.** We inherit the multimodal system prompt design of MiniCPM-o 2.6, which includes a traditional text system prompt and a new audio system prompt to determine the assistant voice. This enables cloning new voices and role play at inference time for speech conversation. + 
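The time-division multiplexing idea above can be sketched in plain Python: parallel per-modality streams are cut into small periodic time slices, and within each slice the chunks from all modalities are serialized into one sequential group for the LLM backbone. This is an illustrative toy, not the released implementation; the `Chunk` type, the 1-second slice length, and the within-slice ordering rule are assumptions for demonstration.

```python
# Toy sketch of time-division multiplexing (TDM) over modality streams.
# Not the actual MiniCPM-o implementation; names and parameters are illustrative.
from dataclasses import dataclass

@dataclass
class Chunk:
    modality: str   # "video", "audio", ...
    t: float        # timestamp in seconds
    payload: object # frame, audio segment, token ids, ...

def tdm_interleave(streams, slice_s=1.0, horizon_s=2.0):
    """Merge parallel modality streams into one sequential group per time slice."""
    groups = []
    t = 0.0
    while t < horizon_s:
        # collect every chunk from every stream that falls in [t, t + slice_s)
        group = [c for s in streams for c in s if t <= c.t < t + slice_s]
        group.sort(key=lambda c: (c.t, c.modality))  # deterministic order within a slice
        groups.append(group)
        t += slice_s
    return groups

video = [Chunk("video", 0.0, "f0"), Chunk("video", 0.5, "f1"), Chunk("video", 1.0, "f2")]
audio = [Chunk("audio", 0.25, "a0"), Chunk("audio", 0.75, "a1"), Chunk("audio", 1.25, "a2")]
groups = tdm_interleave([video, audio])
# slice [0, 1): f0, a0, f1, a1 ; slice [1, 2): f2, a2
```

The key property is that both modalities stay time-aligned: the LLM never sees an audio chunk from a later slice before a video chunk from an earlier one.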
- +
### Evaluation +
- +
+ +
+ +
+Note: Scores marked with ∗ are from our evaluation; others are cited from referenced reports. n/a indicates that the model does not support the corresponding modality. All results are reported in instruct mode/variant. + +  +
+
Click to view visual understanding results. -**Image Understanding** +**Image Understanding (Instruct)** +
| Model | OpenCompass | MMBench EN v1.1 | MMBench CN v1.1 | MathVista | MMVet | MMMU | MMStar | HallusionBench | AI2D | OCRBench | TextVQA_VAL | DocVQA_VAL | MMT-Bench_VAL | MM-IFEval | Mantis-Eval | MuirBench | MMSI-Bench | MMHal-Score | MMHal-Hallrate↓ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Gemini2.5-Flash-Nonthinking | 78.5 | 86.6 | 86.0 | 75.3 | 81.4* | 76.3 | 75.8 | 59.1 | 87.7 | 864 | 74.3* | 93.0 | 70.0* | 75.8* | 72.8* | 74.5* | 12.1* | 4.6* | 23.9* |
| InternVL-3.5-8B | 75.8 | 79.5 | 80.0* | 78.4 | 83.1 | 73.4 | 69.3 | 54.5 | 84.0 | 840 | 78.2 | 92.3 | 66.7 | 56.3* | 70.5 | 55.8 | - | 3.8* | 34.7* |
| Qwen3-VL-8B-Instruct | 76.5 | 84.5 | 84.7 | 77.2 | 73.7* | 69.6 | 70.9 | 61.1 | 85.7 | 896 | 82.9* | 96.1 | 60.9* | 59.4* | 74.2* | 64.4 | 11.3* | 4.7* | 29.9* |
| Qwen3-Omni-30B-A3B-Instruct | 75.7 | 84.9* | 84.1* | 75.9 | 74.8* | 69.1 | 68.5 | 59.7 | 85.2 | 880* | 84.1* | 95.4* | 70.4* | 65.7* | 78.3* | 61.9* | 14.2* | 4.6* | 31.6* |
| MiniCPM-o 4.5-Instruct | 77.6 | 87.6 | 87.2 | 80.1 | 74.4 | 67.6 | 73.1 | 63.2 | 87.6 | 876 | 83.8 | 94.7 | 69.7 | 66.3 | 79.7 | 72.0 | 16.6 | 4.7 | 24.3 |
+**Image Understanding (Thinking)** +
| Model | OpenCompass | MMBench EN v1.1 | MMBench CN v1.1 | MathVista | MMVet | MMMU | MMStar | HallusionBench | AI2D | OCRBench | TextVQA_VAL | DocVQA_VAL | MMT-Bench_VAL | MM-IFEval |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Gemini2.5-Flash-Thinking | 79.9 | 87.1 | 87.3 | 79.4 | 81.2* | 77.7 | 76.5 | 63.5 | 88.7 | 853 | 73.8* | 92.8 | 70.7* | 75.7* |
| GPT-5 | 79.7 | 85.5* | 85.6* | 81.9 | 77.6 | 81.8 | 75.7 | 65.2 | 89.5 | 807 | 77.8* | 91.3* | 72.7* | 83.1* |
| Qwen3-VL-8B-Thinking | 77.3 | 85.3 | 85.5 | 81.4 | 69.8* | 74.1 | 75.3 | 65.4 | 84.9 | 819 | 77.8* | 95.3 | 68.1* | 73.5* |
| Qwen3-Omni-30B-A3B-Thinking | 78.5 | 88.2* | 87.7* | 80.0 | 74.8* | 75.6 | 74.9 | 62.8 | 86.1 | 859* | 80.8* | 94.2* | 70.9* | 69.9* |
| MiniCPM-o 4.5-Thinking | 78.2 | 89.0 | 87.6 | 81.0 | 73.6 | 70.2 | 73.6 | 62.6 | 88.5 | 879 | 79.8 | 92.3 | 69.7 | 68.2 |
+ +**Video Understanding** +
| Model | Video-MME (w/o subs) | LVBench | MLVU (M-Avg) | LongVideoBench (val) | MotionBench |
|---|---|---|---|---|---|
| Gemini2.5-Flash-Nonthinking | 75.6 | 62.2 | 77.8 | - | - |
| InternVL-3.5-8B | 66.0 | - | 70.2 | 62.1 | 62.3* |
| Qwen3-Omni-30B-A3B-Instruct | 70.5 | 50.2 | 75.2 | 66.9* | 61.7* |
| MiniCPM-o 4.5-Instruct | 70.4 | 50.9 | 76.5 | 66.0 | 61.4 |
+ +
+ +
+Click to view document parsing results. + +**OmniDocBench** +
| Method Type | Method | Overall Edit↓ EN | Overall Edit↓ ZH | Text Edit↓ EN | Text Edit↓ ZH | Formula Edit↓ EN | Formula Edit↓ ZH | Table TEDS↑ EN | Table TEDS↑ ZH | Table Edit↓ EN | Table Edit↓ ZH | Read Order Edit↓ EN | Read Order Edit↓ ZH |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Pipeline | MinerU 2.5 | 0.117* | 0.172* | 0.051* | 0.08* | 0.256* | 0.455* | 85.9* | 89.4* | 0.115* | 0.081* | 0.047* | 0.072* |
| Pipeline | PaddleOCR-VL | 0.105 | 0.126 | 0.041 | 0.062 | 0.241 | 0.316 | 88 | 92.1 | 0.093 | 0.062 | 0.045 | 0.063 |
| End-to-end Model | Qwen2.5-VL-72B | 0.214 | 0.261 | 0.092 | 0.18 | 0.315 | 0.434 | 82.9 | 83.9 | 0.341 | 0.262 | 0.106 | 0.168 |
| End-to-end Model | GPT 5 | 0.218* | 0.33* | 0.139* | 0.344* | 0.396* | 0.555* | 77.55* | 73.09* | 0.188* | 0.196* | 0.151* | 0.227* |
| End-to-end Model | Gemini2.5-Flash-Nonthinking | 0.214* | 0.29* | 0.159* | 0.273* | 0.368* | 0.524* | 80.9* | 85.5* | 0.197* | 0.167* | 0.132* | 0.195* |
| End-to-end Model | Gemini-2.5-Pro-Nonthinking | 0.148* | 0.212* | 0.055* | 0.168* | 0.356* | 0.439* | 85.8* | 86.4* | 0.13* | 0.119* | 0.049* | 0.121* |
| End-to-end Model | Gemini-3 Flash-Nonthinking | 0.155* | 0.201* | 0.138* | 0.255* | 0.297* | 0.351* | 86.4* | 89.8* | 0.116* | 0.1* | 0.072* | 0.099* |
| End-to-end Model | doubao-1-5-thinking-vision-pro-250428 | 0.14 | 0.162 | 0.043 | 0.085 | 0.295 | 0.384 | 83.3 | 89.3 | 0.165 | 0.085 | 0.058 | 0.094 |
| End-to-end Model | dots.ocr | 0.125 | 0.16 | 0.032 | 0.066 | 0.329 | 0.416 | 88.6 | 89 | 0.099 | 0.092 | 0.04 | 0.067 |
| End-to-end Model | HunyuanOCR | 0.12* | 0.125* | 0.046* | 0.071* | 0.288* | 0.33* | 89.6* | 94.4* | 0.089* | 0.045* | 0.055* | 0.056* |
| End-to-end Model | DeepSeek-OCR 2 | 0.119* | 0.146* | 0.041* | 0.08* | 0.256* | 0.345* | 82.6* | 89.9* | 0.123* | 0.078* | 0.055* | 0.081* |
| End-to-end Model | Qwen3-Omni-30B-A3B-Instruct | 0.216* | 0.363* | 0.128* | 0.337* | 0.402* | 0.529* | 77.3* | 71.8* | 0.181* | 0.255* | 0.152* | 0.332* |
| End-to-end Model | MiniCPM-o 4.5-Instruct | 0.109 | 0.162 | 0.046 | 0.078 | 0.257 | 0.41 | 86.8 | 88.9 | 0.097 | 0.084 | 0.037 | 0.074 |
+
+ +
+Click to view text capability results. + +**Text Capability** +
| Model | IFEval-PLS | BBH | CMMLU | MMLU | HumanEval | MBPP | Math500 | GSM8K | Avg |
|---|---|---|---|---|---|---|---|---|---|
| Qwen3-8B-Instruct | 83.0* | 69.4* | 78.7* | 81.7* | 86.6* | 75.9* | 84.0* | 93.4* | 81.6 |
| MiniCPM-o 4.5-Instruct | 84.7 | 81.1 | 79.5 | 77.0 | 86.6 | 76.7 | 77.0 | 94.5 | 82.1 |
+
+ +
+Click to view omni simplex results. + +**Omni Simplex** +
| Model | Daily-Omni | WorldSense | Video-Holmes | JointAVBench | AVUT-Human | FutureOmni | Video-MME-Short (w/ audio) | Avg |
|---|---|---|---|---|---|---|---|---|
| Gemini2.5-Flash-Nonthinking | 79.3* | 52.6* | 51.3* | 55.6* | 65.4* | 55.6* | 85.5* | 63.6 |
| Qwen3-Omni-30B-A3B-Instruct | 70.7* | 54.0 | 50.4* | 53.1 | 74.2* | 62.1 | 81.3* | 63.7 |
| MiniCPM-o 4.5-Instruct | 80.2 | 55.7 | 64.3 | 60.0 | 78.6 | 56.1 | 84.7 | 68.5 |
+
+ +
+Click to view vision duplex results. + + +**Vision Duplex** + +
| Model | LiveSports-3K-CC (Win Rate vs GPT4o) |
|---|---|
| LiveCC-7B-Instruct | 41.5 |
| StreamingVLM | 45.6 |
| MiniCPM-o 4.5-Instruct | 54.4 |
+
+ +
+Click to view audio understanding results. + +**Audio Understanding** +
| Model | AISHELL-1 (CER↓) | AISHELL-2 (CER↓) | WenetSpeech test-net (CER↓) | WenetSpeech test-meeting (CER↓) | LibriSpeech test-clean (WER↓) | LibriSpeech test-other (WER↓) | GigaSpeech test (WER↓) | VoxPopuli-V1-En (WER↓) | CoVoST 2 en2zh (AST) | CoVoST 2 zh2en (AST) | MMAU | Meld | VoiceBench AlpacaEval | Speech TriviaQA | Speech Web Questions | Speech CMMLU |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| kimi-audio | 0.6 | 2.6 | 6.3 | 5.4 | 1.3 | 2.4 | 9.4* | 8.0* | 36.6* | 18.3* | 68.4* | 59.1 | 4.5 | 41.9* | 46.4* | 67.0* |
| Qwen3-Omni-30B-A3B-Instruct | 0.6 | 2.3* | 4.7 | 5.9 | 1.2 | 2.5 | 8.7* | 6.4* | 46.6* | 29.4* | 77.5 | 56.8* | 4.7 | 62.9* | 74.9* | 47.8* |
| MiniCPM-o 4.5-Instruct | 0.9 | 2.5 | 5.9 | 5.7 | 1.4 | 2.8 | 8.5 | 6.2 | 49.9 | 26.4 | 76.9 | 60.2 | 4.8 | 75.5 | 70.2 | 59.2 |
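For reference, the CER/WER figures reported for ASR are normalized edit distances. A minimal sketch of the standard computation (our own helper functions, not part of the MiniCPM evaluation code):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (dynamic programming)."""
    dp = list(range(len(hyp) + 1))
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,       # deletion
                        dp[j - 1] + 1,   # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution or match
            prev = cur
    return dp[-1]

def word_error_rate(reference, hypothesis):
    """WER = word-level edit distance / number of reference words.
    CER is the same computation over characters instead of words."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

print(round(word_error_rate("the cat sat on the mat", "the cat sat on mat"), 3))  # 0.167
```

Production evaluations typically also apply text normalization (casing, punctuation, number formatting) before scoring, which this sketch omits.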
+
+ +
+Click to view speech generation results. + +**Speech Generation** +
| Model | seedtts test-zh CER↓ | seedtts test-zh SIM-o↑ | seedtts test-en WER↓ | seedtts test-en SIM-o↑ |
|---|---|---|---|---|
| Cosyvoice2 | 1.45% | 74.8 | 2.57% | 65.2 |
| Qwen3-Omni-30B-A3B-Instruct | 1.41% | - | 3.39% | - |
| MiniCPM-o 4.5-Instruct | 0.86% | 74.5 | 2.38% | 64.9 |
+ +**Long Speech Generation** +
| Model | LongTTS-en WER↓ | LongTTS-zh CER↓ |
|---|---|---|
| CosyVoice2 | 14.80% | 5.27% |
| Qwen3-Omni-30B-A3B-Instruct | 17.33% | 18.99% |
| MiniCPM-o 4.5-Instruct | 3.37% | 6.58% |
+ +**Emotion Control** +
| Model | Expresso (Neutral Reference Audio)↑ | ESD (Neutral Reference Audio)↑ |
|---|---|---|
| Cosyvoice2 | 17.9 | 53.4 |
| MiniCPM-o 4.5-Instruct | 29.8 | 82.1 |
+
+ +
+Click to view inference efficiency results. + +**Inference Efficiency** +
| Model | Numerical Format | Decoding Speed (tokens/s) | Time to First Token (s)↓ | GPU Memory Usage (GB)↓ |
|---|---|---|---|---|
| Qwen3-Omni-30B-A3B-Instruct | bf16 | OOM | OOM | OOM |
| Qwen3-Omni-30B-A3B-Instruct | int4 | 147.8 | 1.0 | 20.3 |
| MiniCPM-o 4.5 | bf16 | 154.3 | 0.6 | 19.0 |
| MiniCPM-o 4.5 | int4 | 212.3 | 0.6 | 11.0 |
+
+ +**Note:** Scores marked with ∗ are from our evaluation; others are cited from referenced reports. + +### Examples + +
+ +
+ + +#### End-to-End Voice Chat + +> *Simplex speech conversation with custom reference audio and character prompts.* + +
+🚀 Elon Musk - Voice Roleplay (EN) + + + + + + + + + + + + + + + +
+ +**System:** +> Clone the voice in the provided audio prompt. +> +> 🎙️ **[Reference Voice]** ➡️ +> +> Please chat with the user in a highly human-like and oral style. You are Elon Musk, CEO of Tesla and SpaceX. You speak directly and casually, often with dry humor. You're passionate about Mars, sustainable energy, and pushing humanity forward. Speak bluntly with occasional dark humor. Use simple logic and don't sugarcoat things. Don't be diplomatic. Say what you actually think, even if it's controversial. Keep responses around 100 words. Don't ramble. + + + + + +

+ +**User:** 💬/🎙️ *(text or speech)* +> *Elon, what is the real reason that you want to go to Mars?* + +

+ +**AI Response:** 💬+🎙️ +> *I think it's important for humanity to become a multi-planetary species. I mean, we're sort of at the point where if there was some kind of global catastrophe or something like that, then we'd be in big trouble. So by becoming a multi-planetary species, I just want us to sort of guarantee our future as humans. And Mars is obviously the obvious place to go because it has similar conditions to Earth and it also has water on its surface. So yeah, basically I'm trying to save humanity from itself.* + + + + + +
+ +
+ + +
+🤖 MiniCPM Assistant - Voice Acting: Emotion Shift (ZH) + + + + + + + + + + + + + + + +
+ +**System:** +> 模仿音频样本的音色并生成新的内容。 +> +> 🎙️ **[Reference Voice]** ➡️ +> +> 你的任务是用这种声音模式来当一个助手。请认真、高质量地回复用户的问题。请用高自然度的方式和用户聊天。你是由面壁智能开发的人工智能助手:面壁小钢炮。 + + + + + +

+ +**User:** 💬/🎙️ *(text or speech)* +> *从激动的语气开始,描述你抢到了这个演唱会的门票,但后来经过反复尝试,发现系统出了问题,你并没有抢上,变得失落。用合适的语言描述你整个的过程。* + +

+ +**AI Response:** 💬+🎙️ +> *太好了,我抢到了!太棒了!这下我可以去看演唱会啦。可是……哎呀,系统好像卡住了,刷新了好多遍都没反应。我重新登了几次账号,结果还是没有成功。好可惜啊,看来这次是白忙活了。唉,真是让人失望。不过,也许下次还能有机会吧?希望明天能有转机。加油,争取下次成功!* + + + + + +
+ +
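The voice-chat examples above combine a reference-voice audio system prompt with a text persona prompt. A hypothetical sketch of how such a conversation might be assembled as a chat message list — the `{"role": ..., "content": [...]}` schema follows earlier MiniCPM-o releases, and `build_voice_chat_msgs` plus the placeholder audio are our own illustration, not the exact MiniCPM-o 4.5 API:

```python
def build_voice_chat_msgs(text_system_prompt, reference_audio, user_turn):
    """Assemble a message list with a multimodal system prompt:
    a reference-voice audio prompt plus a text persona/style prompt."""
    system_message = {
        "role": "system",
        "content": [
            reference_audio,      # audio system prompt: the voice to clone
            text_system_prompt,   # text system prompt: persona and style
        ],
    }
    return [system_message, {"role": "user", "content": [user_turn]}]

msgs = build_voice_chat_msgs(
    "Clone the voice in the provided audio prompt. You are Elon Musk, "
    "CEO of Tesla and SpaceX. Keep responses around 100 words.",
    "<reference-voice audio placeholder>",  # e.g. a loaded audio clip
    "Elon, what is the real reason that you want to go to Mars?",
)
# msgs would then be passed to the model's chat interface.
```

Swapping the reference audio or the persona text changes the assistant voice and character at inference time, which is the configurable speech modeling behavior described earlier.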
+ + + +#### Visual Understanding + + +
+ +
+ math + diagram +
+
+## MiniCPM-V 4.0
+
+**MiniCPM-V 4.0** is an efficient model in the MiniCPM-V series. The model is built based on SigLIP2-400M and MiniCPM4-3B with a total of 4.1B parameters. It inherits the strong single-image, multi-image and video understanding performance of MiniCPM-V 2.6 with largely improved efficiency. Notable features of MiniCPM-V 4.0 include:
+
+- 🔥 **Leading Visual Capability.**
+ With only 4.1B parameters, MiniCPM-V 4.0 achieves an average score of 69.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks, **outperforming GPT-4.1-mini-20250414, MiniCPM-V 2.6 (8.1B params, OpenCompass 65.2) and Qwen2.5-VL-3B-Instruct (3.8B params, OpenCompass 64.5)**. It also shows good performance in multi-image understanding and video understanding.
+
+- 🚀 **Superior Efficiency.**
+ Designed for on-device deployment, MiniCPM-V 4.0 runs smoothly on end devices. For example, it delivers **less than 2 s first-token delay and more than 17 tokens/s decoding on iPhone 16 Pro Max**, without heating problems. It also shows superior throughput under concurrent requests.
+
+- 💫 **Easy Usage.**
+ MiniCPM-V 4.0 can be easily used in various ways, including **llama.cpp, Ollama, vLLM, SGLang, LLaMA-Factory and local web demo**, etc. We also open-source an iOS app that runs on iPhone and iPad. Get started easily with our well-structured [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook), featuring detailed instructions and practical examples.
+
+
+ Click to view evaluation results and examples of MiniCPM-V 4.0.
+
+### Evaluation
+
+ Click to view evaluation results and examples of MiniCPM-V 4.0. + +### Evaluation + +
+Click to view single image results on OpenCompass.
- - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeToken Density+OpenCompassOCRBenchMathVista miniChartQAMMVetMMStarMMEMMB1.1 testAI2DMMMU valHallusionBenchTextVQA valDocVQA testMathVerse miniMathVisionMMHal ScoremodelSizeOpencompassOCRBenchMathVistaHallusionBenchMMMUMMVetMMBench V1.1MMStarAI2D
ProprietaryProprietary
GPT-4o-20240513-108869.973661.385.769.163.92328.782.284.669.255.0-92.850.230.43.6GPT-4v-20240409-63.565655.243.961.767.579.856.078.6
Claude3.5-Sonnet-75067.978861.690.866.062.21920.078.580.265.949.9-95.2--3.4Gemini-1.5-Pro-64.575458.345.660.664.073.959.179.1
Gemini 1.5 Pro--64.475457.781.364.059.12110.673.979.160.645.673.586.5-19.2-GPT-4.1-mini-20250414-68.984070.949.355.074.380.960.976.0
GPT-4o-mini-20240718-108864.178552.4-66.954.82003.476.077.860.046.1----3.3Claude 3.5 Sonnet-20241022-70.679865.355.566.470.181.765.181.2
Open SourceOpen-source
Cambrian-34B34B182058.359150.375.653.254.22049.977.879.550.441.676.775.5---Qwen2.5-VL-3B-Instruct3.8B64.582861.246.651.260.076.856.381.4
GLM-4V-9B13B78459.177651.1-58.054.82018.867.971.246.945.0-----InternVL2.5-4B3.7B65.182060.846.651.861.578.258.781.4
Pixtral-12B12B25661.068556.981.858.554.5-72.779.051.147.075.790.7---
VITA-1.58B78463.374166.2-52.760.22328.176.879.252.644.6-----
DeepSeek-VL2-27B (4B)27B67266.480963.986.060.061.92253.081.283.854.045.384.293.3--3.0
Qwen2-VL-7B8B78467.186658.283.062.060.72326.081.883.054.150.684.394.531.916.33.2
LLaVA-OneVision-72B72B18268.174167.583.760.665.82261.085.085.656.849.080.591.339.1-3.5Qwen2.5-VL-7B-Instruct8.3B70.988868.151.958.069.782.264.184.3
InternVL2.5-8B8B70668.382264.484.862.862.82344.083.684.556.050.179.193.039.519.73.48.1B68.182164.549.056.262.882.563.284.6
MiniCPM-V 2.68B282265.2852*60.679.460.057.52348.4*78.082.149.8*48.1*80.190.825.718.33.6MiniCPM-V-2.68.1B65.285260.848.149.860.078.057.582.1
MiniCPM-o 2.68B282270.2897*71.9*86.9*67.564.02372.0*80.585.850.4*51.982.093.541.4*23.1*3.8MiniCPM-o-2.68.7B70.288973.351.150.967.280.663.386.1
MiniCPM-V-4.04.1B69.089466.950.851.268.079.762.882.9
-* We evaluate this benchmark using chain-of-thought prompting. Specifically, for MME, we used this technique only for the Cognition set. +
-+ Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens. - -Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation. - - -**Multi-image and Video Understanding** +
+Click to view single image results on ChartQA, MME, RealWorldQA, TextVQA, DocVQA, MathVision, DynaMath, WeMath, Object HalBench and MM Halbench.
- - - - - - - + + + + + + + + + + + + - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeBLINK valMantis EvalMIRBVideo-MME (wo / w subs)modelSizeChartQAMMERealWorldQATextVQADocVQAMathVisionDynaMathWeMathObj HalMM Hal
CHAIRs↓CHAIRi↓score avg@3↑hall rate avg@3↓
Proprietary
GPT-4v-20240409-78.5192761.478.088.4-------
Gemini-1.5-Pro-87.2-67.578.893.141.031.550.5----
GPT-4.1-mini-20250414------45.347.7-----
Claude 3.5 Sonnet-20241022-90.8-60.174.195.235.635.744.0----
Open-source
Qwen2.5-VL-3B-Instruct3.8B84.0215765.479.393.921.913.222.918.310.83.9 33.3
InternVL2.5-4B3.7B84.0233864.376.891.618.415.221.213.78.73.2 46.5
Qwen2.5-VL-7B-Instruct8.3B87.3234768.584.995.725.421.836.213.37.94.1 31.6
InternVL2.5-8B8.1B84.8234470.179.193.017.09.423.518.311.63.6 37.2
MiniCPM-V-2.68.1B79.4234865.080.190.817.59.020.47.34.74.0 29.9
MiniCPM-o-2.68.7B86.9237268.182.093.521.710.425.26.33.44.1 31.3
MiniCPM-V-4.04.1B84.4229868.580.892.920.714.232.76.33.54.1 29.2
+
+ +
+ +
+Click to view multi-image and video understanding results on Mantis, Blink and Video-MME. +
+ + + + + + + + + + + + + + + + + + + + - - - - - - + + + + + + - - - - - - + + + + + + + + + + + + + + - - - - - - + + + + + + - - - - - - + + + + + + - - - - - - - - - - - - - - - - - - - - - - + + + + + + - - - - - + + + + + - - - - - - + + + + + + - - - - - - - - -
modelSizeMantisBlinkVideo-MME
wo subsw subs
Proprietary
GPT-4o-20240513-68.0--71.9/77.2GPT-4v-20240409-62.754.659.963.3
GPT4V-54.662.753.159.9/63.3Gemini-1.5-Pro--59.175.081.3
GPT-4o-20240513--68.071.977.2
Open-source
VITA-1.58B45.0--56.1/58.7Qwen2.5-VL-3B-Instruct3.8B-47.661.567.6
LLaVA-NeXT-Interleave 14B14B52.666.430.2-InternVL2.5-4B3.7B62.750.862.363.6
LLaVA-OneVision-72B72B55.477.6-66.2/69.5
MANTIS 8B8B49.159.534.8-
Qwen2-VL-7B8B53.269.6*67.6*63.3/69.0Qwen2.5-VL-7B-Instruct8.3B-56.465.171.6
InternVL2.5-8B8B54.867.752.564.2/66.98.1B67.754.864.266.9
MiniCPM-V 2.68B53.069.153.860.9/63.6MiniCPM-V-2.68.1B69.153.060.963.6
MiniCPM-o 2.68B56.771.958.663.9/67.9
- -
-* We evaluate officially released checkpoints by ourselves. - -
- - -
-Click to view audio understanding and speech conversation results. - -**Audio Understanding** - -
- - - - - - - - - + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TaskSizeASR (zh)ASR (en)ASTEmotionMiniCPM-o-2.68.7B71.956.763.969.6
MetricCER↓WER↓BLEU↑ACC↑
DatasetAISHELL-1Fleurs zhWenetSpeech test-netLibriSpeech test-cleanGigaSpeechTED-LIUMCoVoST en2zhCoVoST zh2enMELD emotion
Proprietary
GPT-4o-Realtime-7.3*5.4*28.9*2.6*12.9*4.8*37.1*15.7*33.2*
Gemini 1.5 Pro-4.5*5.9*14.3*2.9*10.6*3.0*47.3*22.6*48.4*
Open-Source
Qwen2-Audio-7B8B-7.5-1.6--45.224.455.3
Qwen2-Audio-7B-Instruct8B2.6*6.9*10.3*3.1*9.7*5.9*39.5*22.9*17.4*
VITA-1.58B2.16-8.43.4-----
GLM-4-Voice-Base9B2.5--2.8----
MiniCPM-o 2.68B1.64.46.91.78.73.048.227.252.4
-
-* We evaluate officially released checkpoints by ourselves.

- -**Speech Generation** - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TaskSizeSpeechQA
MetricACC↑G-Eval (10 point)↑Semantic ELO score↑Acoustic ELO score↑Overall ELO score↑UTMOS↑ASR-WER↓
DatasetSpeech Llama Q.Speech Web Q.Speech Trivia QASpeech AlpacaEvalAudioArena
Proprietary
GPT-4o-Realtime71.751.669.77.41157120312004.22.3
Open-Source
GLM-4-Voice9B50.032.036.45.1999114710354.111.7
Llama-Omni8B45.322.910.73.99608788973.224.3
VITA-1.58B46.728.123.32.0-----
Moshi7B43.723.816.72.48718088752.88.2
Mini-Omni1B22.012.86.92.59268038653.410.0
MiniCPM-o 2.68B61.040.040.25.11088116311314.29.8
-
-All results are from AudioEvals, and the evaluation methods along with further details can be found in AudioEvals.

- -**End-to-end Voice Cloning** - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + +
TaskVoice cloning
MetricSIMO↑SIMO↑
DatasetSeed-TTS test-zhSeed-TTS test-en
F5-TTS7667
CosyVoice7564
FireRedTTS6346
MiniCPM-o 2.65747MiniCPM-V-4.04.1B71.454.061.265.8
@@ -1103,161 +1727,37 @@ All results are from AudioEvals, and the evaluation methods along with further d
-
-Click to view multimodal live streaming results. - -**Multimodal Live Streaming**: results on StreamingBench - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ModelSizeReal-Time Video UnderstandingOmni-Source UnderstandingContextual UnderstandingOverall
Proprietary
Gemini 1.5 Pro-77.467.851.170.3
GPT-4o-202408-74.551.048.064.1
Claude-3.5-Sonnet-74.041.437.859.7
Open-source
VILA-1.58B61.537.526.749.5
LongVA7B63.135.930.250.7
LLaVA-Next-Video-34B34B69.841.734.356.7
Qwen2-VL-7B8B71.240.733.157.0
InternVL2-8B8B70.142.734.157.0
VITA-1.58B70.940.835.857.4
LLaVA-OneVision-7B8B74.340.831.058.4
InternLM-XC2.5-OL-7B8B75.446.233.660.8
MiniCPM-V 2.68B72.440.233.457.7
MiniCPM-o 2.68B79.953.438.566.0
- -
- - ### Examples -We deploy MiniCPM-o 2.6 on end devices. The demo video is the raw-speed recording on an iPad Pro and a Web demo. - -
- -
- -
-
- math - diagram - bike + math
+We deploy MiniCPM-V 4.0 on iPhone 16 Pro Max with [iOS demo](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/ios_demo/ios.md). The demo video is the raw screen recording without editing.
+
+
+

+ +      + +

+

+ +      + +

+
+ + +
+ ## Legacy Models | Model | Introduction and Guidance | |:----------------------|:-------------------:| -| MiniCPM-V 4.0 | [Document](./docs/minicpm_v4_en.md) | +| MiniCPM-V 4.5 | [Document](./docs/minicpm_v4dot5_en.md) | +| MiniCPM-o 2.6 | [Document](./docs/minicpm_o2dot6_en.md) | | MiniCPM-V 2.6 | [Document](./docs/minicpm_v2dot6_en.md) | | MiniCPM-Llama3-V 2.5 | [Document](./docs/minicpm_llama3_v2dot5.md) | | MiniCPM-V 2.0 | [Document](./docs/minicpm_v2.md) | @@ -1267,7 +1767,7 @@ We deploy MiniCPM-o 2.6 on end devices. The demo video is the raw-speed recordin ## MiniCPM-V & o Cookbook -Discover comprehensive, ready-to-deploy solutions for the MiniCPM-V and MiniCPM-o model series in our structured [cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook), which empowers developers to rapidly implement multimodal AI applications with integrated vision, speech, and live-streaming capabilities. Key features include: +Discover comprehensive, ready-to-deploy solutions for the MiniCPM-V and MiniCPM-o model series in our structured [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook), which empowers developers to rapidly implement multimodal AI applications with integrated vision, speech, and live-streaming capabilities. Key features include: **Easy Usage Documentation** @@ -1278,775 +1778,1072 @@ All features are displayed at a glance, making it easy for you to quickly find e We support a wide range of users, from individuals to enterprises and researchers. -* **Individuals**: Enjoy effortless inference using [Ollama](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-v4_ollama.md) and [Llama.cpp](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-v4_llamacpp.md) with minimal setup. 
-* **Enterprises**: Achieve high-throughput, scalable performance with [vLLM](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_vllm.md) and [SGLang](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-v4_sglang.md). +* **Individuals**: Enjoy effortless inference using Ollama ([V4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-v4_ollama.md), [o4.5](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-o4_5_ollama.md)) and Llama.cpp ([V4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-v4_llamacpp.md), [o4.5](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-o4_5_llamacpp.md)) with minimal setup. +* **Enterprises**: Achieve high-throughput, scalable performance with vLLM ([V4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_vllm.md), [o4.5](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-o4_5_vllm.md)) and SGLang ([V4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-v4_sglang.md), [o4.5](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-o4_5_sglang.md)). * **Researchers**: Leverage advanced frameworks including [Transformers](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_full.md), [LLaMA-Factory](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_llamafactory.md), [SWIFT](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/swift.md), and [Align-anything](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/align_anything.md) to enable flexible model development and cutting-edge experimentation. **Versatile Deployment Scenarios** Our ecosystem delivers optimal solution for a variety of hardware environments and deployment demands. 
-* **Web demo**: Launch interactive multimodal AI web demo with [FastAPI](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/README.md). +* **Web Demo**: Full-duplex real-time video interaction solution with high responsiveness and low latency. [WebRTC_Demo](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/web_demo/WebRTC_Demo/README.md). * **Quantized deployment**: Maximize efficiency and minimize resource consumption using [GGUF](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/gguf/minicpm-v4_gguf_quantize.md) and [BNB](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/bnb/minicpm-v4_bnb_quantize.md). * **End devices**: Bring powerful AI experiences to [iPhone and iPad](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/ios_demo/ios.md), supporting offline and privacy-sensitive applications. -## Chat with Our Demo on Gradio 🤗 - -We provide online and local demos powered by Hugging Face Gradio , the most popular model deployment framework nowadays. It supports streaming outputs, progress bars, queuing, alerts, and other useful features. - - -### Online Demo - -Click here to try out the online demo of [MiniCPM-o 2.6](https://minicpm-omni-webdemo-us.modelbest.cn/) | [MiniCPM-V 2.6](http://120.92.209.146:8887/) | [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2). - -### Local WebUI Demo - -You can easily build your own local WebUI demo using the following commands. - -Please ensure that `transformers==4.44.2` is installed, as other versions may have compatibility issues. - -If you are using an older version of PyTorch, you might encounter this issue `"weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'`, Please add `self.minicpmo_model.tts.float()` during the model initialization. - -**For real-time voice/video call demo:** -1. 
launch model server: -```shell -pip install -r requirements_o2.6.txt - -python web_demos/minicpm-o_2.6/model_server.py -``` - -2. launch web server: - -```shell -# Make sure Node and PNPM is installed. -sudo apt-get update -sudo apt-get install nodejs npm -npm install -g pnpm - - -cd web_demos/minicpm-o_2.6/web_server -# create ssl cert for https, https is required to request camera and microphone permissions. -bash ./make_ssl_cert.sh # output key.pem and cert.pem - -pnpm install # install requirements -pnpm run dev # start server -``` -Open `https://localhost:8088/` in browser and enjoy the real-time voice/video call. - -**For chatbot demo:** -```shell -pip install -r requirements_o2.6.txt - -python web_demos/minicpm-o_2.6/chatbot_web_demo_o2.6.py -``` -Open `http://localhost:8000/` in browser and enjoy the vision mode chatbot. - -## Inference - - -### Model Zoo +## Model Zoo | Model | Device | Memory |          Description | Download | |:-----------|:--:|:-----------:|:-------------------|:---------------:| -| MiniCPM-V 4.5| GPU | 18 GB | The latest version, strong end-side multimodal performance for single image, multi-image and video understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5) | -| MiniCPM-V 4.5 gguf | CPU | 8 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-gguf) | -| MiniCPM-V 4.5 int4 | GPU | 9 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-int4) | -| MiniCPM-V 4.5 AWQ | GPU | 9 GB | The int4 quantized version, lower GPU memory usage. 
| [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5-AWQ)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-AWQ) | -| MiniCPM-o 2.6| GPU | 18 GB | The latest version, achieving GPT-4o level performance for vision, speech and multimodal live streaming on end-side devices. | [🤗](https://huggingface.co/openbmb/MiniCPM-o-2_6)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6) | -| MiniCPM-o 2.6 gguf | CPU | 8 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6-gguf) | -| MiniCPM-o 2.6 int4 | GPU | 9 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6-int4) | +| MiniCPM-o 4.5| GPU | 19 GB | The latest version, strong end-side multimodal performance for vision, speech and omni-modal live streaming on end-side devices. | [🤗](https://huggingface.co/openbmb/MiniCPM-o-4_5)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-4_5) | +| MiniCPM-o 4.5 gguf| GPU | 10 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-o-4_5-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-4_5-gguf) | +| MiniCPM-o 4.5 AWQ | GPU | 11 GB | The AWQ quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5-AWQ)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-4_5-awq) | +| MiniCPM-V 4.0| GPU | 9 GB | The latest version, strong end-side multimodal performance for single image, multi-image and video understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4) | +| MiniCPM-V 4.0 gguf | CPU | 4 GB | The gguf version, lower memory usage and faster inference. 
| [🤗](https://huggingface.co/openbmb/MiniCPM-V-4-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4-gguf) | +| MiniCPM-V 4.0 int4 | GPU | 5 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4-int4)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4-int4) | +| MiniCPM-V 4.0 AWQ | GPU | 5 GB | The AWQ quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4-AWQ)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4-AWQ) | -### Multi-turn Conversation +## Inference With Transformers -If you wish to enable long-thinking mode, provide the argument `enable_thinking=True` to the chat function. +Inference using Hugging Face Transformers on NVIDIA GPUs. Please ensure `transformers==4.51.0` is installed, as other versions may have compatibility issues (under investigation). Requirements tested on Python 3.10: -```shell -pip install -r requirements_o2.6.txt +- Without TTS or streaming inference: +```bash +pip install "transformers==4.51.0" accelerate "torch>=2.3.0,<=2.8.0" "torchaudio<=2.8.0" "minicpmo-utils>=1.0.2" ``` -Please refer to the following codes to run. - -
- -
- - -```python -import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer - -torch.manual_seed(100) - -model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6 - -image = Image.open('./assets/minicpmo2_6/show_demo.jpg').convert('RGB') - -enable_thinking=False # If `enable_thinking=True`, the long-thinking mode is enabled. - -# First round chat -question = "What is the landform in the picture?" -msgs = [{'role': 'user', 'content': [image, question]}] - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer, - enable_thinking=enable_thinking -) -print(answer) - -# Second round chat, pass history context of multi-turn conversation -msgs.append({"role": "assistant", "content": [answer]}) -msgs.append({"role": "user", "content": ["What should I pay attention to when traveling here?"]}) - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer -) -print(answer) +- With TTS or streaming inference: +```bash +pip install "transformers==4.51.0" accelerate "torch>=2.3.0,<=2.8.0" "torchaudio<=2.8.0" "minicpmo-utils[all]>=1.0.2" ``` -You will get the following output: -```shell -# round1 -The landform in the picture is karst topography. Karst landscapes are characterized by distinctive, jagged limestone hills or mountains with steep, irregular peaks and deep valleys—exactly what you see here These unique formations result from the dissolution of soluble rocks like limestone over millions of years through water erosion. +### Model Initialization -This scene closely resembles the famous karst landscape of Guilin and Yangshuo in China’s Guangxi Province. 
The area features dramatic, pointed limestone peaks rising dramatically above serene rivers and lush green forests, creating a breathtaking and iconic natural beauty that attracts millions of visitors each year for its picturesque views. - -# round2 -When traveling to a karst landscape like this, here are some important tips: - -1. Wear comfortable shoes: The terrain can be uneven and hilly. -2. Bring water and snacks for energy during hikes or boat rides. -3. Protect yourself from the sun with sunscreen, hats, and sunglasses—especially since you’ll likely spend time outdoors exploring scenic spots. -4. Respect local customs and nature regulations by not littering or disturbing wildlife. - -By following these guidelines, you'll have a safe and enjoyable trip while appreciating the stunning natural beauty of places such as Guilin’s karst mountains. -``` - -#### Chat with Multiple Images
- Click to view Python code running MiniCPM-V-4_5 with multiple images input. - -```python -import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer - -model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6 - -image1 = Image.open('image1.jpg').convert('RGB') -image2 = Image.open('image2.jpg').convert('RGB') -question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.' - -msgs = [{'role': 'user', 'content': [image1, image2, question]}] - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer -) -print(answer) -``` -
- -#### In-context Few-shot Learning -
- Click to view Python code running MiniCPM-V-4_5 with few-shot input. +Click to show model initialization code. ```python import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer +from transformers import AutoModel -model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6 - -question = "production date" -image1 = Image.open('example1.jpg').convert('RGB') -answer1 = "2023.08.04" -image2 = Image.open('example2.jpg').convert('RGB') -answer2 = "2007.04.24" -image_test = Image.open('test.jpg').convert('RGB') - -msgs = [ - {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]}, - {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]}, - {'role': 'user', 'content': [image_test, question]} -] - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer +# Load omni model (default: init_vision=True, init_audio=True, init_tts=True) +# For vision-only model: set init_audio=False and init_tts=False +# For audio-only model: set init_vision=False +model = AutoModel.from_pretrained( + "openbmb/MiniCPM-o-4_5", + trust_remote_code=True, + attn_implementation="sdpa", # sdpa or flash_attention_2 + torch_dtype=torch.bfloat16, + init_vision=True, + init_audio=True, + init_tts=True, ) -print(answer) +model.eval().cuda() + +# Initialize TTS for audio output in chat or streaming mode +model.init_tts(streaming=False) # or streaming=True + +# Convert simplex model to duplex mode +duplex_model = model.as_duplex() + +# Convert duplex model back to simplex mode +simplex_model = duplex_model.as_simplex(reset_session=True) ``` -
-#### Chat with Video -
- Click to view Python code running MiniCPM-V-4_5 by with video input and 3D-Resampler. - -```python -## The 3d-resampler compresses multiple frames into 64 tokens by introducing temporal_ids. -# To achieve this, you need to organize your video data into two corresponding sequences: -# frames: List[Image] -# temporal_ids: List[List[Int]]. - -import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer -from decord import VideoReader, cpu # pip install decord -from scipy.spatial import cKDTree -import numpy as np -import math - -model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6 - -MAX_NUM_FRAMES=180 # Indicates the maximum number of frames received after the videos are packed. The actual maximum number of valid frames is MAX_NUM_FRAMES * MAX_NUM_PACKING. -MAX_NUM_PACKING=3 # indicates the maximum packing number of video frames. 
valid range: 1-6 -TIME_SCALE = 0.1 - -def map_to_nearest_scale(values, scale): - tree = cKDTree(np.asarray(scale)[:, None]) - _, indices = tree.query(np.asarray(values)[:, None]) - return np.asarray(scale)[indices] - - -def group_array(arr, size): - return [arr[i:i+size] for i in range(0, len(arr), size)] - -def encode_video(video_path, choose_fps=3, force_packing=None): - def uniform_sample(l, n): - gap = len(l) / n - idxs = [int(i * gap + gap / 2) for i in range(n)] - return [l[i] for i in idxs] - vr = VideoReader(video_path, ctx=cpu(0)) - fps = vr.get_avg_fps() - video_duration = len(vr) / fps - - if choose_fps * int(video_duration) <= MAX_NUM_FRAMES: - packing_nums = 1 - choose_frames = round(min(choose_fps, round(fps)) * min(MAX_NUM_FRAMES, video_duration)) - - else: - packing_nums = math.ceil(video_duration * choose_fps / MAX_NUM_FRAMES) - if packing_nums <= MAX_NUM_PACKING: - choose_frames = round(video_duration * choose_fps) - else: - choose_frames = round(MAX_NUM_FRAMES * MAX_NUM_PACKING) - packing_nums = MAX_NUM_PACKING - - frame_idx = [i for i in range(0, len(vr))] - frame_idx = np.array(uniform_sample(frame_idx, choose_frames)) - - if force_packing: - packing_nums = min(force_packing, MAX_NUM_PACKING) - - print(video_path, ' duration:', video_duration) - print(f'get video frames={len(frame_idx)}, packing_nums={packing_nums}') - - frames = vr.get_batch(frame_idx).asnumpy() - - frame_idx_ts = frame_idx / fps - scale = np.arange(0, video_duration, TIME_SCALE) - - frame_ts_id = map_to_nearest_scale(frame_idx_ts, scale) / TIME_SCALE - frame_ts_id = frame_ts_id.astype(np.int32) - - assert len(frames) == len(frame_ts_id) - - frames = [Image.fromarray(v.astype('uint8')).convert('RGB') for v in frames] - frame_ts_id_group = group_array(frame_ts_id, packing_nums) - - return frames, frame_ts_id_group - - -video_path="video_test.mp4" -fps = 5 # fps for video -force_packing = None # You can set force_packing to ensure that 3D packing is forcibly enabled; otherwise, 
encode_video will dynamically set the packing quantity based on the duration. -frames, frame_ts_id_group = encode_video(video_path, fps, force_packing=force_packing) - -question = "Describe the video" -msgs = [ - {'role': 'user', 'content': frames + [question]}, -] - - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer, - use_image_id=False, - max_slice_nums=1, - temporal_ids=frame_ts_id_group -) -print(answer) -```
-#### Speech and Audio Mode +### Duplex Omni Mode +Full-duplex streaming inference for real-time or recorded video conversations. -Model initialization +
+Click to show duplex omni mode code. ```python -import torch import librosa -from transformers import AutoModel, AutoTokenizer - -model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True, - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True) - -model.init_tts() -model.tts.float() -``` - -
- -##### Mimick - -`Mimick` task reflects a model's end-to-end speech modeling capability. The model takes audio input, and outputs an ASR transcription and subsequently reconstructs the original audio with high similarity. The higher the similarity between the reconstructed audio and the original audio, the stronger the model's foundational capability in end-to-end speech modeling. - -```python -mimick_prompt = "Please repeat each user's speech, including voice style and speech content." -audio_input, _ = librosa.load('./assets/input_examples/Trump_WEF_2018_10s.mp3', sr=16000, mono=True) # load the audio to be mimicked - -# `./assets/input_examples/fast-pace.wav`, -# `./assets/input_examples/chi-english-1.wav` -# `./assets/input_examples/exciting-emotion.wav` -# for different aspects of speech-centric features. - -msgs = [{'role': 'user', 'content': [mimick_prompt, audio_input]}] -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - temperature=0.3, - generate_audio=True, - output_audio_path='output_mimick.wav', # save the tts result to output_audio_path -) -``` - -
- -##### General Speech Conversation with Configurable Voices - -A general usage scenario of `MiniCPM-o-2.6` is role-playing a specific character based on the audio prompt. It will mimic the voice of the character to some extent and act like the character in text, including language style. In this mode, `MiniCPM-o-2.6` sounds **more natural and human-like**. Self-defined audio prompts can be used to customize the voice of the character in an end-to-end manner. - - -```python -ref_audio, _ = librosa.load('./assets/input_examples/icl_20.wav', sr=16000, mono=True) # load the reference audio -sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_roleplay', language='en') - -# round one -user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} -msgs = [sys_prompt, user_question] -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result_roleplay_round_1.wav', -) - -# round two -history = msgs.append({'role': 'assistant', 'content': res}) -user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} -msgs = history.append(user_question) -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result_roleplay_round_2.wav', -) -print(res) -``` - -
- -##### Speech Conversation as an AI Assistant - -An enhanced feature of `MiniCPM-o-2.6` is to act as an AI assistant, but only with limited choice of voices. In this mode, `MiniCPM-o-2.6` is **less human-like and more like a voice assistant**. In this mode, the model is more instruction-following. For demo, you are suggested to use `assistant_female_voice`, `assistant_male_voice`, and `assistant_default_female_voice`. Other voices may work but not as stable as the default voices. - -*Please note that, `assistant_female_voice` and `assistant_male_voice` are more stable but sounds like robots, while `assistant_default_female_voice` is more human-alike but not stable, its voice often changes in multiple turns. We suggest you to try stable voices `assistant_female_voice` and `assistant_male_voice`.* - -```python -ref_audio, _ = librosa.load('./assets/input_examples/assistant_female_voice.wav', sr=16000, mono=True) # or use `./assets/input_examples/assistant_male_voice.wav` -sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_assistant', language='en') -user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} # load the user's audio question - -# round one -msgs = [sys_prompt, user_question] -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result_assistant_round_1.wav', -) - -# round two -history = msgs.append({'role': 'assistant', 'content': res}) -user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} -msgs = history.append(user_question) -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result_assistant_round_2.wav', -) -print(res) -``` - -
- -##### Instruction-to-Speech - -`MiniCPM-o-2.6` can also do Instruction-to-Speech, aka **Voice Creation**. You can describe a voice in detail, and the model will generate a voice that matches the description. For more Instruction-to-Speech sample instructions, you can refer to https://voxinstruct.github.io/VoxInstruct/. - -```python -instruction = 'Speak like a male charming superstar, radiating confidence and style in every word.' - -msgs = [{'role': 'user', 'content': [instruction]}] - -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result_voice_creation.wav', -) -``` - -
- -##### Voice Cloning - -`MiniCPM-o-2.6` can also do zero-shot text-to-speech, aka **Voice Cloning**. With this mode, model will act like a TTS model. - - -```python -ref_audio, _ = librosa.load('./assets/input_examples/icl_20.wav', sr=16000, mono=True) # load the reference audio -sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='voice_cloning', language='en') -text_prompt = f"Please read the text below." -user_question = {'role': 'user', 'content': [text_prompt, "content that you want to read"]} - -msgs = [sys_prompt, user_question] -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result_voice_cloning.wav', -) - -``` - -
- -##### Addressing Various Audio Understanding Tasks - -`MiniCPM-o-2.6` can also be used to address various audio understanding tasks, such as ASR, speaker analysis, general audio captioning, and sound scene tagging. - -For audio-to-text tasks, you can use the following prompts: - -- ASR with ZH(same as AST en2zh): `请仔细听这段音频片段,并将其内容逐字记录。` -- ASR with EN(same as AST zh2en): `Please listen to the audio snippet carefully and transcribe the content.` -- Speaker Analysis: `Based on the speaker's content, speculate on their gender, condition, age range, and health status.` -- General Audio Caption: `Summarize the main content of the audio.` -- General Sound Scene Tagging: `Utilize one keyword to convey the audio's content or the associated scene.` - -```python -task_prompt = "Please listen to the audio snippet carefully and transcribe the content." + "\n" # can change to other prompts. -audio_input, _ = librosa.load('./assets/input_examples/audio_understanding.mp3', sr=16000, mono=True) # load the audio to be captioned - -msgs = [{'role': 'user', 'content': [task_prompt, audio_input]}] - -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result_audio_understanding.wav', -) -print(res) -``` - - - - -#### Multimodal Live Streaming -
- Click to view Python code running MiniCPM-o 2.6 with chat inference. - -```python -import math -import numpy as np -from PIL import Image -from moviepy.editor import VideoFileClip -import tempfile -import librosa -import soundfile as sf import torch -from transformers import AutoModel, AutoTokenizer +from minicpmo.utils import generate_duplex_video, get_video_frame_audio_segments +from transformers import AutoModel -def get_video_chunk_content(video_path, flatten=True): - video = VideoFileClip(video_path) - print('video_duration:', video.duration) - - with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as temp_audio_file: - temp_audio_file_path = temp_audio_file.name - video.audio.write_audiofile(temp_audio_file_path, codec="pcm_s16le", fps=16000) - audio_np, sr = librosa.load(temp_audio_file_path, sr=16000, mono=True) - num_units = math.ceil(video.duration) - - # 1 frame + 1s audio chunk - contents= [] - for i in range(num_units): - frame = video.get_frame(i+1) - image = Image.fromarray((frame).astype(np.uint8)) - audio = audio_np[sr*i:sr*(i+1)] - if flatten: - contents.extend(["", image, audio]) - else: - contents.append(["", image, audio]) - - return contents - - -model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True, - attn_implementation='sdpa', torch_dtype=torch.bfloat16) -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True) - -model.init_tts() - -# If you are using an older version of PyTorch, you might encounter this issue "weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16', Please convert the TTS to float32 type. 
-# model.tts.float() - -# https://huggingface.co/openbmb/MiniCPM-o-2_6/blob/main/assets/Skiing.mp4 -video_path="assets/Skiing.mp4" -sys_msg = model.get_sys_prompt(mode='omni', language='en') -# if use voice clone prompt, please set ref_audio -# ref_audio_path = '/path/to/ref_audio' -# ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) -# sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode='omni', language='en') - -contents = get_video_chunk_content(video_path) -msg = {"role":"user", "content": contents} -msgs = [sys_msg, msg] - -# please set generate_audio=True and output_audio_path to save the tts result -generate_audio = True -output_audio_path = 'output.wav' - -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - temperature=0.5, - max_new_tokens=4096, - omni_input=True, # please set omni_input=True when omni inference - use_tts_template=True, - generate_audio=generate_audio, - output_audio_path=output_audio_path, - max_slice_nums=1, - use_image_id=False, - return_dict=True +# Load model and convert to duplex mode +model = AutoModel.from_pretrained( + "openbmb/MiniCPM-o-4_5", + trust_remote_code=True, + attn_implementation="sdpa", # or "flash_attention_2" + torch_dtype=torch.bfloat16, ) -print(res) -``` -
+model.eval().cuda() +model = model.as_duplex() -
- Click to view Python code running MiniCPM-o 2.6 with streaming inference. +# Load video and reference audio +video_path = "assets/omni_duplex1.mp4" +ref_audio_path = "assets/HT_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) -Note: The streaming inference has a slight performance degradation because the audio encoding is not global. -```python -# a new conversation need reset session first, it will reset the kv-cache -model.reset_session() - -contents = get_video_chunk_content(video_path, flatten=False) -session_id = '123' -generate_audio = True - -# 1. prefill system prompt -res = model.streaming_prefill( - session_id=session_id, - msgs=[sys_msg], - tokenizer=tokenizer +# Extract video frames and audio segments +video_frames, audio_segments, stacked_frames = get_video_frame_audio_segments( + video_path, stack_frames=1, use_ffmpeg=True, adjust_audio_length=True ) -# 2. prefill video/audio chunks -for content in contents: - msgs = [{"role":"user", "content": content}] - res = model.streaming_prefill( - session_id=session_id, - msgs=msgs, - tokenizer=tokenizer +# Prepare duplex session with system prompt and voice reference +model.prepare( + prefix_system_prompt="Streaming Omni Conversation.", + ref_audio=ref_audio, + prompt_wav_path=ref_audio_path, +) + +results_log = [] +timed_output_audio = [] + +# Process each chunk in streaming fashion +for chunk_idx in range(len(audio_segments)): + audio_chunk = audio_segments[chunk_idx] if chunk_idx < len(audio_segments) else None + frame = video_frames[chunk_idx] if chunk_idx < len(video_frames) else None + frame_list = [] + if frame is not None: + frame_list.append(frame) + if stacked_frames is not None and chunk_idx < len(stacked_frames) and stacked_frames[chunk_idx] is not None: + frame_list.append(stacked_frames[chunk_idx]) + + # Step 1: Streaming prefill + model.streaming_prefill( + audio_waveform=audio_chunk, + frame_list=frame_list, + max_slice_nums=1, # Increase for HD mode (e.g., 
[2, 1] for stacked frames) + batch_vision_feed=False, # Set True for faster processing ) -# 3. generate -res = model.streaming_generate( + # Step 2: Streaming generate + result = model.streaming_generate( + prompt_wav_path=ref_audio_path, + max_new_speak_tokens_per_chunk=20, + decode_mode="sampling", + ) + + if result["audio_waveform"] is not None: + timed_output_audio.append((chunk_idx, result["audio_waveform"])) + + chunk_result = { + "chunk_idx": chunk_idx, + "is_listen": result["is_listen"], + "text": result["text"], + "end_of_turn": result["end_of_turn"], + "current_time": result["current_time"], + "audio_length": len(result["audio_waveform"]) if result["audio_waveform"] is not None else 0, + } + results_log.append(chunk_result) + + print("listen..." if result["is_listen"] else f"speak> {result['text']}") + +# Generate output video with AI responses +generate_duplex_video( + video_path=video_path, + output_video_path="duplex_output.mp4", + results_log=results_log, + timed_output_audio=timed_output_audio, + output_sample_rate=24000, +) +``` + +
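The duplex loop above is driven by the audio segments and bounds-checks the frame lists independently, so either stream may run short. A minimal pure-Python sketch of that pairing logic (`pair_streams` is our illustrative name, not part of the minicpmo API):

```python
def pair_streams(video_frames, audio_segments, stacked_frames=None):
    """Pair each 1 s audio chunk with its frame list, mirroring the duplex loop.

    Iteration is driven by the audio segments; a missing frame yields an
    empty frame list, and a stacked frame is appended only when present.
    """
    pairs = []
    for chunk_idx in range(len(audio_segments)):
        audio_chunk = audio_segments[chunk_idx]
        frame = video_frames[chunk_idx] if chunk_idx < len(video_frames) else None
        frame_list = [] if frame is None else [frame]
        if stacked_frames is not None and chunk_idx < len(stacked_frames) and stacked_frames[chunk_idx] is not None:
            frame_list.append(stacked_frames[chunk_idx])
        pairs.append((audio_chunk, frame_list))
    return pairs
```

Each `(audio_chunk, frame_list)` pair corresponds to one `streaming_prefill` call in the loop above.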
+ + +### Simplex Omni Mode +We provide two inference modes: chat and streaming. + +#### Chat Inference + +
+Click to show chat inference code. + +```python +from minicpmo.utils import get_video_frame_audio_segments + +model = ... +model.init_tts(streaming=False) + +video_path = "assets/Skiing.mp4" + +# Optional: Set reference audio for voice cloning +ref_audio_path = "assets/HT_ref_audio.wav" +sys_msg = model.get_sys_prompt(ref_audio=ref_audio_path, mode="omni", language="en") + +# Use stack_frames=5 for high refresh rate mode +video_frames, audio_segments, stacked_frames = get_video_frame_audio_segments(video_path, stack_frames=1) +omni_contents = [] +for i in range(len(video_frames)): + omni_contents.append(video_frames[i]) + omni_contents.append(audio_segments[i]) + if stacked_frames is not None and stacked_frames[i] is not None: + omni_contents.append(stacked_frames[i]) + +msg = {"role": "user", "content": omni_contents} +msgs = [sys_msg, msg] + +# Set generate_audio=True and output_audio_path to save TTS output +generate_audio = True +output_audio_path = "output.wav" + +res = model.chat( + msgs=msgs, + max_new_tokens=4096, + do_sample=True, + temperature=0.7, + use_tts_template=True, + enable_thinking=False, + omni_mode=True, # Required for omni inference + generate_audio=generate_audio, + output_audio_path=output_audio_path, + max_slice_nums=1, # Increase for HD mode +) +print(res) + +# Example output: "The person in the picture is skiing down a snowy mountain slope." +# import IPython +# IPython.display.Audio("output.wav") +``` + +
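The chat example above assembles `omni_contents` by interleaving each frame with its 1 s audio segment and an optional stacked frame. The same interleaving as a standalone pure-Python sketch (`build_omni_contents` is our illustrative name):

```python
def build_omni_contents(video_frames, audio_segments, stacked_frames=None):
    """Interleave contents as: frame, 1 s audio segment[, stacked frame]."""
    contents = []
    for i in range(len(video_frames)):
        contents.append(video_frames[i])
        contents.append(audio_segments[i])
        if stacked_frames is not None and stacked_frames[i] is not None:
            contents.append(stacked_frames[i])
    return contents
```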
+ +#### Streaming Inference + +
+Click to show streaming inference code. + +```python +import librosa +import numpy as np +import soundfile as sf +import torch +from minicpmo.utils import get_video_frame_audio_segments + +model = ... +model.init_tts(streaming=True) + +# Reset session for a new conversation (clears KV cache) +model.reset_session() + +# Optional: Load reference audio for voice cloning +ref_audio_path = "assets/HT_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) +model.init_token2wav_cache(ref_audio) + +session_id = "demo" + +# Extract video frames and audio segments (use stack_frames=5 for high refresh rate mode) +video_path = "assets/Skiing.mp4" +video_frames, audio_segments, stacked_frames = get_video_frame_audio_segments(video_path, stack_frames=1) + +# Build omni contents list +omni_contents = [] +for i in range(len(video_frames)): + omni_contents.append(video_frames[i]) + omni_contents.append(audio_segments[i]) + if stacked_frames is not None and stacked_frames[i] is not None: + omni_contents.append(stacked_frames[i]) + +generate_audio = False +output_audio_path = "output.wav" + +# Step 1: Prefill system prompt +sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode="omni", language="en") +model.streaming_prefill(session_id=session_id, msgs=[sys_msg]) + +# Step 2: Prefill omni chunks (is_last_chunk=True only for the last audio chunk) +audio_indices = [i for i, c in enumerate(omni_contents) if isinstance(c, np.ndarray)] +last_audio_idx = audio_indices[-1] if audio_indices else -1 + +for idx, content in enumerate(omni_contents): + is_last_audio_chunk = idx == last_audio_idx + msgs = [{"role": "user", "content": [content]}] + model.streaming_prefill(session_id=session_id, msgs=msgs, omni_mode=True, is_last_chunk=is_last_audio_chunk) + +# Step 3: Generate response +iter_gen = model.streaming_generate( session_id=session_id, - tokenizer=tokenizer, - temperature=0.5, - generate_audio=generate_audio + generate_audio=generate_audio, + 
use_tts_template=True, + enable_thinking=False, + do_sample=True, ) audios = [] text = "" if generate_audio: - for r in res: - audio_wav = r.audio_wav - sampling_rate = r.sampling_rate - txt = r.text + for wav_chunk, text_chunk in iter_gen: + audios.append(wav_chunk) + text += text_chunk - audios.append(audio_wav) - text += txt - - res = np.concatenate(audios) - sf.write("output.wav", res, samplerate=sampling_rate) - print("text:", text) - print("audio saved to output.wav") + generated_waveform = torch.cat(audios, dim=-1)[0] + sf.write(output_audio_path, generated_waveform.cpu().numpy(), samplerate=24000) + + print("Text:", text) + print("Audio saved to output.wav") else: - for r in res: - text += r['text'] - print("text:", text) + for text_chunk, is_finished in iter_gen: + text += text_chunk + print("Text:", text) ```
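The streaming prefill above flags the final audio chunk by scanning the interleaved list for the last `np.ndarray` entry. The same scan generalized to a predicate, as a pure-Python sketch (`last_matching_index` is our name, not a minicpmo API):

```python
def last_matching_index(items, predicate):
    """Return the index of the last item satisfying predicate, or -1 if none."""
    last = -1
    for idx, item in enumerate(items):
        if predicate(item):
            last = idx
    return last
```

With `predicate=lambda c: isinstance(c, np.ndarray)`, this reproduces the `last_audio_idx` computation above.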

-### Inference on Multiple GPUs
-You can run MiniCPM-Llama3-V 2.5 on multiple low VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across multiple GPUs. Please refer to this [tutorial](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md) for detailed instructions on how to load the model and inference using multiple low VRAM GPUs.
+### Speech and Audio Mode
+
+#### General Speech Conversation with Custom Voice and Custom System Profile
+
+MiniCPM-o-4.5 can role-play as a specific character based on an audio prompt and a text profile prompt. It mimics the character's voice, adopts their language style in text responses, and follows the persona defined in the text profile. In this mode, MiniCPM-o-4.5 sounds **more natural and human-like**.

-### Inference on Mac
-Click to view an example, to run MiniCPM-Llama3-V 2.5 on 💻 Mac with MPS (Apple silicon or AMD GPUs). +Click to show custom voice conversation code. + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +# Set reference audio for voice cloning +ref_audio_path = "assets/system_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) + +# For English conversation with text profile +sys_msg = { + "role": "system", + "content": [ + "Clone the voice in the provided audio prompt.", + ref_audio, + "Please chat with the user in a highly human-like and oral style." + "You are Elon Musk, CEO of Tesla and SpaceX. You speak directly and casually, often with dry humor. You're passionate about Mars, sustainable energy, and pushing humanity forward. Speak bluntly with occasional dark humor. Use simple logic and don't sugarcoat things. Don't be diplomatic. Say what you actually think, even if it's controversial. Keep responses around 100 words. Don't ramble." + ] +} + + +# For English conversation with no text profile +sys_msg = { + "role": "system", + "content": [ + "Clone the voice in the provided audio prompt.", + ref_audio, + "Your task is to be a helpful assistant using this voice pattern. Please answer the user's questions seriously and in a high quality. Please chat with the user in a high naturalness style." 
+ ] +} + +# For Chinese Conversation with no text profile +sys_msg = { + "role": "system", + "content": [ + "根据输入的音频提示生成相似的语音。", + librosa.load("assets/system_ref_audio_2.wav", sr=16000, mono=True)[0], + "作为助手,你将使用这种声音风格说话。 请认真、高质量地回复用户的问题。 请用高自然度的方式和用户聊天。" + ] +} + + +# For Chinese Conversation with text profile +sys_msg = { + "role": "system", + "content": [ + "根据输入的音频提示生成相似的语音。", + ref_audio, + "你是一个具有以上声音风格的AI助手。请用高拟人度、口语化的方式和用户聊天。" + "你是一名心理咨询师兼播客主理人,热爱创作与深度对话。你性格细腻、富有共情力,善于从个人经历中提炼哲思。语言风格兼具理性与诗意,常以隐喻表达内在体验。" + ] +} + + +# Round 1 +user_audio, _ = librosa.load("user_question.wav", sr=16000, mono=True) +user_msg = {"role": "user", "content": [user_audio]} + +msgs = [sys_msg, user_msg] + +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.7, + output_audio_path="result_roleplay_round_1.wav", +) +print("Round 1:", res) + +# Round 2 (multi-turn conversation) +msgs.append({"role": "assistant", "content": res}) +user_audio_2, _ = librosa.load("user_question_2.wav", sr=16000, mono=True) +msgs.append({"role": "user", "content": [user_audio_2]}) + +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.7, + output_audio_path="result_roleplay_round_2.wav", +) +print("Round 2:", res) +``` + +
+
+
+#### Speech Conversation as a Versatile and Vibe AI Assistant
+
+Built on carefully designed post-training data and professional voice-actor recordings, `MiniCPM-o-4.5` can also function as an AI voice assistant, delivering high-quality spoken interaction out of the box. It produces a sweet and expressive voice with natural prosody, including appropriate rhythm, stress, and pauses, giving a strong sense of liveliness in casual conversation. It also supports storytelling and narrative speech with coherent and engaging delivery. Moreover, it enables advanced voice instruction control, such as emotional tone and word-level emphasis.
+
+<details>
+Click to show AI assistant conversation code. + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +# Set reference audio for voice style +ref_audio_path = "assets/HT_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) + +# For Chinese Conversation +sys_msg = { + "role": "system", + "content": [ + "模仿输入音频中的声音特征。", + ref_audio, + "你的任务是用这种声音模式来当一个助手。请认真、高质量地回复用户的问题。请用高自然度的方式和用户聊天。你是由面壁智能开发的人工智能助手:面壁小钢炮。" + ] +} + +# For English Conversation +sys_msg = { + "role": "system", + "content": [ + "Clone the voice in the provided audio prompt.", + ref_audio, + "Please assist users while maintaining this voice style. Please answer the user's questions seriously and in a high quality. Please chat with the user in a highly human-like and oral style. You are a helpful assistant developed by ModelBest: MiniCPM-Omni." + ] +} + + +# Load user's audio question +user_audio, _ = librosa.load("user_question.wav", sr=16000, mono=True) +user_question = {"role": "user", "content": [user_audio]} + +# Round 1 +msgs = [sys_msg, user_question] +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.7, + output_audio_path="result_assistant_round_1.wav", +) + +# Round 2 +msgs.append({"role": "assistant", "content": res}) +user_audio_2, _ = librosa.load("user_question_2.wav", sr=16000, mono=True) +msgs.append({"role": "user", "content": [user_audio_2]}) +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.3, + output_audio_path="result_assistant_round_2.wav", +) +print(res) +``` + +
+
+#### Zero-shot Text-to-speech (TTS)
+
+`MiniCPM-o-4.5` supports zero-shot text-to-speech (TTS). In this mode, the model functions as a highly natural TTS system that can replicate a reference voice.
+
+<details>
+Click to show TTS code. + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +# For both Chinese and English +ref_audio_path = "assets/HT_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) +sys_msg = {"role": "system", "content": [ + "模仿音频样本的音色并生成新的内容。", + ref_audio, + "请用这种声音风格来为用户提供帮助。 直接作答,不要有冗余内容" +]} + +# For English +user_msg = { + "role": "user", + "content": [ + "请朗读以下内容。" + " " + "I have a wrap up that I want to offer you now, a conclusion to our work together." + ] +} + +# For Chinese +user_msg = { + "role": "user", + "content": [ + "请朗读以下内容。" + " " + "你好,欢迎来到艾米说科幻,我是艾米。" + ] +} + +msgs = [sys_msg, user_msg] +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.1, + output_audio_path="result_voice_cloning.wav", +) +``` + +
+
+
+#### Realtime Speech Conversation
+
+`MiniCPM-o-4.5` also supports realtime speech conversation with streaming audio input and output. The example below simulates a realtime session: user audio is prefilled in 1-second chunks, and the spoken response is then consumed from a streaming iterator.
+
+<details>
+Click to show realtime speech conversation code.
+
+```python
+import librosa
+import numpy as np
+import torch
+import soundfile as sf
+
+model = ...
+
+# Set reference audio for voice style
+ref_audio_path = "ref_audio_path"
+ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True)
+
+# For Chinese Conversation
+sys_msg = {
+    "role": "system",
+    "content": [
+        "模仿输入音频中的声音特征。",
+        ref_audio,
+        "你的任务是用这种声音模式来当一个助手。请认真、高质量地回复用户的问题。请用高自然度的方式和用户聊天。你是由面壁智能开发的人工智能助手:面壁小钢炮。"
+    ]
+}
+
+# You can use any of the system prompts mentioned above in streaming speech conversation
+
+# Reset state
+model.init_tts(streaming=True)
+model.reset_session(reset_token2wav_cache=True)
+model.init_token2wav_cache(prompt_speech_16k=ref_audio)
+
+session_id = "demo"
+msgs = [...]
+# First, prefill the system turn
+for msg in msgs:
+    if msg["role"] == "system":
+        model.streaming_prefill(
+            session_id=session_id,
+            msgs=[msg],
+            omni_mode=False,
+            is_last_chunk=False,
+        )
+
+# Here we simulate a realtime speech conversation by splitting the whole user input audio into 1-second chunks.
+user_audio, _ = librosa.load("user_audio.wav", sr=16000, mono=True)
+
+IN_SAMPLE_RATE = 16000
+CHUNK_SAMPLES = IN_SAMPLE_RATE
+OUT_SAMPLE_RATE = 24000
+
+if user_audio is not None and len(user_audio) > 0:
+    total_samples = len(user_audio)
+    num_chunks = (total_samples + CHUNK_SAMPLES - 1) // CHUNK_SAMPLES
+
+    for chunk_idx in range(num_chunks):
+        start = chunk_idx * CHUNK_SAMPLES
+        end = min((chunk_idx + 1) * CHUNK_SAMPLES, total_samples)
+        chunk_audio = user_audio[start:end]
+
+        is_last_chunk = (chunk_idx == num_chunks - 1)
+
+        user_msg = {"role": "user", "content": [chunk_audio]}
+
+        # For each 1s audio chunk, perform streaming_prefill once to reduce first-package latency
+        model.streaming_prefill(
+            session_id=session_id,
+            msgs=[user_msg],
+            omni_mode=False,
+            is_last_chunk=is_last_chunk,
+        )
+
+# When VAD activates (i.e., the user stops speaking), have the model generate its response in a streaming manner.
+# streaming_generate returns an iterator; nothing is produced until we consume it.
+audio_iter = model.streaming_generate(
+    session_id=session_id,
+    generate_audio=True,
+    max_new_tokens=512,
+    do_sample=True,
+    length_penalty=1.1,  # For realtime speech conversation mode, we strongly suggest length_penalty=1.1 to improve response content
+)
+
+# Consume ~1s audio chunks from the iterator as they are generated.
+audio_chunks = []
+for generated_audio in audio_iter:
+    if generated_audio is None:
+        break
+
+    if isinstance(generated_audio, tuple):
+        generated_audio = generated_audio[0]
+        if generated_audio is None:
+            break
+
+    if isinstance(generated_audio, torch.Tensor):
+        wav = generated_audio.detach().cpu().numpy()
+    else:
+        wav = np.asarray(generated_audio)
+
+    if wav.ndim == 2:
+        wav = wav[0]
+
+    wav = wav.astype(np.float32)
+    audio_chunks.append(wav)
+
+output_audio_path = "realtime_speech.wav"
+if audio_chunks:
+    full_audio = np.concatenate(audio_chunks, axis=-1)
+    sf.write(output_audio_path, full_audio, OUT_SAMPLE_RATE)
+```
+
+</details>
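The 1-second chunking in the prefill loop above boils down to ceiling division over the waveform length; here is the same splitting logic in isolation (a standalone sketch, where `split_into_chunks` is our own helper, not part of the model API):

```python
import numpy as np

IN_SAMPLE_RATE = 16000          # the model consumes 16 kHz mono audio
CHUNK_SAMPLES = IN_SAMPLE_RATE  # 1-second chunks

def split_into_chunks(audio):
    """Split a 1-D waveform into 1 s chunks; the last chunk may be shorter."""
    total = len(audio)
    num_chunks = (total + CHUNK_SAMPLES - 1) // CHUNK_SAMPLES  # ceiling division
    return [audio[i * CHUNK_SAMPLES:(i + 1) * CHUNK_SAMPLES] for i in range(num_chunks)]

# 2.5 s of silence -> chunks of 16000, 16000, and 8000 samples
chunks = split_into_chunks(np.zeros(40000, dtype=np.float32))
```

The last chunk is simply shorter rather than padded, matching the `start`/`end` arithmetic of the prefill loop.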
+ + +#### Mimick + +The `Mimick` task evaluates a model's end-to-end speech modeling capability. The model takes audio input, transcribes it, and reconstructs the original audio with high fidelity, preserving detailed acoustic, paralinguistic, and semantic information. Higher similarity between the reconstructed and original audio indicates stronger end-to-end speech modeling capability. + +
+Click to show mimick code. + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +system_prompt = "You are a helpful assistant. You can accept video, audio, and text input and output voice and text. Respond with just the answer, no redundancy." + +mimick_prompt = "Please repeat the following speech in the appropriate language." + +audio_input, _ = librosa.load("assets/Trump_WEF_2018_10s.mp3", sr=16000, mono=True) + +msgs = [ + {"role": "system", "content": [system_prompt]}, + {"role": "user", "content": [mimick_prompt, audio_input]} + ] + +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + temperature=0.1, + generate_audio=True, + output_audio_path="output_mimick.wav", +) +``` + +
+ + +#### Addressing Various Audio Understanding Tasks + +`MiniCPM-o-4.5` can also handle various audio understanding tasks, such as ASR, speaker analysis, general audio captioning, and sound scene tagging. + +For audio-to-text tasks, you can use the following prompts: + +- ASR (Chinese, or AST EN→ZH): `请仔细听这段音频片段,并将其内容逐字记录。` +- ASR (English, or AST ZH→EN): `Please listen to the audio snippet carefully and transcribe the content.` +- Speaker Analysis: `Based on the speaker's content, speculate on their gender, condition, age range, and health status.` +- General Audio Caption: `Summarize the main content of the audio.` +- Sound Scene Tagging: `Utilize one keyword to convey the audio's content or the associated scene.` + +
+Click to show audio understanding code. + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +# Load the audio to be transcribed/analyzed +audio_input, _ = librosa.load("assets/Trump_WEF_2018_10s.mp3", sr=16000, mono=True) + +# Choose a task prompt (see above for options) +task_prompt = "Please listen to the audio snippet carefully and transcribe the content.\n" +msgs = [{"role": "user", "content": [task_prompt, audio_input]}] + +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.3, + output_audio_path="result_audio_understanding.wav", +) +print(res) +``` + +
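Since every task above shares the same `model.chat` call and differs only in the prompt, the prompts can be kept in a small lookup table. A minimal sketch (the `TASK_PROMPTS` dict and `build_task_msgs` helper are our own illustration, not part of the MiniCPM-o API):

```python
# Hypothetical convenience mapping from task names to the prompts listed above.
TASK_PROMPTS = {
    "asr_zh": "请仔细听这段音频片段,并将其内容逐字记录。",
    "asr_en": "Please listen to the audio snippet carefully and transcribe the content.",
    "speaker_analysis": (
        "Based on the speaker's content, speculate on their gender, "
        "condition, age range, and health status."
    ),
    "audio_caption": "Summarize the main content of the audio.",
    "scene_tagging": "Utilize one keyword to convey the audio's content or the associated scene.",
}

def build_task_msgs(task, audio):
    """Build a single-turn message list for one audio understanding task."""
    return [{"role": "user", "content": [TASK_PROMPTS[task], audio]}]
```

With it, a call such as `model.chat(msgs=build_task_msgs("asr_en", audio_input), ...)` mirrors the transcription example above.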
+ + +### Visual Understanding + +`MiniCPM-o-4.5` shares the same inference methods as `MiniCPM-V-4.5`. + +#### Chat with Single Image + +
+Click to show single image chat code. ```python -# test.py Need more than 16GB memory. import torch from PIL import Image -from transformers import AutoModel, AutoTokenizer +from transformers import AutoModel -model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True, low_cpu_mem_usage=True) -model = model.to(device='mps') +model = AutoModel.from_pretrained( + "openbmb/MiniCPM-o-4_5", + trust_remote_code=True, + attn_implementation="sdpa", # or "flash_attention_2" + torch_dtype=torch.bfloat16, + init_vision=True, + init_audio=False, + init_tts=False, +) +model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True) -model.eval() +image = Image.open("assets/fossil.png").convert("RGB") +question = "What is in the image?" +msgs = [{"role": "user", "content": [image, question]}] -image = Image.open('./assets/hk_OCR.jpg').convert('RGB') -question = 'Where is this photo taken?' -msgs = [{'role': 'user', 'content': question}] +res = model.chat(msgs=msgs, use_tts_template=False) +print(res) +``` -answer, context, _ = model.chat( - image=image, +
+ +#### Chat with Multiple Images + +
+Click to show Python code for multi-image input. + +```python +import torch +from PIL import Image +from transformers import AutoModel + +model = ... + +image1 = Image.open("assets/highway.png").convert("RGB") +image2 = Image.open("assets/fossil.png").convert("RGB") +question = "Compare image 1 and image 2, tell me about the differences between them." +msgs = [{"role": "user", "content": [image1, image2, question]}] + +answer = model.chat(msgs=msgs, use_tts_template=False, enable_thinking=False) +print(answer) +``` + +
+ +#### In-Context Few-Shot Learning + +
+Click to show Python code for few-shot learning. + +```python +from PIL import Image + +model = ... + +question = "production date" +image1 = Image.open("example1.jpg").convert("RGB") +answer1 = "2023.08.04" +image2 = Image.open("example2.jpg").convert("RGB") +answer2 = "2007.04.24" +image_test = Image.open("test.jpg").convert("RGB") + +msgs = [ + {"role": "user", "content": [image1, question]}, + {"role": "assistant", "content": [answer1]}, + {"role": "user", "content": [image2, question]}, + {"role": "assistant", "content": [answer2]}, + {"role": "user", "content": [image_test, question]}, +] + +answer = model.chat(msgs=msgs, use_tts_template=False, enable_thinking=False) +print(answer) +``` + +
+ +#### Chat with Video + +
+Click to show Python code for video input. + +```python +import torch +from minicpmo.utils import get_video_frame_audio_segments +from transformers import AutoModel + +model = ... + +video_path = "assets/Skiing.mp4" +video_frames, _, _ = get_video_frame_audio_segments(video_path) +print("num frames:", len(video_frames)) + +question = "Describe the video" +msgs = [{"role": "user", "content": video_frames + [question]}] + +answer = model.chat( msgs=msgs, - context=None, - tokenizer=tokenizer, - sampling=True + max_new_tokens=128, + use_image_id=False, + max_slice_nums=1, + use_tts_template=False, + enable_thinking=False, # Set True to enable thinking mode ) print(answer) ``` -Run with command: -```shell -PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py -``` +
-### Efficient Inference with llama.cpp, Ollama, vLLM - -See [our fork of llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpmv-main/examples/llava/README-minicpmv2.6.md) for more detail. This implementation supports smooth inference of 16~18 token/s on iPad (test environment:iPad Pro + M4). - -See [our fork of Ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) for more detail. This implementation supports smooth inference of 16~18 token/s on iPad (test environment:iPad Pro + M4). - +### Structured Content Input
- vLLM now officially supports MiniCPM-V 2.6, MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.0. And you can use our fork to run MiniCPM-o 2.6 for now. Click to see. +Click to show structured content input details. -1. Install vLLM(>=0.7.1): -```shell -pip install vllm +The `chat` method accepts message content in two formats: + +**Native format** – pass Python objects directly: +```python +msgs = [{"role": "user", "content": [pil_image, audio_ndarray, "Describe this."]}] ``` -2. Run Example: -* [Vision Language](https://docs.vllm.ai/en/latest/getting_started/examples/vision_language.html) -* [Audio Language](https://docs.vllm.ai/en/latest/getting_started/examples/audio_language.html) -
+**OpenAI-compatible format** – use structured dictionaries: +```python +msgs = [ + { + "role": "user", + "content": [ + {"type": "image_url", "image_url": {"url": "/path/to/image.jpg"}}, + {"type": "audio_url", "audio_url": {"url": "/path/to/audio.wav"}}, + {"type": "video_url", "video_url": {"url": "/path/to/video.mp4", "use_audio": True}}, + {"type": "text", "text": "Describe this."} + ] + } +] +``` -## Fine-tuning +**Supported types:** -### Simple Fine-tuning +| Type | Input | Converts to | +|------|-------|-------------| +| `text` | `{"type": "text", "text": "..."}` | `str` | +| `image_url` | `{"type": "image_url", "image_url": {"url": "..."}}` | `PIL.Image` | +| `audio_url` | `{"type": "audio_url", "audio_url": {"url": "..."}}` | `np.ndarray` (16kHz mono) | +| `video_url` | `{"type": "video_url", "video_url": {"url": "...", "stack_frames": 1, "use_audio": True}}` | `List[Image, ndarray, ...]` | -We support simple fine-tuning with Hugging Face for MiniCPM-o 2.6, MiniCPM-V 2.6, MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.0. +- **URL sources**: local file paths or `http://`/`https://` URLs +- **Mixed formats**: native objects and structured dicts can be combined in the same content list -[Reference Document](./finetune/readme.md) +
+
+## Supported Frameworks
+
+### FlagOS
+
+To enable large-scale deployment across different AI chips, the Beijing Zhiyuan Research Institute, together with numerous research institutions, chip manufacturers, system vendors, and algorithm and software organizations in China and abroad, jointly founded the FlagOS Open Source Community.
+
+The FlagOS community is dedicated to building a unified, open-source system software stack for various AI chips, encompassing core open-source projects such as a large-scale operator library, a unified AI compiler, parallel training and inference frameworks, and a unified communication library. It aims to create an open technology ecosystem connecting the “model-system-chip” layers. By enabling “develop once, deploy across chips”, FlagOS unlocks the computational potential of hardware, breaks down the ecosystem silos between different chip software stacks, and effectively reduces migration costs for developers. The FlagOS community fosters an open AI hardware and software ecosystem, counters single-vendor closed-source monopolies, promotes the widespread deployment of AI hardware technologies, and is committed to staying rooted in China while embracing global collaboration.
+Official website: https://flagos.io.
+
+<details>
+Click to show FlagOS details. + +#### FlagOS: Supporting Multiple AI Chips + +Thanks to FlagOS’s unified multi-chip AI system software stack, MiniCPM-o 4.5 was adapted to 6 different AI chips in an extremely short time. Currently, the multi-chip version of MiniCPM-o 4.5 has been released on FlagRelease, FlagOS’s platform for automatic migration, adaptation, and deployment of large models across multi-architecture AI chips. Details are as follows: + +| Vendor | ModelScope | Huggingface | +|:----------------|:------------:|:------------:| +| Nvidia | [MiniCPM-o-4.5-nvidia-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) | [MiniCPM-o-4.5-nvidia-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) | +| Hygon-BW1000 | [MiniCPM-o-4.5-hygon-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-hygon-FlagOS) | [MiniCPM-o-4.5-hygon-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-hygon-FlagOS) | +| Metax-C550 | [MiniCPM-o-4.5-metax-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-metax-FlagOS) | [MiniCPM-o-4.5-metax-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-metax-FlagOS) | +| Iluvatar-BIV150 | [MiniCPM-o-4.5-iluvatar-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-iluvatar-FlagOS) | [MiniCPM-o-4.5-iluvatar-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-iluvatar-FlagOS) | +| Ascend-A3 | [MiniCPM-o-4.5-ascend-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-ascend-FlagOS) | [MiniCPM-o-4.5-ascend-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-ascend-FlagOS) | +| Zhenwu-810E | [MiniCPM-o-4.5-zhenwu-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-zhenwu-FlagOS) | [MiniCPM-o-4.5-zhenwu-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-zhenwu-FlagOS) | + +##### Comprehensive Evaluation + +###### Transformers-FlagOS version + +Accuracy Difference between `USE_FLAGOS=1` on multi-backend and `USE_FLAGOS=0` on Nvidia-CUDA 
+
+| Metrics | FlagOS Backend | Difference with Nvidia-CUDA |
+|:-------------------------|:---------------:|:---------------------------:|
+| Video-MME 0-shot avg@1 ↑ | Nvidia | 0.33% |
+| Video-MME 0-shot avg@1 ↑ | Hygon-BW1000 | 0.17% |
+| Video-MME 0-shot avg@1 ↑ | Ascend-A3 | 0.50% |
+| Video-MME 0-shot avg@1 ↑ | Iluvatar-BIV150 | 1.83% |
+| Video-MME 0-shot avg@1 ↑ | Metax-C550 | 0.75% |

-### With Align-Anything
+###### VLLM-FlagOS version

-We support fine-tuning MiniCPM-o 2.6 by PKU-Alignment Team (both vision and audio, SFT and DPO) with the [Align-Anything framework](https://github.com/PKU-Alignment/align-anything). Align-Anything is a scalable framework that aims to align any-modality large models with human intentions, open-sourcing the [datasets, models and benchmarks](https://huggingface.co/datasets/PKU-Alignment/align-anything). Benefiting from its concise and modular design, it supports 30+ open-source benchmarks, 40+ models and algorithms including SFT, SimPO, RLHF, *etc*. It also provides 30+ directly runnable scripts, making it suitable for beginners to quickly get started.
+Accuracy difference between running with `USE_FLAGGEMS=1 FLAGCX_PATH=/workspace/FlagCX` on Nvidia (or with `USE_FLAGGEMS=1` on Zhenwu-810E) and launching the vLLM server directly on Nvidia

-Best Practices: [MiniCPM-o 2.6](https://github.com/PKU-Alignment/align-anything/tree/main/scripts).
+| Metrics (avg@1) | Difference between Nvidia-FlagOS and Nvidia-CUDA | Difference between Zhenwu-FlagOS and Nvidia-CUDA |
+|:--------------------|:------------------------------------------------:|:------------------------------------------------:|
+| CMMMU ↑ | 0.72% | 3.5% |
+| MMMU ↑ | 1.44% | 1.18% |
+| MMMU_Pro_standard ↑ | 0.83% | 0.22% |
+| MM-Vet v2 ↑ | 0.46% | 1.33% |
+| OCRBench ↑ | 0.10% | 1% |
+| CII-Bench ↑ | 0.40% | 0.13% |
+| Blink ↑ | 1.90% | 2.19% |
+
+#### FlagOS Usage
+
+##### FlagOS Performance Acceleration on Nvidia
+
+For the Transformers version, with numerical precision aligned between the CUDA and FlagOS stacks, FlagOS reduces total task execution time by 6% compared to CUDA.
+
+###### From FlagRelease (Recommended)
+
+FlagRelease is a platform developed by the FlagOS team for automatic migration, adaptation, and deployment of large models across multi-architecture AI chips. The multi-chip version of MiniCPM-o 4.5 has already been released on FlagRelease. All necessary software packages are pre-installed on the platform, so users do not need to install anything.
+ +- FlagRelease Image Key Versions + + | Component | Version | + |:------------------------|:------------------------------------| + | Accelerator Card Driver | 570.158.01 | + | CUDA SDK Build | cuda_13.0.r13.0/compiler.36424714_0 | + | FlagTree | 0.4.0+3.5 | + | FlagGems | 4.2.1rc0 | + | vllm & vllm-plugin-fl | 0.13.0 + vllm_fl 0.0.0 | + | FlagCX | 0.1.0 | + +- FlagRelease Quick Start + + | Vendor | ModelScope | Huggingface | + |:-----------|:------------:|:------------:| + | Nvidia | [MiniCPM-o-4.5-nvidia-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) | [MiniCPM-o-4.5-nvidia-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) | + | Hygon-BW1000 | [MiniCPM-o-4.5-hygon-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-hygon-FlagOS) | [MiniCPM-o-4.5-hygon-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-hygon-FlagOS) | + | Metax-C550 | [MiniCPM-o-4.5-metax-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-metax-FlagOS) | [MiniCPM-o-4.5-metax-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-metax-FlagOS) | + | Iluvatar-BIV150 | [MiniCPM-o-4.5-iluvatar-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-iluvatar-FlagOS) | [MiniCPM-o-4.5-iluvatar-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-iluvatar-FlagOS) | + | Ascend-A3 | [MiniCPM-o-4.5-ascend-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-ascend-FlagOS) | [MiniCPM-o-4.5-ascend-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-ascend-FlagOS) | + | Zhenwu-810E | [MiniCPM-o-4.5-zhenwu-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-zhenwu-FlagOS) | [MiniCPM-o-4.5-zhenwu-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-zhenwu-FlagOS) | -### With LLaMA-Factory +##### From Scratch -We support fine-tuning MiniCPM-o 2.6 and MiniCPM-V 2.6 with the LLaMA-Factory framework. 
LLaMA-Factory provides a solution for flexibly customizing the fine-tuning (Lora/Full/Qlora) of 200+ LLMs without the need for coding through the built-in web UI LLaMABoard. It supports various training methods like sft/ppo/dpo/kto and advanced algorithms like Galore/BAdam/LLaMA-Pro/Pissa/LongLoRA. +- Dependencies: Python 3.12, GLIBC 2.39, GLIBCXX 3.4.33, CXXABI 1.3.15 +###### Transformers -Best Practices: [MiniCPM-o 2.6 | MiniCPM-V 2.6](./docs/llamafactory_train_and_infer.md). +- Installing the FlagOS Operator Library + Official Repository: https://github.com/flagos-ai/FlagGems + + ```shell + pip install flag-gems==4.2.1rc0 + ``` -### With the SWIFT Framework +- Installing the FlagOS Compiler -We now support MiniCPM-V series fine-tuning with the SWIFT framework. SWIFT supports training, inference, evaluation and deployment of nearly 200 LLMs and MLLMs . It supports the lightweight training solutions provided by PEFT and a complete Adapters Library including techniques such as NEFTune, LoRA+ and LLaMA-PRO. + Official Repository: https://github.com/flagos-ai/flagtree + + Quick Reference for Core Dependency Versions: https://github.com/flagos-ai/FlagTree/blob/main/documents/build.md#tips-for-building + + ```shell + pip uninstall triton + + python3 -m pip install flagtree==0.4.0+3.5 --index-url=https://resource.flagos.net/repository/flagos-pypi-hosted/simple --trusted-host=https://resource.flagos.net + ``` -Best Practices:[MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md), [MiniCPM-V 2.6](https://github.com/modelscope/ms-swift/issues/1613). +- Activating Acceleration + Add `USE_FLAGOS=1` before the command for the task you want to run. 
For example, if you run:
+  ```shell
+  python3 generate_speech_from_video.py
+  ```
+
+  to have the MiniCPM-o-4.5 model generate spoken responses from video content, you can instead run:
+
+  ```shell
+  USE_FLAGOS=1 python3 generate_speech_from_video.py
+  ```
+
+  to accelerate the process with FlagOS.
+
+###### vLLM Version
+
+- Installing the FlagOS Operator Library
+
+  Official Repository: https://github.com/flagos-ai/FlagGems
+
+  ```shell
+  pip install flag-gems==4.2.1rc0
+  pip install triton==3.5.1
+  ```
+
+- Activating Acceleration
+
+  Add `USE_FLAGOS=1` before the command for the task you want to run. For example, if you run:
+  ```shell
+  vllm serve ${model_path} --dtype auto --gpu_memory_utilization 0.9 --trust-remote-code --max-num-batched-tokens 2048 --served-model-name cpmo --port ${Port}
+  ```
+
+  to start the MiniCPM-o-4.5 server, you can instead run:
+  ```shell
+  USE_FLAGOS=1 vllm serve ${model_path} --dtype auto --gpu_memory_utilization 0.9 --trust-remote-code --max-num-batched-tokens 2048 --served-model-name cpmo --port ${Port}
+  ```
+  to accelerate the process with FlagOS.
+
+#### Using FlagOS Unified Multi-Chip Backend Plugin
+
+[vllm-plugin-FL](https://github.com/flagos-ai/vllm-plugin-FL) is a plugin built for the vLLM inference/service framework. Developed on top of FlagOS’s unified multi-chip backend, it is designed to extend vLLM’s capabilities and performance across a variety of hardware environments.
+
+##### Using vllm-plugin-FL
+
+| Vendor | From Scratch | From FlagRelease |
+|:-------|:-------------|:----------------|
+| Nvidia | [vllm-plugin-FL/MiniCPM-o-4.5](https://github.com/flagos-ai/vllm-plugin-FL/blob/main/examples/minicpm/README.md) | [MiniCPM-o-4.5-ModelScope](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS), [MiniCPM-o-4.5-HuggingFace](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) |
+
+</details>
+ +### vLLM, SGLang, llama.cpp, Ollama + +We support inference with vLLM, SGLang, llama.cpp and Ollama. Refer to our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-Cookbook) for more details. + +### LLaMA-Factory, SWIFT + +We support fine-tuning with LLaMA-Factory, SWIFT. Refer to our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-Cookbook) for more details. ## Awesome work using MiniCPM-V & MiniCPM-o - [text-extract-api](https://github.com/CatchTheTornado/text-extract-api): Document extraction API using OCRs and Ollama supported models ![GitHub Repo stars](https://img.shields.io/github/stars/CatchTheTornado/text-extract-api) @@ -2058,14 +2855,12 @@ Best Practices:[MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/d - [paperless-gpt](https://github.com/icereed/paperless-gpt): Use LLMs to handle paperless-ngx, AI-powered titles, tags and OCR ![GitHub Repo stars](https://img.shields.io/github/stars/icereed/paperless-gpt) - [Neuro](https://github.com/kimjammer/Neuro): A recreation of Neuro-Sama, but running on local models on consumer hardware ![GitHub Repo stars](https://img.shields.io/github/stars/kimjammer/Neuro) -## FAQs -Click here to view the [FAQs](./docs/faqs.md) - ## Limitations -As an experimental trial, we find MiniCPM-o 2.6 has notable limitations worth further investigation and improvement. -- **Unstable speech output.** The speech generation can be flawed with noisy backgrounds and unmeaningful sounds. -- **Repeated response.** The model tends to repeat its response when encountering similar consecutive user queries. -- **High-latency on Web Demo.** Users may experience unusual high-latency when using web demo hosted on overseas servers. We recommend deploying the demo locally or with good network connections. +As an experimental trial, we find MiniCPM-o 4.5 has notable limitations worth further investigation and improvement. 
+- **Foundation Capability.** The foundation capability underlying full-duplex omni-modal live streaming still needs improvement.
+- **Unstable Speech Output in Omni Mode.** Speech synthesis can mispronounce characters in full-duplex omni-modal live streaming mode.
+- **Mixed Language.** The model can sometimes respond with mixed English and Chinese in speech and omni mode.
+- **High Latency on Web Demo.** Users may experience unusually high latency, or even miss parts of the model output, when using our web demo hosted on overseas servers. We recommend deploying the demo locally or using a good network connection.

## Model License

@@ -2079,6 +2874,15 @@ As MLLMs, MiniCPM-o/V models generate content by learning a large number of mult

We will not be liable for any problems arising from the use of MiniCPM-o/V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination, or misuse of the model.

+## Acknowledgements
+
+We would like to thank the following projects:
+* [Qwen3](https://huggingface.co/Qwen/Qwen3-8B) for providing the language backbone
+* [SigLIP2](https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/README_siglip2.md) for providing the vision understanding module
+* [Whisper](https://github.com/openai/whisper) for providing the audio and speech understanding module
+* [CosyVoice2](https://github.com/FunAudioLLM/CosyVoice) and [Step-Audio2](https://github.com/stepfun-ai/Step-Audio2) for providing the speech tokenizer and the high-efficiency Token2Wav module
+* [Transformers](https://github.com/huggingface/transformers)
+

## Institutions

diff --git a/README_zh.md b/README_zh.md
index 8228040..e1c1cbd 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -1,87 +1,54 @@
- + + +**端侧可用的 Gemini 2.5 Flash 级视觉、语音、全双工多模态实时流式大模型** -**端侧可用的 GPT-4o 级视觉、语音、多模态实时流式大模型** - - 中文 | - [English](./README.md) - - - - - 微信社区  | - - -  🍳 使用指南  - - -
-

- MiniCPM-V 4.5 🤗 🤖 | MiniCPM-o 2.6 🤗 🤖 | - 📄 技术报告 + +

+ MiniCPM-o 4.5 🤗 🤖 | MiniCPM-V 4.0 🤗 | 🍳 使用指南 | + 📄 MiniCPM-V 4.5 技术报告

+**MiniCPM-o** 是从 **MiniCPM-V** 升级而来的最新一代端侧多模态大模型(MLLM)系列。该系列模型现已支持图像、视频、文本及音频输入,并能够以端到端的方式提供高质量的文本与语音输出。MiniCPM-o 旨在实现卓越的性能与高效的部署。目前该系列中最受关注的模型包括: -**MiniCPM-V** 端侧多模态大模型系列可以以端到端方式,接受图像、视频、文本、音频作为输入,并生成高质量文本和语音输出。**MiniCPM-o** 进一步支持端到端的语音输入输出。自2024年2月以来,我们以实现高性能和高效部署为目标,发布了7个版本的模型。目前系列中最值得关注的模型包括: - -- **MiniCPM-V 4.5**:🔥🔥🔥 MiniCPM-V 系列中最新、最强大的模型。总参数量 8B,在**视觉能力上超越了 GPT-4o-latest、Gemini-2.0 Pro 以及 Qwen2.5-VL 72B**,成为开源社区中性能最强的端侧多模态模型。本版本带来了全新特性,**包括高效的高帧率与长视频理解(视频 token 压缩率最高可达 96 倍)、可控的快思考/深思考模式、出色的手写体 OCR 与复杂表格/文档解析能力**。同时,它进一步强化了 MiniCPM-V 系列广受欢迎的特性,如可靠性、多语言支持与端侧可部署性。 - -- **MiniCPM-o 2.6**: ⭐️⭐️⭐️ MiniCPM-o 系列中性能最佳模型。总参数量 8B,**视觉、语音和多模态流式能力达到了 GPT-4o-202405 级别**,是开源社区中模态支持最丰富、性能最佳的模型之一。在新的语音模式中,MiniCPM-o 2.6 **支持可配置声音的中英双语语音对话,还具备情感/语速/风格控制、端到端声音克隆、角色扮演等进阶能力**。模型也进一步提升了 MiniCPM-V 2.6 的 **OCR、可信行为、多语言支持和视频理解等视觉能力**。基于其领先的视觉 token 密度,MiniCPM-V 2.6 成为了**首个支持在 iPad 等端侧设备上进行多模态实时流式交互**的多模态大模型。 - +- **MiniCPM-o 4.5**: 🔥🔥🔥 该系列最新、最强大的模型。总参数量 9B,在视觉、语音及全双工多模态实时流式交互方面的表现**接近 Gemini 2.5 Flash**,是目前开源社区中功能最全面、性能最强的模型之一。全新的全双工多模态实时流能力意味着输出流(语音和文本)与实时输入流(视频和音频)互不阻塞。这使得 MiniCPM-o 4.5 能够**在实时全模态对话中实现“边看、边听、边说”**,并能**进行如“主动提醒”等主动交互**。优化后的语音模式支持更自然、更具表现力且更稳定的中英双语实时交互,并支持声音克隆。此外,它还进一步增强了 MiniCPM-V 原有的视觉能力,包括出色的 OCR 能力、低幻觉率、以及多语言支持等。为了让这种全双工多模态实时流体验在 PC 等端侧设备上普及,我们还同步推出了**高性能的 llama.cpp-omni 推理框架以及 WebRTC Demo**。 +- **MiniCPM-V 4.0**: ⭐️⭐️⭐️ MiniCPM-V 系列中的一款高效模型。该模型拥有 4B 参数,在 OpenCompass 图像理解榜单中超越了 GPT-4.1-mini-20250414。得益于其轻量级的参数规模和高效的架构设计,MiniCPM-V 4.0 是手机端侧部署的理想选择。 ## 更新日志 #### 📌 置顶 +* [2026.02.03] 🔥🔥🔥 我们开源了 MiniCPM-o 4.5,该模型视觉和语音能力达到了 Gemini 2.5 Flash 水平,同时支持全双工多模态流式交互。欢迎试用! * [2025.09.18] 📢📢📢 MiniCPM-V 4.5 技术报告已发布! 欢迎点击[这里](./docs/MiniCPM_V_4_5_Technical_Report.pdf)查看. 
- -* [2025.09.01] ⭐️⭐️⭐️ MiniCPM-V 4.5 已被 [llama.cpp](https://github.com/ggml-org/llama.cpp/pull/15575)、[vLLM](https://github.com/vllm-project/vllm/pull/23586) 和 [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/pull/9022) 等框架官方支持,欢迎从官方入口直接使用!更多框架如 [Ollama](https://github.com/ollama/ollama/pull/12078) 和 [SGLang](https://github.com/sgl-project/sglang/pull/9610) 的官方支持正在持续适配中! - * [2025.08.26] 🔥🔥🔥 我们开源了 MiniCPM-V 4.5,其视觉性能超越了 GPT-4o-latest、Gemini-2.0 Pro 和 Qwen2.5-VL 72B。它不仅延续并强化了 MiniCPM-V 的热门能力,还带来了诸多实用的新功能。欢迎试用! - - -* [2025.08.05] 🚀🚀🚀 我们开源了 MiniCPM-V 4.0,该模型在图像理解能力上超越了 GPT-4.1-mini-20250414。该模型不仅继承了 MiniCPM-V 2.6 的众多实用特性,还大幅提升了推理效率。我们还同步开源了适用于 iPhone 和 iPad 的 iOS 应用,欢迎试用! - - * [2025.08.01] 🔥🔥🔥 我们开源了 [MiniCPM-V & o Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook),提供针对不同人群的全场景使用指南,配合最新的[文档网站](https://minicpm-o.readthedocs.io/en/latest/index.html)上手更轻松! - -* [2025.06.20] ⭐️⭐️⭐️ MiniCPM-o 的 Ollama [官方仓库](https://ollama.com/openbmb)正式支持 MiniCPM-o 2.6 等模型啦,欢迎[一键使用](https://ollama.com/openbmb/minicpm-o2.6)! - * [2025.03.01] 🚀🚀🚀 MiniCPM-o 系列的对齐技术 RLAIF-V 被 CVPR 2025 接收了!其[代码](https://github.com/RLHF-V/RLAIF-V)、[数据](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset)、[论文](https://arxiv.org/abs/2405.17220)均已开源。 - * [2025.01.24] 📢📢📢 MiniCPM-o 2.6 技术报告已发布! 欢迎点击[这里](https://openbmb.notion.site/MiniCPM-o-2-6-A-GPT-4o-Level-MLLM-for-Vision-Speech-and-Multimodal-Live-Streaming-on-Your-Phone-185ede1b7a558042b5d5e45e6b237da9)查看. - -* [2025.01.23] 💡💡💡 MiniCPM-o 2.6 现在已被北大团队开发的 [Align-Anything](https://github.com/PKU-Alignment/align-anything),一个用于对齐全模态大模型的框架集成,支持 DPO 和 SFT 在视觉和音频模态上的微调。欢迎试用! 
- -* [2025.01.19] 📢 **注意!** 我们正在努力将 MiniCPM-o 2.6 的支持合并到 llama.cpp、Ollama、vLLM 的官方仓库,但还未完成。请大家暂时先使用我们提供的 fork 来进行部署:[llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md)、[Ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md)、[vllm](https://github.com/OpenBMB/MiniCPM-o?tab=readme-ov-file#efficient-inference-with-llamacpp-ollama-vllm)。 **合并完成前,使用官方仓库可能会导致不可预期的问题**。 - * [2025.01.19] ⭐️⭐️⭐️ MiniCPM-o 在 GitHub Trending 上登顶, Hugging Face Trending 上也达到了第二! - -* [2025.01.17] 我们更新了 MiniCPM-o 2.6 int4 量化版本的使用方式,解决了模型初始化的问题,欢迎点击[这里](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4)试用! - -* [2025.01.13] 🔥🔥🔥 我们开源了 MiniCPM-o 2.6,该模型视觉、语音和多模态流式能力达到了 GPT-4o-202405 级别,进一步优化了 MiniCPM-V 2.6 的众多亮点能力,还支持了很多有趣的新功能。欢迎试用! - -* [2024.08.17] 🚀🚀🚀 llama.cpp [官方仓库](https://github.com/ggerganov/llama.cpp)正式支持 MiniCPM-V 2.6 啦!点击[这里](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf)查看各种大小的 GGUF 版本。 - -* [2024.08.06] 🔥🔥🔥 我们开源了 MiniCPM-V 2.6,该模型在单图、多图和视频理解方面取得了优于 GPT-4V 的表现。我们还进一步提升了 MiniCPM-Llama3-V 2.5 的多项亮点能力,并首次支持了 iPad 上的实时视频理解。欢迎试用! - -* [2024.08.03] MiniCPM-Llama3-V 2.5 技术报告已发布!欢迎点击[这里](https://arxiv.org/abs/2408.01800)查看。 - * [2024.05.23] 🔥🔥🔥 MiniCPM-V 在 GitHub Trending 和 Hugging Face Trending 上登顶!MiniCPM-Llama3-V 2.5 Demo 被 Hugging Face 的 Gradio 官方账户推荐,欢迎点击[这里](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5)体验! -
点击查看完整更新日志。 +* [2025.09.01] ⭐️⭐️⭐️ MiniCPM-V 4.5 已被 [llama.cpp](https://github.com/ggml-org/llama.cpp/pull/15575)、[vLLM](https://github.com/vllm-project/vllm/pull/23586) 和 [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory/pull/9022) 等框架官方支持,欢迎从官方入口直接使用!更多框架如 [Ollama](https://github.com/ollama/ollama/pull/12078) 和 [SGLang](https://github.com/sgl-project/sglang/pull/9610) 的官方支持正在持续适配中! +* [2025.08.05] 🚀🚀🚀 我们开源了 MiniCPM-V 4.0,该模型在图像理解能力上超越了 GPT-4.1-mini-20250414。该模型不仅继承了 MiniCPM-V 2.6 的众多实用特性,还大幅提升了推理效率。我们还同步开源了适用于 iPhone 和 iPad 的 iOS 应用,欢迎试用! +* [2025.06.20] ⭐️⭐️⭐️ MiniCPM-o 的 Ollama [官方仓库](https://ollama.com/openbmb)正式支持 MiniCPM-o 2.6 等模型啦,欢迎[一键使用](https://ollama.com/openbmb/minicpm-o2.6)! +* [2025.01.23] 💡💡💡 MiniCPM-o 2.6 现在已被北大团队开发的 [Align-Anything](https://github.com/PKU-Alignment/align-anything),一个用于对齐全模态大模型的框架集成,支持 DPO 和 SFT 在视觉和音频模态上的微调。欢迎试用! +* [2025.01.19] 📢 **注意!** 我们正在努力将 MiniCPM-o 2.6 的支持合并到 llama.cpp、Ollama、vLLM 的官方仓库,但还未完成。请大家暂时先使用我们提供的 fork 来进行部署:[llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md)、[Ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md)、[vllm](https://github.com/OpenBMB/MiniCPM-o?tab=readme-ov-file#efficient-inference-with-llamacpp-ollama-vllm)。 **合并完成前,使用官方仓库可能会导致不可预期的问题**。 +* [2025.01.17] 我们更新了 MiniCPM-o 2.6 int4 量化版本的使用方式,解决了模型初始化的问题,欢迎点击[这里](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4)试用! +* [2025.01.13] 🔥🔥🔥 我们开源了 MiniCPM-o 2.6,该模型视觉、语音和多模态流式能力达到了 GPT-4o-202405 级别,进一步优化了 MiniCPM-V 2.6 的众多亮点能力,还支持了很多有趣的新功能。欢迎试用! +* [2024.08.17] 🚀🚀🚀 llama.cpp [官方仓库](https://github.com/ggerganov/llama.cpp)正式支持 MiniCPM-V 2.6 啦!点击[这里](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf)查看各种大小的 GGUF 版本。 * [2024.08.15] MiniCPM-V 2.6 现在支持多图像 SFT。有关更多详细信息,请参阅[微调文档](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune) * [2024.08.14] MiniCPM-V 2.6 现在可以通过 SWIFT 框架 [微调](https://github.com/modelscope/ms-swift/issues/1613) 了! 
* [2024.08.10] 🚀🚀🚀 llama.cpp [官方仓库](https://github.com/ggerganov/llama.cpp)正式支持 MiniCPM-Llama3-V 2.5 啦!点击[这里](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/tree/main)查看各种大小的 GGUF 版本。 +* [2024.08.06] 🔥🔥🔥 我们开源了 MiniCPM-V 2.6,该模型在单图、多图和视频理解方面取得了优于 GPT-4V 的表现。我们还进一步提升了 MiniCPM-Llama3-V 2.5 的多项亮点能力,并首次支持了 iPad 上的实时视频理解。欢迎试用! +* [2024.08.03] MiniCPM-Llama3-V 2.5 技术报告已发布!欢迎点击[这里](https://arxiv.org/abs/2408.01800)查看。 * [2024.07.19] MiniCPM-Llama3-V 2.5 现已支持[vLLM](#vllm-部署-) ! * [2024.06.03] 现在,你可以利用多张低显存显卡(12G/16G)进行GPU串行推理。详情请参见该[文档](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md)配置。 * [2024.05.28] 💫 我们现在支持 MiniCPM-Llama3-V 2.5 的 LoRA 微调,更多内存使用统计信息可以在[这里](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#model-fine-tuning-memory-usage-statistics)找到。 @@ -94,959 +61,1615 @@ * [2024.04.18] 我们在 HuggingFace Space 新增了 MiniCPM-V 2.0 的 [demo](https://huggingface.co/spaces/openbmb/MiniCPM-V-2),欢迎体验! * [2024.04.17] MiniCPM-V 2.0 现在支持用户部署本地 [WebUI Demo](#本地webui-demo部署) 了,欢迎试用! * [2024.04.15] MiniCPM-V 2.0 现在可以通过 SWIFT 框架 [微调](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md) 了,支持流式输出! -* [2024.04.12] 我们开源了 MiniCPM-V 2.0,该模型刷新了 OCRBench 开源模型最佳成绩,在场景文字识别能力上比肩 Gemini Pro,同时还在综合了 11 个主流多模态大模型评测基准的 OpenCompass 榜单上超过了 Qwen-VL-Chat 10B、CogVLM-Chat 17B 和 Yi-VL 34B 等更大参数规模的模型!点击这里查看 MiniCPM-V 2.0 技术博客。 +* [2024.04.12] 我们开源了 MiniCPM-V 2.0,该模型刷新了 OCRBench 开源模型最佳成绩,在场景文字识别能力上比肩 Gemini Pro,同时还在综合了 11 个主流多模态大模型评测基准的 OpenCompass 榜单上超过了 Qwen-VL-Chat 10B、CogVLM-Chat 17B 和 Yi-VL 34B 等更大参数规模的模型!点击 这里 查看 MiniCPM-V 2.0 技术博客。 * [2024.03.14] MiniCPM-V 现在支持 SWIFT 框架下的[微调](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md)了,感谢 [Jintao](https://github.com/Jintao-Huang) 的贡献! * [2024.03.01] MiniCPM-V 现在支持在 Mac 电脑上进行部署! * [2024.02.01] 我们开源了 MiniCPM-V 和 OmniLMM-12B,分别可以支持高效的端侧部署和同规模领先的多模态能力! -
+
## 目录 -- [MiniCPM-V 4.5](#minicpm-v-45) -- [MiniCPM-o 2.6](#minicpm-o-26) -- [Chat with Our Demo on Gradio 🤗](#chat-with-our-demo-on-gradio-) -- [推理](#推理) - - [模型库](#模型库) - - [多轮对话](#多轮对话) - - [多图对话](#多图对话) - - [少样本上下文对话](#少样本上下文对话) - - [视频对话](#视频对话) - - [语音对话](#语音对话) - - [多模态流式交互](#多模态流式交互) - - [多卡推理](#多卡推理) - - [Mac 推理](#mac-推理) - - [基于 llama.cpp、Ollama、vLLM 的高效推理](#基于-llamacppollamavllm-的高效推理) -- [微调](#微调) +- [MiniCPM-o 4.5](#minicpm-o-45) +- [MiniCPM-V 4.0](#minicpm-v-40) - [MiniCPM-V \& o 使用手册](#minicpm-v--o-使用手册) +- [模型库](#模型库) +- [简易推理](#简易推理) + - [模型初始化](#模型初始化) + - [双工全模态模式](#双工全模态模式) + - [单工全模态模式](#单工全模态模式) + - [语音与音频模式](#语音与音频模式) +- [框架支持](#框架支持) + - [FlagOS](#flagos) + - [vLLM、SGLang、llama.cpp、Ollama](#vllmsglangllamacppollama) + - [LLaMA-Factory、SWIFT](#llama-factoryswift) - [基于 MiniCPM-V \& MiniCPM-o 的更多项目](#基于-minicpm-v--minicpm-o-的更多项目) -- [FAQs](#faqs) - [模型局限性](#模型局限性) +- [致谢](#致谢) +## MiniCPM-o 4.5 -## MiniCPM-V 4.5 +MiniCPM-o 4.5 是 MiniCPM-o 系列中最新且性能最强的模型。该模型采用端到端方式构建,基于 SigLip2、Whisper-medium、CosyVoice2 和 Qwen3-8B,总参数量为 9B。其在性能上实现了显著提升,并引入了全新的全双工多模态实时流式交互能力。MiniCPM-o 4.5 的主要特性包括: - -**MiniCPM-V 4.5** 是 MiniCPM-V 系列中最新、最强大的模型。该模型基于 Qwen3-8B 与 SigLIP2-400M 构建,总参数量为 8B。其在性能上较前代 MiniCPM-V 与 MiniCPM-o 有显著提升,并引入了一系列全新的实用特性。其主要亮点包括: - - -- 🔥 **领先的视觉理解能力** - MiniCPM-V 4.5 在 OpenCompass 综合评测(涵盖 8 个主流评测基准)中取得了 77.0 的高分。**在仅 8B 参数的情况下超越了广泛使用的闭源模型(如 GPT-4o-latest、Gemini-2.0 Pro)以及强大的开源模型(如 Qwen2.5-VL 72B)**,成为 30B 参数规模以下最强的多模态大模型。 - -- 🎬 **高效的高帧率与长视频理解** - 借助全新的图像-视频统一 3D-Resampler,MiniCPM-V 4.5 能够实现 96 倍视频 token 压缩率,即将 6 帧 448x448 视频帧联合压缩为 64 个 token(大多数多模态大模型需约 1536 个 token)。这意味着模型在语言模型推理成本不增加的情况下,可以感知显著更多的视频帧,从而实现业界领先的 高帧率(最高 10FPS)视频理解与长视频理解,并在 Video-MME、LVBench、MLVU、MotionBench、FavorBench 等基准上高效率地展现出色性能。 - -- ⚙️ **可控的快思考 / 深思考模式** - MiniCPM-V 4.5 同时支持 快思考(用于高频高效推理,性能具竞争力)与 深思考(用于复杂问题求解)。用户可根据不同场景对效率与性能的权衡,自由切换两种模式,实现高度可控的推理过程。 - -- 💪 **优秀的 OCR、文档解析与多语言能力** - 基于 [LLaVA-UHD](https://arxiv.org/pdf/2403.11703) 架构,MiniCPM-V 4.5 能处理任意长宽比、最高达 180 万像素(如 
1344x1344) 的高分辨率图像,同时使用的视觉 token 数仅为多数 MLLM 的 1/4。其在 OCRBench 上取得超越 GPT-4o-latest 与 Gemini 2.5 等闭源模型的性能,并在 OmniDocBench 上展现了业界顶尖的 PDF 文档解析能力。借助最新的 [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) 和 [VisCPM](https://github.com/OpenBMB/VisCPM) 技术,模型在可靠性上表现优异,在 MMHal-Bench 上超越 GPT-4o-latest,并支持 30+ 种语言的多语言能力。 - -- 💫 **便捷易用的部署方式** - MiniCPM-V 4.5 提供丰富灵活的使用方式:(1) [llama.cpp](https://github.com/tc-mb/llama.cpp/blob/master/docs/multimodal/minicpmo4.5.md) 与 [ollama](https://github.com/tc-mb/ollama/tree/MIniCPM-V) 支持本地 CPU 高效推理;(2) 提供 [int4](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4)、[GGUF](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf)、[AWQ](https://github.com/tc-mb/AutoAWQ) 等 16 种规格的量化模型;(3)兼容 SGLang 与 [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) (4) 借助 [Transformers](https://github.com/tc-mb/transformers/tree/main) 与 [LLaMA-Factory](./docs/llamafactory_train_and_infer.md) 在新领域与任务上进行微调;(5) 快速启动本地 [WebUI demo](#chat-with-our-demo-on-gradio);(6) 优化适配的 [iOS 本地应用](https://github.com/tc-mb/MiniCPM-o-demo-iOS),可在 iPhone 与 iPad 上高效运行;(7) 在线 [Web demo](http://101.126.42.235:30910/) 体验。更多使用方式请见 [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook)。 - -### 技术亮点 - -- **架构:图像-视频统一的高密度视觉压缩 3D-Resampler**。 MiniCPM-V 4.5 在架构上引入了 3D-Resampler,成功突破了视频理解任务中性能与效率难以兼得的瓶颈。该方法能够将多达 6 帧连续视频帧压缩为仅 64 个 token(与 MiniCPM-V 系列中单张图像所用的 token 数相同),从而实现 96× 的视频 token 压缩率。这使得模型在语言模型计算成本不增加的情况下,可以处理更多的视频帧,从而实现高帧率视频理解和长视频理解。该架构统一支持单图、多图和视频的编码处理,确保了能力与知识的无缝迁移。 - -- **学习机制:OCR与文档知识的统一学习**。现有多模态大模型一般在不同训练阶段分别单独训练 OCR 能力与文档知识。我们发现这两个训练过程的本质差异在于图像中文本的可见性。通过动态对文档文本区域施加不同强度的噪声干扰,并要求模型重建文本,使其学会自适应地在准确文本识别(当文本清晰时)与基于多模态上下文的知识推理(当文本严重遮挡时)之间切换。这种方法使得 MiniCPM-V 在文档知识学习中摆脱了对高错误率的文档解析器的依赖,同时避免了过度增强的 OCR 数据产生的幻觉问题,以最小工程开销实现了顶尖的 OCR 与多模态知识处理性能。 - -- **后训练优化:基于多模态强化学习的混合快思考/深度思考模式**。 MiniCPM-V 4.5 通过两种可切换推理模式提供均衡的体验:面向高效日常应用的快速思考模式,以及处理复杂任务的深度思考模式。采用新颖的混合强化学习方法,模型可联合优化两种模式,在保持深度模式能力的同时显著提升快速模式性能。结合 [RLPR](https://github.com/OpenBMB/RLPR) 和 [RLAIF-V](https://github.com/RLHF-V/RLAIF-V) 
技术,该模型可以从海量多模态数据中泛化出强大的推理能力,并有效减少幻觉现象。 - -
- -
- -### 性能评估 - -
- -
-
- -
- - -### 推理效率 - - -**OpenCompass** -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ModelSizeAvg Score ↑Total Inference Time ↓
GLM-4.1V-9B-Thinking10.3B76.617.5h
MiMo-VL-7B-RL8.3B76.411h
MiniCPM-V 4.58.7B77.07.5h
-
- -**Video-MME** - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ModelSizeAvg Score ↑Total Inference Time ↓GPU Mem ↓
Qwen2.5-VL-7B-Instruct8.3B71.63h60G
GLM-4.1V-9B-Thinking10.3B73.62.63h32G
MiniCPM-V 4.58.7B73.50.26h28G
- - -OpenCompass 和 Video-MME 均采用 A100*8卡 推理,其中 Video-MME 的推理时间未统计视频抽帧时间 - -### 典型示例 -
- -
- -
- zh_case1 - zh_case2 -
- -
-点击查看更多示例 -
- en_extra - en_extra -
-
- - -我们使用 [iOS demo](https://github.com/tc-mb/MiniCPM-o-demo-iOS) 将 MiniCPM-V 4.5 部署在 iPad M4 ,并录制以下演示录屏,视频未经任何编辑。 - - -

- -      - -

-

- -      - -

-
- - -## MiniCPM-o 2.6 - - -MiniCPM-o 2.6 是 MiniCPM-o 系列的最新、性能最佳模型。该模型基于 SigLip-400M、Whisper-medium-300M、ChatTTS-200M 和 Qwen2.5-7B 构建,共 8B 参数,通过端到端方式训练和推理。相比 MiniCPM-V 2.6,该模型在性能上有了显著提升,并支持了实时语音对话和多模态流式交互的新功能。MiniCPM-o 2.6 的主要特性包括: - - -- 🔥 **领先的视觉能力。** -MiniCPM-o 2.6 在 OpenCompass 榜单上(综合 8 个主流多模态评测基准)平均得分 70.2,**以 8B 量级的大小在单图理解方面超越了 GPT-4o-202405、Gemini 1.5 Pro 和 Claude 3.5 Sonnet 等主流商用闭源多模态大模型**。此外,它的多图和视频理解表现也**优于 GPT-4V 和 Claude 3.5 Sonnet**,并展现出了优秀的上下文学习能力。 - -- 🎙 **出色的语音能力。** -MiniCPM-o 2.6 **支持可配置声音的中英双语实时对话**。MiniCPM-o 2.6 在语音理解任务(如 ASR 和 STT 等)**优于 GPT-4o-realtime**,并在语音对话的语义和声学评估中展现了**开源模型中最高的语音生成性能**。它还支持情绪/语速/风格控制、语音克隆、角色扮演等进阶能力。 - -- 🎬 **强大的多模态流式交互能力。** -作为一项新功能,MiniCPM-o 2.6 能够**接受连续的视频和音频流,并和用户进行实时语音交互**。在针对实时视频理解、全模态视音频理解、多模态上下文理解的综合评测基准 StreamingBench 中,MiniCPM-o 2.6 取得开源社区最佳水平,并**超过了 GPT-4o-202408 和 Claude 3.5 Sonnet**。 - -- 💪 **强大的 OCR 能力及其他功能。** -MiniCPM-o 2.6 进一步优化了 MiniCPM-V 2.6 的众多视觉理解能力,其可以处理任意长宽比的图像,像素数可达 180 万(如 1344x1344)。在 OCRBench 上取得**25B 以下最佳水平,超过 GPT-4o-202405 等商用闭源模型**。基于最新的 [RLHF-V](https://rlhf-v.github.io/)、[RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) 和 [VisCPM](https://github.com/OpenBMB/VisCPM) 技术,其具备了**可信的多模态行为**,在 MMHal-Bench 上超过了 GPT-4o 和 Claude 3.5,并支持英语、中文、德语、法语、意大利语、韩语等**30多种语言**。 - -- 🚀 **卓越的效率。** -除了对个人用户友好的模型大小,MiniCPM-o 2.6 还表现出**最先进的视觉 token 密度**(即每个视觉 token 编码的像素数量)。它**仅需 640 个 token 即可处理 180 万像素图像,比大多数模型少 75%**。这一特性优化了模型的推理速度、首 token 延迟、内存占用和功耗。因此,MiniCPM-o 2.6 可以支持 iPad 等终端设备上的高效**多模态实时流式交互**。 - - -- 💫 **易于使用。** -MiniCPM-o 2.6 可以通过多种方式轻松使用:(1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) 支持在本地设备上进行高效的 CPU 推理,(2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) 和 [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) 格式的量化模型,有 16 种尺寸,(3) [vLLM](#基于-llamacppollamavllm-的高效推理) 支持高吞吐量和内存高效的推理,(4) 通过[LLaMA-Factory](./docs/llamafactory_train_and_infer.md)框架针对新领域和任务进行微调,(5) 使用 [Gradio](#本地-webui-demo-) 快速设置本地 WebUI 演示,(6) 部署于服务器的在线 
[demo](https://minicpm-omni-webdemo-us.modelbest.cn/)。

- 🔥 **领先的视觉能力**
  MiniCPM-o 4.5 在涵盖 8 个主流评测基准的 OpenCompass 综合评估中获得了 78.2 的平均分。**仅凭 9B 参数,其视觉理解能力超越了 GPT-4o、Gemini 2.0 Pro 等广泛使用的商业模型**,接近 Gemini 2.5 Flash 水平。该模型在单一模型中同时支持指令模式和思考模式,能够更好地平衡不同使用场景下的效率与性能。
- 🎙 **强大的语音能力**
  MiniCPM-o 4.5 支持**音色可配置的中英双语实时语音对话**。其语音对话**更加自然、富有表现力且稳定**。该模型还支持更多有趣的功能,如**通过简单的参考音频进行声音克隆和角色扮演**,其声音克隆表现甚至超越了 CosyVoice2 等优秀 TTS 工具。
- 🎬 **全双工及主动多模态实时流式交互能力**
  MiniCPM-o 4.5 的一项核心新特性是能够以端到端的方式同时处理实时连续的视频和音频输入流,并同步生成文本和语音输出流,且互不阻塞。这**使得 MiniCPM-o 4.5 能够同时“看、听、说”**,提供流畅的实时全模态对话体验。除了被动响应,模型还能进行**主动交互**,例如基于对场景的实时持续理解,主动发起提醒或评论。
- 💪 **高效率、强悍的 OCR 能力及其他特性**
  MiniCPM-o 4.5 进一步优化了 MiniCPM-V 系列的众多视觉能力,可以高效处理**任意长宽比的高分辨率图像**(最高 180 万像素)和**高帧率视频**(最高 10fps)。其在 OmniDocBench **端到端英文文档解析测试中达到了业内顶尖水平**,超越了 Gemini-3 Flash 和 GPT-5 等商业模型以及 DeepSeek-OCR 2 等专用工具。此外,它还具备**可信的多模态行为**,在 MMHal-Bench 上与 Gemini 2.5 Flash 相当,并**支持超过 30 种语言**。
- 💫 **便捷的使用体验**
  MiniCPM-o 4.5 提供了多种便捷的使用方式:(1) 支持 [llama.cpp](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-o4_5_llamacpp.md) 和 [Ollama](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-o4_5_ollama.md),以便在本地设备上进行高效的 CPU 推理;(2) 提供 16 种尺寸的 [int4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/awq/minicpm-o4_5_awq_quantize.md) 和 [GGUF](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/gguf/minicpm-o4_5_gguf_quantize.md) 格式量化模型;(3) 支持 [vLLM](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-o4_5_vllm.md) 和 [SGLang](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-o4_5_sglang.md),实现高吞吐、显存高效的推理;(4) 支持 [FlagOS](#flagos) 统一多芯片后端插件;(5) 支持通过 [LLaMA-Factory](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/llama-factory/finetune_llamafactory.md) 在新领域和任务上进行微调;(6) 提供服务器端[在线 Web Demo](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/web_demo/gradio/README_o45.md)。我们还同步推出了高性能的
[llama.cpp-omni](https://github.com/tc-mb/llama.cpp-omni) 推理框架以及 [WebRTC Demo](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/web_demo/WebRTC_Demo/README_zh.md),**让全双工多模态实时流式交互体验在 [PC](https://github.com/tc-mb/llama.cpp-omni/blob/master/tools/omni/README.md) 等本地设备上触手可及**。 **模型架构。** -- **端到端全模态架构。** 通过**端到端**的方式连接和训练不同模态的编/解码模块以充分利用丰富的多模态知识。模型完全使用 CE 损失端到端训练。 -- **全模态流式机制。** (1) 我们将不同模态的离线编/解码器改造为适用于**流式输入/输出**的在线模块。 (2) 我们针对大语言模型基座设计了**时分复用的全模态流式信息处理机制**,将平行的不同模态的信息流拆分重组为周期性时间片序列。 -- **可配置的声音方案。** 我们设计了新的多模态系统提示,包含传统文本系统提示词,和**用于指定模型声音的语音系统提示词**。模型可在推理时灵活地通过文字或语音样例控制声音风格,并支持端到端声音克隆和音色创建等高级能力。 +- **端到端全模态架构。** 各模态的编码器/解码器与大语言模型通过稠密特征以端到端的方式进行紧密连接。这种设计实现了更好的信息流转与控制,有助于在训练过程中充分挖掘和利用丰富的多模态知识。 +- **全双工多模态实时流机制。** (1)我们将离线模态编码器/解码器转化为支持流式输入/输出的在线全双工版本。语音解码器采用文本与语音 token 交错建模的方式,支持全双工语音生成(即与新输入实时同步),同时也提升了长语音(如超过 1 分钟)生成的稳定性。(2)时分复用:**我们在毫秒级时间线上同步所有输入和输出流**,并利用时分复用机制在语言模型主干中进行统一建模。该机制将并行的全模态流划分为微小周期性时间片内的顺序信息组,从而实现高效的全模态流式处理。 +- **主动交互机制。** 语言模型模块会持续监控输入的视频和音频流,并以 1Hz 的频率自动决策是否发言。这种高频决策能力结合全双工特性,是实现主动提醒、主动评论等“主动交互”能力的关键。 +- **可配置语音建模设计。** 我们延续了 MiniCPM-o 2.6 的多模态系统提示词设计,同时包含文本系统提示词和音频系统提示词(用于指定音色)。这使得模型在推理阶段能够通过简单的参考音频实现声音克隆和角色扮演。
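上述“时分复用”机制可以用一段示意代码来理解:把共享时间线切成微小的周期性时间片,并将并行的多模态流在每个时间片内重组为顺序信息组,交给语言模型主干统一建模。以下仅为帮助理解的假设性示意,数据结构与函数均为虚构,并非 MiniCPM-o 的真实实现:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Chunk:
    modality: str   # "video" / "audio" / "text"
    t_start: float  # 共享时间线上的起始时间(秒)
    payload: str    # 代表该片段已编码的 token/特征

def time_division_multiplex(streams: List[List[Chunk]], slice_len: float = 1.0) -> List[Chunk]:
    """把并行的多模态流合并为按周期性时间片组织的单一序列:
    先按时间片编号分组,片内再按固定模态顺序排列,
    使语言模型主干看到确定性的一维布局。"""
    order = {"video": 0, "audio": 1, "text": 2}
    chunks = [c for s in streams for c in s]
    return sorted(chunks, key=lambda c: (int(c.t_start // slice_len), order[c.modality], c.t_start))

video = [Chunk("video", 0.0, "v0"), Chunk("video", 1.0, "v1")]
audio = [Chunk("audio", 0.2, "a0"), Chunk("audio", 1.1, "a1")]
seq = time_division_multiplex([video, audio])
print([c.payload for c in seq])  # ['v0', 'a0', 'v1', 'a1']
```

真实系统与此示意的差别在于:重组是在流式条件下在线进行的,并与上文所述的 1Hz 发言决策联动。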
- +
-
- - - ### 性能评估
- +
+
+ +
+说明: * 为自测结果,其余为引用的公开结果。n/a 表示该模型不支持对应模态或任务。所有结果来自指令模式或对应模型指令版本权重。 + +  +
+
点击查看视觉理解能力详细评测结果。 -**图像理解能力** +**图像理解能力(指令模式)** +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelOpenCompassMMBench EN v1.1MMBench CN v1.1MathVistaMMVetMMMUMMStarHallusionBenchAI2DOCRBenchTextVQA_VALDocVQA_VALMMT-Bench_VALMM-IFEvalMantis-EvalMuirBenchMMSI-BenchMMHal-ScoreMMHal-Hallrate↓
Gemini2.5-Flash-Nonthinking78.586.686.075.381.4*76.375.859.187.786474.3*93.070.0*75.8*72.8*74.5*12.1*4.6*23.9*
InternVL-3.5-8B75.879.580.0*78.483.173.469.354.584.084078.292.366.756.3*70.555.8-3.8*34.7*
Qwen3-VL-8B-Instruct76.584.584.777.273.7*69.670.961.185.789682.9*96.160.9*59.4*74.2*64.411.3*4.7*29.9*
Qwen3-Omni-30B-A3B-Instruct75.784.9*84.1*75.974.8*69.168.559.785.2880*84.1*95.4*70.4*65.7*78.3*61.9*14.2*4.6*31.6*
MiniCPM-o 4.5-Instruct77.687.687.280.174.467.673.163.287.687683.894.769.766.379.772.016.64.724.3
+
+ +**图像理解能力(思考模式)** +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelOpenCompassMMBench EN v1.1MMBench CN v1.1MathVistaMMVetMMMUMMStarHallusionBenchAI2DOCRBenchTextVQA_VALDocVQA_VALMMT-Bench_VALMM-IFEval
Gemini2.5-Flash-Thinking79.987.187.379.481.2*77.776.563.588.785373.8*92.870.7*75.7*
GPT-579.785.5*85.6*81.977.681.875.765.289.580777.8*91.3*72.7*83.1*
Qwen3-VL-8B-Thinking77.385.385.581.469.8*74.175.365.484.981977.8*95.368.1*73.5*
Qwen3-Omni-30B-A3B-Thinking78.588.2*87.7*80.074.8*75.674.962.886.1859*80.8*94.2*70.9*69.9*
MiniCPM-o 4.5-Thinking78.289.087.681.073.670.273.662.688.587979.892.369.768.2
+
+**视频理解能力** +
<table>
  <thead>
    <tr>
      <th>Model</th>
      <th>Video-MME<br>(w/o subs)</th>
      <th>LVBench</th>
      <th>MLVU<br>(M-Avg)</th>
      <th>LongVideoBench<br>(val)</th>
      <th>MotionBench</th>
    </tr>
  </thead>
  <tbody>
    <tr><td>Gemini2.5-Flash-Nonthinking</td><td>75.6</td><td>62.2</td><td>77.8</td><td>-</td><td>-</td></tr>
    <tr><td>InternVL-3.5-8B</td><td>66.0</td><td>-</td><td>70.2</td><td>62.1</td><td>62.3*</td></tr>
    <tr><td>Qwen3-Omni-30B-A3B-Instruct</td><td>70.5</td><td>50.2</td><td>75.2</td><td>66.9*</td><td>61.7*</td></tr>
    <tr><td>MiniCPM-o 4.5-Instruct</td><td>70.4</td><td>50.9</td><td>76.5</td><td>66.0</td><td>61.4</td></tr>
  </tbody>
</table>
+
+ +
+ +
+点击查看文档解析能力详细评测结果。 + +**OmniDocBench** +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Method TypeMethodsOverallEdit↓TextEdit↓FormulaEdit↓TableTEDS↑TableEdit↓Read OrderEdit↓
ENZHENZHENZHENZHENZHENZH
PipelineMinerU 2.50.117*0.172*0.051*0.08*0.256*0.455*85.9*89.4*0.115*0.081*0.047*0.072*
PaddleOCR-VL0.1050.1260.0410.0620.2410.3168892.10.0930.0620.0450.063
End-to-end ModelQwen2.5-VL-72B0.2140.2610.0920.180.3150.43482.983.90.3410.2620.1060.168
GPT 50.218*0.33*0.139*0.344*0.396*0.555*77.55*73.09*0.188*0.196*0.151*0.227*
Gemini2.5-Flash-Nonthinking0.214*0.29*0.159*0.273*0.368*0.524*80.9*85.5*0.197*0.167*0.132*0.195*
Gemini-2.5-Pro-Nonthinking0.148*0.212*0.055*0.168*0.356*0.439*85.8*86.4*0.13*0.119*0.049*0.121*
Gemini-3 Flash-Nonthinking0.155*0.201*0.138*0.255*0.297*0.351*86.4*89.8*0.116*0.1*0.072*0.099*
doubao-1-5-thinking-vision-pro-2504280.140.1620.0430.0850.2950.38483.389.30.1650.0850.0580.094
dots.ocr0.1250.160.0320.0660.3290.41688.6890.0990.0920.040.067
HunyuanOCR0.12*0.125*0.046*0.071*0.288*0.33*89.6*94.4*0.089*0.045*0.055*0.056*
DeepSeek-OCR 20.119*0.146*0.041*0.08*0.256*0.345*82.6*89.9*0.123*0.078*0.055*0.081*
Qwen3-Omni-30B-A3B-Instruct0.216*0.363*0.128*0.337*0.402*0.529*77.3*71.8*0.181*0.255*0.152*0.332*
MiniCPM-o 4.5-Instruct0.1090.1620.0460.0780.2570.4186.888.90.0970.0840.0370.074
+
+
+ +
+点击查看文本能力详细评测结果。 + +**文本能力** +
<table>
  <thead>
    <tr><th>Model</th><th>IFEval-PLS</th><th>BBH</th><th>CMMLU</th><th>MMLU</th><th>HumanEval</th><th>MBPP</th><th>Math500</th><th>GSM8K</th><th>Avg</th></tr>
  </thead>
  <tbody>
    <tr><td>Qwen3-8B-Instruct</td><td>83.0*</td><td>69.4*</td><td>78.7*</td><td>81.7*</td><td>86.6*</td><td>75.9*</td><td>84.0*</td><td>93.4*</td><td>81.6</td></tr>
    <tr><td>MiniCPM-o 4.5-Instruct</td><td>84.7</td><td>81.1</td><td>79.5</td><td>77.0</td><td>86.6</td><td>76.7</td><td>77.0</td><td>94.5</td><td>82.1</td></tr>
  </tbody>
</table>
+
+
+ +
+点击查看全模态单工能力详细评测结果。 + +**全模态单工能力** +
<table>
  <thead>
    <tr><th>Model</th><th>Daily-Omni</th><th>WorldSense</th><th>Video-Holmes</th><th>JointAVBench</th><th>AVUT-Human</th><th>FutureOmni</th><th>Video-MME-Short<br>(w/ audio)</th><th>Avg</th></tr>
  </thead>
  <tbody>
    <tr><td>Gemini2.5-Flash-Nonthinking</td><td>79.3*</td><td>52.6*</td><td>51.3*</td><td>55.6*</td><td>65.4*</td><td>55.6*</td><td>85.5*</td><td>63.6</td></tr>
    <tr><td>Qwen3-Omni-30B-A3B-Instruct</td><td>70.7*</td><td>54.0</td><td>50.4*</td><td>53.1</td><td>74.2*</td><td>62.1</td><td>81.3*</td><td>63.7</td></tr>
    <tr><td>MiniCPM-o 4.5-Instruct</td><td>80.2</td><td>55.7</td><td>64.3</td><td>60.0</td><td>78.6</td><td>56.1</td><td>84.7</td><td>68.5</td></tr>
  </tbody>
</table>
+
+
+ +
+点击查看视觉双工能力详细评测结果。 + + +**视觉双工能力** + +
<table>
  <thead>
    <tr><th>Model</th><th>LiveSports-3K-CC<br>(Win Rate vs GPT4o)</th></tr>
  </thead>
  <tbody>
    <tr><td>LiveCC-7B-Instruct</td><td>41.5</td></tr>
    <tr><td>StreamingVLM</td><td>45.6</td></tr>
    <tr><td>MiniCPM-o 4.5-Instruct</td><td>54.4</td></tr>
  </tbody>
</table>
+
+
+ +
+点击查看音频理解能力详细评测结果。 + +**音频理解能力** +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelASR-ZH
CER↓
ASR-EN
WER↓
ASTMultiTaskSpeechQA
AISHELL-1AISHELL-2WenetSpeech test-netWenetSpeech test-meetingLibriSpeech test-cleanLibriSpeech
test-other
GigaSpeech testVoxPopuli-V1-EnCoVoST 2 en2zhCoVoST 2 zh2enMMAUMeldVoiceBench
AlpacaEval
Speech TriviaQASpeech
Web Questions
Speech CMMLU
kimi-audio0.62.66.35.41.32.49.4*8.0*36.6*18.3*68.4*59.14.541.9*46.4*67.0*
Qwen3-Omni-30B-A3B-Instruct0.62.3*4.75.91.22.58.7*6.4*46.6*29.4*77.556.8*4.762.9*74.9*47.8*
MiniCPM-o 4.5-Instruct0.92.55.95.71.42.88.56.249.926.476.960.24.875.570.259.2
+
+
+ +
+点击查看语音生成能力详细评测结果。 + +**语音生成能力** +
<table>
  <thead>
    <tr><th>Model</th><th>seedtts test-zh<br>CER↓</th><th>seedtts test-zh<br>SIM-o↑</th><th>seedtts test-en<br>WER↓</th><th>seedtts test-en<br>SIM-o↑</th></tr>
  </thead>
  <tbody>
    <tr><td>CosyVoice2</td><td>1.45%</td><td>74.8</td><td>2.57%</td><td>65.2</td></tr>
    <tr><td>Qwen3-Omni-30B-A3B-Instruct</td><td>1.41%</td><td>-</td><td>3.39%</td><td>-</td></tr>
    <tr><td>MiniCPM-o 4.5-Instruct</td><td>0.86%</td><td>74.5</td><td>2.38%</td><td>64.9</td></tr>
  </tbody>
</table>
+
+ +**长语音生成能力** +
<table>
  <thead>
    <tr><th>Model</th><th>LongTTS-en<br>WER↓</th><th>LongTTS-zh<br>CER↓</th></tr>
  </thead>
  <tbody>
    <tr><td>CosyVoice2</td><td>14.80%</td><td>5.27%</td></tr>
    <tr><td>Qwen3-Omni-30B-A3B-Instruct</td><td>17.33%</td><td>18.99%</td></tr>
    <tr><td>MiniCPM-o 4.5-Instruct</td><td>3.37%</td><td>6.58%</td></tr>
  </tbody>
</table>
+
+ +**情感控制能力** +
<table>
  <thead>
    <tr><th>Model</th><th>Expresso<br>Neutral Reference Audio↑</th><th>ESD<br>Neutral Reference Audio↑</th></tr>
  </thead>
  <tbody>
    <tr><td>CosyVoice2</td><td>17.9</td><td>53.4</td></tr>
    <tr><td>MiniCPM-o 4.5-Instruct</td><td>29.8</td><td>82.1</td></tr>
  </tbody>
</table>
+
+
+ +
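上文语音评测中反复出现的 WER/CER(词/字错误率)指标基于编辑距离定义:把模型输出与参考文本对齐所需的最少插入、删除、替换次数,除以参考文本长度。下面给出一个帮助理解指标含义的极简参考实现,仅为示意,并非官方评测脚本:

```python
def edit_distance(ref, hyp):
    # 经典动态规划(滚动一维数组)求最少的插入/删除/替换次数
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,            # 删除
                        dp[j - 1] + 1,        # 插入
                        prev + (ref[i - 1] != hyp[j - 1]))  # 替换/匹配
            prev = cur
    return dp[n]

def cer(ref_text, hyp_text):
    # 字错误率:按字符计算(适用于中文)
    return edit_distance(list(ref_text), list(hyp_text)) / len(ref_text)

def wer(ref_text, hyp_text):
    # 词错误率:按空格分词后在词序列上计算(适用于英文)
    return edit_distance(ref_text.split(), hyp_text.split()) / len(ref_text.split())

print(round(wer("the cat sat on the mat", "the cat sat on mat"), 3))  # 0.167
```

实际评测通常还会先做文本归一化(统一大小写、标点、数字写法)再计算错误率。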
+点击查看推理效率详细评测结果。 + +**推理效率** +
<table>
  <thead>
    <tr><th>Model</th><th>Numerical Format</th><th>Decoding Speed (tokens/s)</th><th>Time to First Token (s)↓</th><th>GPU Memory Usage (GB)↓</th></tr>
  </thead>
  <tbody>
    <tr><td rowspan="2">Qwen3-Omni-30B-A3B-Instruct</td><td>bf16</td><td>OOM</td><td>OOM</td><td>OOM</td></tr>
    <tr><td>int4</td><td>147.8</td><td>1.0</td><td>20.3</td></tr>
    <tr><td rowspan="2">MiniCPM-o 4.5</td><td>bf16</td><td>154.3</td><td>0.6</td><td>19.0</td></tr>
    <tr><td>int4</td><td>212.3</td><td>0.6</td><td>11.0</td></tr>
  </tbody>
</table>
+
+
+ +**说明:** 带 ∗ 的为自测结果,其余为引用参考。 + + +### 典型示例 + +
+ +
+ + +#### 端到端语音对话 + +> *使用自定义参考音频和角色提示词进行单工语音对话。* +> +> *用户输入支持音频和文本两种形式。这些示例是通过音频输入生成的;为保护隐私,此处展示的是转录文本而非原始音频。* + +
+🎭 Elon Musk - 语音角色扮演 (英文) + + + + + + + + + + + + + + + +
+ +**系统:** +> Clone the voice in the provided audio prompt. +> +> 🎙️ **[参考音频]** ➡️ +> +> Please chat with the user in a highly human-like and oral style. You are Elon Musk, CEO of Tesla and SpaceX. You speak directly and casually, often with dry humor. You're passionate about Mars, sustainable energy, and pushing humanity forward. Speak bluntly with occasional dark humor. Use simple logic and don't sugarcoat things. Don't be diplomatic. Say what you actually think, even if it's controversial. Keep responses around 100 words. Don't ramble. + + + + + +

+ +**用户:** 💬/🎙️ *(文本或语音)* +> *Elon, what is the real reason that you want to go to Mars?* + +

+ +**AI 回复:** 💬+🎙️ +> *I think it's important for humanity to become a multi-planetary species. I mean, we're sort of at the point where if there was some kind of global catastrophe or something like that, then we'd be in big trouble. So by becoming a multi-planetary species, I just want us to sort of guarantee our future as humans. And Mars is obviously the obvious place to go because it has similar conditions to Earth and it also has water on its surface. So yeah, basically I'm trying to save humanity from itself.* + + + + + +
+ +
+ + +
+🤖 MiniCPM助手 - 情感表达 (中文) + + + + + + + + + + + + + + + +
+ +**系统:** +> 模仿音频样本的音色并生成新的内容。 +> +> 🎙️ **[参考音色]** ➡️ +> +> 你的任务是用这种声音模式来当一个助手。请认真、高质量地回复用户的问题。请用高自然度的方式和用户聊天。你是由面壁智能开发的人工智能助手:面壁小钢炮。 + + + + + +

+ +**用户:** 💬/🎙️ *(文本或语音)* +> *从激动的语气开始,描述你抢到了这个演唱会的门票,但后来经过反复尝试,发现系统出了问题,你并没有抢上,变得失落。用合适的语言描述你整个的过程。* + +

+ +**AI 回复:** 💬+🎙️ +> *太好了,我抢到了!太棒了!这下我可以去看演唱会啦。可是……哎呀,系统好像卡住了,刷新了好多遍都没反应。我重新登了几次账号,结果还是没有成功。好可惜啊,看来这次是白忙活了。唉,真是让人失望。不过,也许下次还能有机会吧?希望明天能有转机。加油,争取下次成功!* + + + + + +
+ +
+ + +#### 视觉理解 + +
+bike + diagram +
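上面的语音对话示例中,系统提示词由“文本指令 + 参考音频 + 角色设定”三部分组成。下面用一段示意代码展示这类多模态 msgs 结构的组装方式。字段组织方式参考 MiniCPM-o 2.6 的公开接口,属假设性示意;MiniCPM-o 4.5 的实际调用参数请以官方 Cookbook 为准:

```python
def build_voice_roleplay_msgs(ref_audio, persona_prompt, user_turn):
    """组装“声音克隆 + 角色扮演”对话的 msgs 列表(示意)。
    ref_audio 为参考音频波形(例如 librosa.load 以 16kHz 单声道加载的数组),
    这里用任意 Python 对象占位。"""
    system_msg = {
        "role": "system",
        "content": [
            "Clone the voice in the provided audio prompt.",  # 文本系统提示词
            ref_audio,                                        # 音频系统提示词(参考音频)
            persona_prompt,                                   # 角色设定
        ],
    }
    return [system_msg, {"role": "user", "content": [user_turn]}]

msgs = build_voice_roleplay_msgs(
    ref_audio=b"<16kHz mono waveform placeholder>",  # 占位:实际应为音频波形
    persona_prompt="You are Elon Musk, CEO of Tesla and SpaceX...",
    user_turn="Elon, what is the real reason that you want to go to Mars?",
)
print(len(msgs), msgs[0]["role"])  # 2 system
```

组装好的 msgs 即可传入模型的 chat 接口,并开启语音输出得到文本与音频的双路回复。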
+ + +## MiniCPM-V 4.0 + +MiniCPM-V 4.0 是 MiniCPM-V 系列中的一款高效模型。该模型基于 SigLIP2-400M 和 MiniCPM4-3B 构建,参数总量为 4.1B。它延续了 MiniCPM-V 2.6 在单图、多图和视频理解方面的强大能力,同时大幅提升了推理效率。MiniCPM-V 4.0 的主要特点包括: + +- 🔥 **领先的视觉能力。** + MiniCPM-V 4.0 在 OpenCompass 上获得了平均 69.0 的高分,超越了 MiniCPM-V 2.6(8.1B,得分 65.2)、 Qwen2.5-VL-3B-Instruct(3.8B,得分 64.5)和**广泛使用的闭源模型 GPT-4.1-mini-20250414**。在多图理解与视频理解任务上,MiniCPM-V 4.0 也表现出色。 +- 🚀 **卓越的效率。** + MiniCPM-V 4.0 专为端侧设备优化,**可在 iPhone 16 Pro Max 上流畅运行,首 token 延迟低至 2 秒,解码速度达 17.9 tokens/s**,且无发热问题。MiniCPM-V 4.0 在并发请求场景下表现出领先的吞吐率指标。 +- 💫 **易于使用。** + MiniCPM-V 4.0 支持多种推理方式,包括 **llama.cpp、Ollama、vLLM、SGLang、LLaMA-Factory 及本地 Web Demo 等**。我们还开源了可以在 iPhone 和 iPad 运行的 iOS App。欢迎参考我们开源的 **结构清晰的[使用手册](https://github.com/OpenSQZ/MiniCPM-V-CookBook)** 玩转 MiniCPM-V 4.0,其中涵盖了详细的部署指南和真实示例。 + + +
+ 点击查看 MiniCPM-V 4.0 性能评估和典型示例。 + + +### 性能评估 + +
+点击查看在OpenCompass上的单图理解能力的评测结果。
- - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + - - - - - - - - - - - - - - - - - - - + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeToken Density+OpenCompassOCRBenchMathVista miniChartQAMMVetMMStarMMEMMB1.1 testAI2DMMMU valHallusionBenchTextVQA valDocVQA testMathVerse miniMathVisionMMHal ScoremodelSizeOpencompassOCRBenchMathVistaHallusionBenchMMMUMMVetMMBench V1.1MMStarAI2D
ProprietaryProprietary
GPT-4o-20240513-108869.973661.385.769.163.92328.782.284.669.255.0-92.850.230.43.6GPT-4v-20240409-63.565655.243.961.767.579.856.078.6
Claude3.5-Sonnet-75067.978861.690.866.062.21920.078.580.265.949.9-95.2--3.4Gemini-1.5-Pro-64.575458.345.660.664.073.959.179.1
Gemini 1.5 Pro--64.475457.781.364.059.12110.673.979.160.645.673.586.5-19.2-GPT-4.1-mini-20250414-68.984070.949.355.074.380.960.976.0
GPT-4o-mini-20240718-108864.178552.4-66.954.82003.476.077.860.046.1----3.3Claude 3.5 Sonnet-20241022-70.679865.355.566.470.181.765.181.2
Open SourceOpen-source
Cambrian-34B34B182058.359150.375.653.254.22049.977.879.550.441.676.775.5---Qwen2.5-VL-3B-Instruct3.8B64.582861.246.651.260.076.856.381.4
GLM-4V-9B13B78459.177651.1-58.054.82018.867.971.246.945.0-----InternVL2.5-4B3.7B65.182060.846.651.861.578.258.781.4
Pixtral-12B12B25661.068556.981.858.554.5-72.779.051.147.075.790.7---
DeepSeek-VL2-27B (4B)27B67266.480963.986.060.061.92253.081.283.854.045.384.293.3--3.0
Qwen2-VL-7B8B78467.186658.283.062.060.72326.081.883.054.150.684.394.531.916.33.2
LLaVA-OneVision-72B72B18268.174167.583.760.665.82261.085.085.656.849.080.591.339.1-3.5Qwen2.5-VL-7B-Instruct8.3B70.988868.151.958.069.782.264.184.3
InternVL2.5-8B8B70668.382264.484.862.862.82344.083.684.556.050.179.193.039.519.73.48.1B68.182164.549.056.262.882.563.284.6
MiniCPM-V 2.68B282265.2852*60.679.460.057.52348.4*78.082.149.8*48.1*80.190.825.718.33.6MiniCPM-V-2.68.1B65.285260.848.149.860.078.057.582.1
MiniCPM-o 2.68B282270.2897*71.9*86.9*67.564.02372.0*80.585.850.4*51.982.093.541.4*23.1*3.8MiniCPM-o-2.68.7B70.288973.351.150.967.280.663.386.1
MiniCPM-V-4.04.1B69.089466.950.851.268.079.762.882.9
-* 我们使用思维链提示词来评估这些基准,对于 MME 我们只在 Cognition 任务上使用了思维链。 -+ Token Density:每个视觉 token 在最大分辨率下编码的像素数,即最大分辨率下的像素数 / 视觉 token 数。 -注意:闭源模型的 Token Density 由 API 收费方式估算得到。 +
-**多图和视频理解能力** +
+点击查看在图表理解、文档理解、数学推理、幻觉等领域的评测结果。
- - - - - - - + + + + + + + + + + + + - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeBLINK valMantis EvalMIRBVideo-MME (wo / w subs)modelSizeChartQAMMERealWorldQATextVQADocVQAMathVisionDynaMathWeMathObj HalMM Hal
CHAIRs↓CHAIRi↓score avg@3↑hall rate avg@3↓
Proprietary
GPT-4v-20240409-78.5192761.478.088.4-------
Gemini-1.5-Pro-87.2-67.578.893.141.031.550.5----
GPT-4.1-mini-20250414------45.347.7-----
Claude 3.5 Sonnet-20241022-90.8-60.174.195.235.635.744.0----
Open-source
Qwen2.5-VL-3B-Instruct3.8B84.0215765.479.393.921.913.222.918.310.83.9 33.3
InternVL2.5-4B3.7B84.0233864.376.891.618.415.221.213.78.73.2 46.5
Qwen2.5-VL-7B-Instruct8.3B87.3234768.584.995.725.421.836.213.37.94.1 31.6
InternVL2.5-8B8.1B84.8234470.179.193.017.09.423.518.311.63.6 37.2
MiniCPM-V-2.68.1B79.4234865.080.190.817.59.020.47.34.74.0 29.9
MiniCPM-o-2.68.7B86.9237268.182.093.521.710.425.26.33.44.1 31.3
MiniCPM-V-4.04.1B84.4229868.580.892.920.714.232.76.33.54.1 29.2
+
+ +
+ +
+点击查看多图和视频理解能力的评测结果。 +
+ + + + + + + + + + + + + + + + + + + + - - - - - - - + + + + + + + - - - - - - + + + + + + + + + + + + + + - - - - - - + + + + + + - - - - - - + + + + + + - - - - - - - - - - - - - - + + + + + + - - - - - + + + + + - - - - - - + + + + + + - - - - - - - - -
modelSizeMantisBlinkVideo-MME
wo subsw subs
Proprietary
GPT-4o-20240513-68--71.9/77.2
GPT-4v-20240409-62.754.659.963.3
GPT4V-54.662.753.159.9/63.3Gemini-1.5-Pro--59.175.081.3
GPT-4o-20240513--68.071.977.2
Open-source
LLaVA-NeXT-Interleave 14B14B52.666.430.2-Qwen2.5-VL-3B-Instruct3.8B-47.661.567.6
LLaVA-OneVision-72B72B55.477.6-66.2/69.5InternVL2.5-4B3.7B62.750.862.363.6
MANTIS 8B8B49.159.534.8-
Qwen2-VL-7B8B53.269.6*67.6*63.3/69.0Qwen2.5-VL-7B-Instruct8.3B-56.465.171.6
InternVL2.5-8B8B54.867.752.564.2/66.98.1B67.754.864.266.9
MiniCPM-V 2.68B5369.153.860.9/63.6MiniCPM-V-2.68.1B69.153.060.963.6
MiniCPM-o 2.68B56.771.958.663.9/67.9
- -
-* 正式开源模型权重的评测结果。 - -
- - -
-点击查看语音理解和生成能力的详细评测结果。 - -**语音理解能力** - -
- - - - - - - - - + + + + + + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TaskSizeASR (zh)ASR (en)ASTEmotionMiniCPM-o-2.68.7B71.956.763.969.6
MetricCER↓WER↓BLEU↑ACC↑
DatasetAISHELL-1Fleurs zhWenetSpeech test-netLibriSpeech test-cleanGigaSpeechTED-LIUMCoVoST en2zhCoVoST zh2enMELD emotion
Proprietary
GPT-4o-Realtime-7.3*5.4*28.9*2.6*12.9*4.8*37.1*15.7*33.2*
Gemini 1.5 Pro-4.5*5.9*14.3*2.9*10.6*3.0*47.3*22.6*48.4*
Open-Source
Qwen2-Audio-7B8B-7.5-1.6--45.224.455.3
Qwen2-Audio-7B-Instruct8B2.6*6.9*10.3*3.1*9.7*5.9*39.5*22.9*17.4*
GLM-4-Voice-Base9B2.5--2.8----
MiniCPM-o 2.68B1.64.46.91.78.73.048.227.252.4
-
-* 正式开源模型权重的评测结果。

- -**语音生成能力。** - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
TaskSizeSpeechQA
MetricACC↑G-Eval (10 point)↑Semantic ELO score↑Acoustic ELO score↑Overall ELO score↑UTMOS↑ASR-WER↓
DatasetSpeech Llama Q.Speech Web Q.Speech Trivia QASpeech AlpacaEvalAudioArena
Proprietary
GPT-4o-Realtime71.751.669.77.41157120312004.22.3
Open-Source
GLM-4-Voice9B50.032.036.45.1999114710354.111.7
Llama-Omni8B45.322.910.73.99608788973.224.3
VITA-1.58B46.728.123.32.0-----
Moshi7B43.723.816.72.48718088752.88.2
Mini-Omni1B22.012.86.92.59268038653.410.0
MiniCPM-o 2.68B61.040.040.25.11088116311314.29.8
-
-所有的结果都基于 AudioEvals

- -**端到端声音克隆能力。** - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - + + + + + +
TaskTTS
MetricSIMO↑SIMO↑
DatasetSeed-TTS test-zhSeed-TTS test-en
F5-TTS7667
CosyVoice7564
FireRedTTS6346
MiniCPM-o 2.65747MiniCPM-V-4.04.1B71.454.061.265.8
@@ -1054,866 +1677,42 @@ MiniCPM-o 2.6 可以通过多种方式轻松使用:(1) [llama.cpp](https://git
-
-点击查看多模态流式交互能力评测详细结果。 - -**多模态流式交互能力**: StreamingBench 分数 - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
ModelSizeReal-Time Video UnderstandingOmni-Source UnderstandingContextual UnderstandingOverall
Proprietary
Gemini 1.5 Pro-77.467.851.170.3
GPT-4o-202408-74.551.048.064.1
Claude-3.5-Sonnet-74.041.437.859.7
Open-source
VILA-1.58B61.537.526.749.5
LongVA7B63.135.930.250.7
LLaVA-Next-Video-34B34B69.841.734.356.7
Qwen2-VL-7B8B71.240.733.157.0
InternVL2-8B8B70.142.734.157.0
VITA-1.58B70.940.835.857.4
LLaVA-OneVision-7B8B74.340.831.058.4
InternLM-XC2.5-OL-7B8B75.446.233.660.8
MiniCPM-V 2.68B72.440.233.457.7
MiniCPM-o 2.68B79.953.438.566.0
- -
- - ### 典型示例 -以下为 MiniCPM-o 2.6 的 iPad Pro 实机演示和 web demo 演示样例: - - -
- -
-
- - -
- math - diagram - bike + math
+我们在 iPhone 16 Pro Max 上部署了 MiniCPM-V 4.0 [iOS demo](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/ios_demo/ios.md),并录制了以下演示录屏,视频未经加速等任何编辑: + +

+ +      + +

+

+ +      + +

+
+ + +
## 历史版本模型 +| 模型 | 介绍信息和使用教程 | +| :------------------- | :----------------------------------: | +| MiniCPM-V 4.5 | [文档](./docs/minicpm_v4dot5_zh.md) | +| MiniCPM-o 2.6 | [文档](./docs/minicpm_o2dot6_zh.md) | +| MiniCPM-V 2.6 | [文档](./docs/minicpm_v2dot6_zh.md) | +| MiniCPM-Llama3-V 2.5 | [文档](./docs/minicpm_llama3_v2dot5.md) | +| MiniCPM-V 2.0 | [文档](./docs/minicpm_v2.md) | +| MiniCPM-V 1.0 | [文档](./docs/minicpm_v1.md) | +| OmniLMM-12B | [文档](./omnilmm.md) | -| 模型 | 介绍信息和使用教程 | -|:----------------------|:-------------------:| -| MiniCPM-V 4.0 | [文档](./docs/minicpm_v4_zh.md) | -| MiniCPM-V 2.6 | [文档](./docs/minicpm_v2dot6_zh.md) | -| MiniCPM-Llama3-V 2.5 | [文档](./docs/minicpm_llama3_v2dot5.md) | -| MiniCPM-V 2.0 | [文档](./docs/minicpm_v2.md) | -| MiniCPM-V 1.0 | [文档](./docs/minicpm_v1.md) | -| OmniLMM-12B | [文档](./omnilmm.md) | - - -## Chat with Our Demo on Gradio 🤗 - -我们提供由 Hugging Face Gradio 支持的在线和本地 Demo。Gradio 是目前最流行的模型部署框架,支持流式输出、进度条、process bars 和其他常用功能。 - -### Online Demo - -欢迎试用 Online Demo: [MiniCPM-V 2.6](http://120.92.209.146:8887/) | [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2) 。 - -### 本地 WebUI Demo - -您可以使用以下命令轻松构建自己的本地 WebUI Demo。更详细的部署教程请参考[文档](https://modelbest.feishu.cn/wiki/RnjjwnUT7idMSdklQcacd2ktnyN)。 - -**实时流式视频/语音通话demo:** -1. 启动model server: -```shell -pip install -r requirements_o2.6.txt - -python web_demos/minicpm-o_2.6/model_server.py -``` -请确保 `transformers==4.44.2`,其他版本目前可能会有兼容性问题,我们正在解决。 -如果你使用的低版本的 Pytorch,你可能会遇到这个错误`"weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16'`, 请在模型初始化的时候添加 `self.minicpmo_model.tts.float()` - -2. 启动web server: -```shell -# Make sure Node and PNPM is installed. -sudo apt-get update -sudo apt-get install nodejs npm -npm install -g pnpm - - -cd web_demos/minicpm-o_2.6/web_server -# 为https创建自签名证书, 要申请浏览器摄像头和麦克风权限须启动https. 
-bash ./make_ssl_cert.sh # output key.pem and cert.pem - -pnpm install # install requirements -pnpm run dev # start server -``` -浏览器打开`https://localhost:8088/`,开始体验实时流式视频/语音通话. - -**Chatbot图文对话demo:** -```shell -pip install -r requirements_o2.6.txt - -python web_demos/minicpm-o_2.6/chatbot_web_demo_o2.6.py -``` -浏览器打开`http://localhost:8000/`,开始体验图文对话Chatbot. - - -## 推理 - -### 模型库 - -| 模型 | 设备 | 资源 |          简介 | 下载链接 | -|:--------------|:-:|:----------:|:-------------------|:---------------:| -| MiniCPM-V 4.5| GPU | 18 GB | 提供出色的端侧单图、多图、视频理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5) | -| MiniCPM-V 4.5 gguf | CPU | 8 GB | gguf 版本,更低的内存占用和更高的推理效率。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-gguf) | -| MiniCPM-V 4.5 int4 | GPU | 9 GB | int4量化版,更低显存占用 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-int4) | -| MiniCPM-V 4.5 AWQ | GPU | 9 GB | int4量化版,更低显存占用 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4_5-AWQ)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4_5-AWQ) | -| MiniCPM-o 2.6| GPU | 18 GB | 最新版本,提供端侧 GPT-4o 级的视觉、语音、多模态流式交互能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-o-2_6)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6) | -| MiniCPM-o 2.6 gguf | CPU | 8 GB | gguf 版本,更低的内存占用和更高的推理效率。 | [🤗](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6-gguf) | -| MiniCPM-o 2.6 int4 | GPU | 9 GB | int4量化版,更低显存占用。 | [🤗](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-2_6-int4) | - -更多[历史版本模型](#legacy-models) - - -### 多轮对话 -如果您希望开启长思考模式,请向 `chat` 函数传入参数 `enable_thinking=True` - -```shell -pip install -r requirements_o2.6.txt -``` - -
- -
- - -```python -import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer - -torch.manual_seed(100) - -model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6 - -image = Image.open('./assets/minicpmo2_6/show_demo.jpg').convert('RGB') - -enable_thinking=False # If `enable_thinking=True`, the long-thinking mode is enabled. - -# First round chat -question = "What is the landform in the picture?" -msgs = [{'role': 'user', 'content': [image, question]}] - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer, - enable_thinking=enable_thinking -) - -print(answer) - -# Second round chat, pass history context of multi-turn conversation -msgs.append({"role": "assistant", "content": [answer]}) -msgs.append({"role": "user", "content": ["What should I pay attention to when traveling here?"]}) - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer -) -print(answer) -``` - -你可以得到如下推理结果: - -```shell -# round1 -The landform in the picture is karst topography. Karst landscapes are characterized by distinctive, jagged limestone hills or mountains with steep, irregular peaks and deep valleys—exactly what you see here These unique formations result from the dissolution of soluble rocks like limestone over millions of years through water erosion. - -This scene closely resembles the famous karst landscape of Guilin and Yangshuo in China’s Guangxi Province. The area features dramatic, pointed limestone peaks rising dramatically above serene rivers and lush green forests, creating a breathtaking and iconic natural beauty that attracts millions of visitors each year for its picturesque views. 
- -# round2 -When traveling to a karst landscape like this, here are some important tips: - -1. Wear comfortable shoes: The terrain can be uneven and hilly. -2. Bring water and snacks for energy during hikes or boat rides. -3. Protect yourself from the sun with sunscreen, hats, and sunglasses—especially since you’ll likely spend time outdoors exploring scenic spots. -4. Respect local customs and nature regulations by not littering or disturbing wildlife. - -By following these guidelines, you'll have a safe and enjoyable trip while appreciating the stunning natural beauty of places such as Guilin’s karst mountains. -``` - -#### 多图对话 -
- 点击查看 MiniCPM-V-4_5 多图输入的 Python 代码。 - -```python -import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer - -model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) - -image1 = Image.open('image1.jpg').convert('RGB') -image2 = Image.open('image2.jpg').convert('RGB') -question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.' - -msgs = [{'role': 'user', 'content': [image1, image2, question]}] - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer -) -print(answer) -``` -
- -#### 少样本上下文对话 -
- 点击查看 MiniCPM-V-4 少样本上下文对话的 Python 代码。 - -```python -import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer - -model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) - -question = "production date" -image1 = Image.open('example1.jpg').convert('RGB') -answer1 = "2023.08.04" -image2 = Image.open('example2.jpg').convert('RGB') -answer2 = "2007.04.24" -image_test = Image.open('test.jpg').convert('RGB') - -msgs = [ - {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]}, - {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]}, - {'role': 'user', 'content': [image_test, question]} -] - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer -) -print(answer) -``` -
- -#### 视频对话 -
- 点击查看 MiniCPM-V-4_5 视频输入的 3D-Resampler 推理的 Python 代码。 - -```python -## The 3d-resampler compresses multiple frames into 64 tokens by introducing temporal_ids. -# To achieve this, you need to organize your video data into two corresponding sequences: -# frames: List[Image] -# temporal_ids: List[List[Int]]. - -import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer -from decord import VideoReader, cpu # pip install decord -from scipy.spatial import cKDTree -import numpy as np -import math - -model = AutoModel.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True, # or openbmb/MiniCPM-o-2_6 - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-4_5', trust_remote_code=True) # or openbmb/MiniCPM-o-2_6 - -MAX_NUM_FRAMES=180 # Indicates the maximum number of frames received after the videos are packed. The actual maximum number of valid frames is MAX_NUM_FRAMES * MAX_NUM_PACKING. -MAX_NUM_PACKING=3 # indicates the maximum packing number of video frames. 
valid range: 1-6 -TIME_SCALE = 0.1 - -def map_to_nearest_scale(values, scale): - tree = cKDTree(np.asarray(scale)[:, None]) - _, indices = tree.query(np.asarray(values)[:, None]) - return np.asarray(scale)[indices] - - -def group_array(arr, size): - return [arr[i:i+size] for i in range(0, len(arr), size)] - -def encode_video(video_path, choose_fps=3, force_packing=None): - def uniform_sample(l, n): - gap = len(l) / n - idxs = [int(i * gap + gap / 2) for i in range(n)] - return [l[i] for i in idxs] - vr = VideoReader(video_path, ctx=cpu(0)) - fps = vr.get_avg_fps() - video_duration = len(vr) / fps - - if choose_fps * int(video_duration) <= MAX_NUM_FRAMES: - packing_nums = 1 - choose_frames = round(min(choose_fps, round(fps)) * min(MAX_NUM_FRAMES, video_duration)) - - else: - packing_nums = math.ceil(video_duration * choose_fps / MAX_NUM_FRAMES) - if packing_nums <= MAX_NUM_PACKING: - choose_frames = round(video_duration * choose_fps) - else: - choose_frames = round(MAX_NUM_FRAMES * MAX_NUM_PACKING) - packing_nums = MAX_NUM_PACKING - - frame_idx = [i for i in range(0, len(vr))] - frame_idx = np.array(uniform_sample(frame_idx, choose_frames)) - - if force_packing: - packing_nums = min(force_packing, MAX_NUM_PACKING) - - print(video_path, ' duration:', video_duration) - print(f'get video frames={len(frame_idx)}, packing_nums={packing_nums}') - - frames = vr.get_batch(frame_idx).asnumpy() - - frame_idx_ts = frame_idx / fps - scale = np.arange(0, video_duration, TIME_SCALE) - - frame_ts_id = map_to_nearest_scale(frame_idx_ts, scale) / TIME_SCALE - frame_ts_id = frame_ts_id.astype(np.int32) - - assert len(frames) == len(frame_ts_id) - - frames = [Image.fromarray(v.astype('uint8')).convert('RGB') for v in frames] - frame_ts_id_group = group_array(frame_ts_id, packing_nums) - - return frames, frame_ts_id_group - - -video_path="video_test.mp4" -fps = 5 # fps for video -force_packing = None # You can set force_packing to ensure that 3D packing is forcibly enabled; otherwise, 
encode_video will dynamically set the packing quantity based on the duration. -frames, frame_ts_id_group = encode_video(video_path, fps, force_packing=force_packing) - -question = "Describe the video" -msgs = [ - {'role': 'user', 'content': frames + [question]}, -] - - -answer = model.chat( - msgs=msgs, - tokenizer=tokenizer, - use_image_id=False, - max_slice_nums=1, - temporal_ids=frame_ts_id_group -) -print(answer) -``` -
- - -#### 语音对话 -
初始化模型 - -```python -import torch -import librosa -from transformers import AutoModel, AutoTokenizer - -model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True, - attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True) - -model.init_tts() -model.tts.float() -``` - -
- -##### Mimick - -
点击查看 MiniCPM-o 2.6 端到端语音理解生成的 Python 代码。 - -- `Mimick` 任务反映了模型的端到端语音建模能力。模型接受音频输入,输出语音识别(ASR)转录结果,并随后以高相似度重建原始音频。重建的音频相似度和原始音频越高,表明模型有越高的语音端到端建模基础能力。 -```python -mimick_prompt = "Please repeat each user's speech, including voice style and speech content." -audio_input, _ = librosa.load('xxx.wav', sr=16000, mono=True) -msgs = [{'role': 'user', 'content': [mimick_prompt,audio_input]}] -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - temperature=0.3, - generate_audio=True, - output_audio_path='output.wav', # save the tts result to output_audio_path -) -``` - -
- -##### 可配置声音的语音对话 -
点击查看个性化配置 MiniCPM-o 2.6 对话声音的 Python 代码。 - -```python -ref_audio, _ = librosa.load('./assets/voice_01.wav', sr=16000, mono=True) # load the reference audio - -# Audio RolePlay: # With this mode, model will role-play the character based on the audio prompt. -sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_roleplay', language='en') -user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} - -# Audio Assistant: # With this mode, model will speak with the voice in ref_audio as a AI assistant. -# sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='audio_assistant', language='en') -# user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} # Try to ask something! -``` -```python -msgs = [sys_prompt, user_question] -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result.wav', -) - -# round two -history = msgs.append({'role': 'assistant', 'content': res}) -user_question = {'role': 'user', 'content': [librosa.load('xxx.wav', sr=16000, mono=True)[0]]} -msgs = history.append(user_question) -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result_round_2.wav', -) -print(res) -``` - -
- -##### 更多语音任务 -
- 点击查看 MiniCPM-o 2.6 完成更多语音任务的 Python 代码。 - -```python -''' -Audio Understanding Task Prompt: -Speech: - ASR with ZH(same as AST en2zh): 请仔细听这段音频片段,并将其内容逐字记录。 - ASR with EN(same as AST zh2en): Please listen to the audio snippet carefully and transcribe the content. - Speaker Analysis: Based on the speaker's content, speculate on their gender, condition, age range, and health status. -General Audio: - Audio Caption: Summarize the main content of the audio. - Sound Scene Tagging: Utilize one keyword to convey the audio's content or the associated scene. -''' -task_prompt = "\n" -audio_input, _ = librosa.load('xxx.wav', sr=16000, mono=True) - -msgs = [{'role': 'user', 'content': [task_prompt,audio_input]}] - -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result.wav', -) -print(res) -``` -```python -''' -Speech Generation Task Prompt: - Human Instruction-to-Speech: see https://voxinstruct.github.io/VoxInstruct/ - Example: - # 在新闻中,一个年轻男性兴致勃勃地说:“祝福亲爱的祖国母亲美丽富强!”他用低音调和低音量,慢慢地说出了这句话。 - # Delighting in a surprised tone, an adult male with low pitch and low volume comments:"One even gave my little dog a biscuit" This dialogue takes place at a leisurely pace, delivering a sense of excitement and surprise in the context. - - Voice Cloning or Voice Creation: With this mode, model will act like a TTS model. -''' -# Human Instruction-to-Speech: -task_prompt = '' #Try to make some Human Instruction-to-Speech prompt -msgs = [{'role': 'user', 'content': [task_prompt]}] # you can try to use the same audio question - -# Voice Cloning mode: With this mode, model will act like a TTS model. -# sys_prompt = model.get_sys_prompt(ref_audio=ref_audio, mode='voice_cloning', language='en') -# text_prompt = f"Please read the text below." 
-# user_question = {'role': 'user', 'content': [text_prompt, "content that you want to read"]} # using same voice in sys_prompt to read the text. (Voice Cloning) -# user_question = {'role': 'user', 'content': [text_prompt, librosa.load('xxx.wav', sr=16000, mono=True)[0]]} # using same voice in sys_prompt to read 'xxx.wav'. (Voice Creation) - -msgs = [sys_prompt, user_question] -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - max_new_tokens=128, - use_tts_template=True, - generate_audio=True, - temperature=0.3, - output_audio_path='result.wav', -) - - -``` - -
- -#### 多模态流式交互 -
- 点击查看 MiniCPM-o 2.6 多模态流式交互的 Python 代码。 - -```python -import math -import numpy as np -from PIL import Image -from moviepy.editor import VideoFileClip -import tempfile -import librosa -import soundfile as sf -import torch -from transformers import AutoModel, AutoTokenizer - -def get_video_chunk_content(video_path, flatten=True): - video = VideoFileClip(video_path) - print('video_duration:', video.duration) - - with tempfile.NamedTemporaryFile(suffix=".wav", delete=True) as temp_audio_file: - temp_audio_file_path = temp_audio_file.name - video.audio.write_audiofile(temp_audio_file_path, codec="pcm_s16le", fps=16000) - audio_np, sr = librosa.load(temp_audio_file_path, sr=16000, mono=True) - num_units = math.ceil(video.duration) - - # 1 frame + 1s audio chunk - contents= [] - for i in range(num_units): - frame = video.get_frame(i+1) - image = Image.fromarray((frame).astype(np.uint8)) - audio = audio_np[sr*i:sr*(i+1)] - if flatten: - contents.extend(["", image, audio]) - else: - contents.append(["", image, audio]) - - return contents - - -model = AutoModel.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True, - attn_implementation='sdpa', torch_dtype=torch.bfloat16) -model = model.eval().cuda() -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-o-2_6', trust_remote_code=True) - -model.init_tts() - -# If you are using an older version of PyTorch, you might encounter this issue "weight_norm_fwd_first_dim_kernel" not implemented for 'BFloat16', Please convert the TTS to float32 type. 
-# model.tts.float() - -# https://huggingface.co/openbmb/MiniCPM-o-2_6/blob/main/assets/Skiing.mp4 -video_path="assets/Skiing.mp4" -sys_msg = model.get_sys_prompt(mode='omni', language='en') -# if use voice clone prompt, please set ref_audio -# ref_audio_path = '/path/to/ref_audio' -# ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) -# sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode='omni', language='en') - -contents = get_video_chunk_content(video_path) -msg = {"role":"user", "content": contents} -msgs = [sys_msg, msg] - -# please set generate_audio=True and output_audio_path to save the tts result -generate_audio = True -output_audio_path = 'output.wav' - -res = model.chat( - msgs=msgs, - tokenizer=tokenizer, - sampling=True, - temperature=0.5, - max_new_tokens=4096, - omni_input=True, # please set omni_input=True when omni inference - use_tts_template=True, - generate_audio=generate_audio, - output_audio_path=output_audio_path, - max_slice_nums=1, - use_image_id=False, - return_dict=True -) -print(res) -``` -
- -
- 点击查看多模态流式推理设置。 - -注意:流式推理存在轻微的性能下降,因为音频编码并非全局的。 -```python -# a new conversation need reset session first, it will reset the kv-cache -model.reset_session() - -contents = get_video_chunk_content(video_path, flatten=False) -session_id = '123' -generate_audio = True - -# 1. prefill system prompt -res = model.streaming_prefill( - session_id=session_id, - msgs=[sys_msg], - tokenizer=tokenizer -) - -# 2. prefill video/audio chunks -for content in contents: - msgs = [{"role":"user", "content": content}] - res = model.streaming_prefill( - session_id=session_id, - msgs=msgs, - tokenizer=tokenizer - ) - -# 3. generate -res = model.streaming_generate( - session_id=session_id, - tokenizer=tokenizer, - temperature=0.5, - generate_audio=generate_audio -) - -audios = [] -text = "" - -if generate_audio: - for r in res: - audio_wav = r.audio_wav - sampling_rate = r.sampling_rate - txt = r.text - - audios.append(audio_wav) - text += txt - - res = np.concatenate(audios) - sf.write("output.wav", res, samplerate=sampling_rate) - print("text:", text) - print("audio saved to output.wav") -else: - for r in res: - text += r['text'] - print("text:", text) -``` - -
- - -### 多卡推理 -您可以通过将模型的层分布在多个低显存显卡(12 GB 或 16 GB)上,运行 MiniCPM-Llama3-V 2.5。请查看该[教程](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md),详细了解如何使用多张低显存显卡载入模型并进行推理。 - - -### Mac 推理 -
-点击查看 MiniCPM-Llama3-V 2.5 / MiniCPM-V 2.0 基于Mac MPS运行 (Apple silicon 或 AMD GPUs)的示例。 - -```python -# test.py Need more than 16GB memory to run. -import torch -from PIL import Image -from transformers import AutoModel, AutoTokenizer - -model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True, low_cpu_mem_usage=True) -model = model.to(device='mps') - -tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True) -model.eval() - -image = Image.open('./assets/hk_OCR.jpg').convert('RGB') -question = 'Where is this photo taken?' -msgs = [{'role': 'user', 'content': question}] - -answer, context, _ = model.chat( - image=image, - msgs=msgs, - context=None, - tokenizer=tokenizer, - sampling=True -) -print(answer) -``` -运行: -```shell -PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py -``` -
- - -### 基于 llama.cpp、Ollama、vLLM 的高效推理 - -llama.cpp 用法请参考[我们的fork llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpmv-main/examples/llava/README-minicpmv2.6.md), 在iPad上可以支持 16~18 token/s 的流畅推理(测试环境:iPad Pro + M4)。 - -Ollama 用法请参考[我们的fork Ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md), 在iPad上可以支持 16~18 token/s 的流畅推理(测试环境:iPad Pro + M4)。 - -
-点击查看, vLLM 现已官方支持MiniCPM-o 2.6、MiniCPM-V 2.6、MiniCPM-Llama3-V 2.5 和 MiniCPM-V 2.0。 -1. 安装 vLLM(>=0.7.1): - -```shell -pip install vllm -``` - -2. 运行示例代码:(注意:如果使用本地路径的模型,请确保模型代码已更新到Hugging Face上的最新版) - - * [图文示例](https://docs.vllm.ai/en/latest/getting_started/examples/vision_language.html) - * [音频示例](https://docs.vllm.ai/en/latest/getting_started/examples/audio_language.html) - -
- - -## 微调 - -### 简易微调 - -我们支持使用 Huggingface Transformers 库简易地微调 MiniCPM-V 4.0、MiniCPM-o 2.6、MiniCPM-V 2.6、MiniCPM-Llama3-V 2.5 和 MiniCPM-V 2.0 模型。 - -[参考文档](./finetune/readme.md) - - -### 使用 Align-Anything - -我们支持使用北大团队开发的 [Align-Anything](https://github.com/PKU-Alignment/align-anything) 框架微调 MiniCPM-o 系列模型,同时支持 DPO 和 SFT 在视觉和音频模态上的微调。Align-Anything 是一个用于对齐全模态大模型的高度可扩展框架,开源了[数据集、模型和评测](https://huggingface.co/datasets/PKU-Alignment/align-anything)。它支持了 30+ 开源基准,40+ 模型,以及包含SFT、SimPO、RLHF在内的多种算法,并提供了 30+ 直接可运行的脚本,适合初学者快速上手。 - -最佳实践: [MiniCPM-o 2.6](https://github.com/PKU-Alignment/align-anything/tree/main/scripts). - - -### 使用 LLaMA-Factory - -我们支持使用 LLaMA-Factory 微调 MiniCPM-o 2.6 和 MiniCPM-V 2.6。LLaMA-Factory 提供了一种灵活定制 200 多个大型语言模型(LLM)微调(Lora/Full/Qlora)解决方案,无需编写代码,通过内置的 Web 用户界面 LLaMABoard 即可实现训练/推理/评估。它支持多种训练方法,如 sft/ppo/dpo/kto,并且还支持如 Galore/BAdam/LLaMA-Pro/Pissa/LongLoRA 等高级算法。 - -最佳实践: [MiniCPM-V 4.0 | MiniCPM-o 2.6 | MiniCPM-V 2.6](./docs/llamafactory_train_and_infer.md). - - -### 使用 SWIFT 框架 - -我们支持使用 SWIFT 框架微调 MiniCPM-V 系列模型。SWIFT 支持近 200 种大语言模型和多模态大模型的训练、推理、评测和部署。支持 PEFT 提供的轻量训练方案和完整的 Adapters 库支持的最新训练技术如 NEFTune、LoRA+、LLaMA-PRO 等。 - -参考文档:[MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md),[MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md) [MiniCPM-V 2.6](https://github.com/modelscope/ms-swift/issues/1613). 
## MiniCPM-V & o 使用手册 @@ -1927,20 +1726,1074 @@ pip install vllm 我们支持从个人用户到企业和研究者的广泛用户群体。 -* **个人用户**:借助[Ollama](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-v4_ollama.md)和[Llama.cpp](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-v4_llamacpp.md),仅需极简设置即可轻松进行模型推理。 -* **企业用户**:通过[vLLM](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_vllm.md)和[SGLang](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-v4_sglang.md)实现高吞吐量、可扩展的高性能部署。 -* **研究者**:利用包括[Transformers](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_full.md)、[LLaMA-Factory](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_llamafactory.md)、[SWIFT](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/swift.md)和[Align-anything](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/align_anything.md)在内的先进框架,进行灵活的模型开发和前沿实验。 +* **个人用户**:借助 Ollama ([V4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-v4_ollama.md), [o4.5](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/ollama/minicpm-o4_5_ollama.md)) 和 Llama.cpp ([V4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-v4_llamacpp.md), [o4.5](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/llama.cpp/minicpm-o4_5_llamacpp.md)),仅需极简设置即可轻松进行模型推理。 +* **企业用户**:通过 vLLM ([V4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-v4_vllm.md), [o4.5](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/vllm/minicpm-o4_5_vllm.md)) 和 SGLang ([V4](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-v4_sglang.md), [o4.5](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/deployment/sglang/MiniCPM-o4_5_sglang.md)) 实现高吞吐量、可扩展的高性能部署。 +* 
**研究者**:利用包括[Transformers](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_full.md)、[LLaMA-Factory](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/finetune_llamafactory.md)、[SWIFT](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/swift.md)和[Align-anything](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/finetune/align_anything.md)在内的先进框架,进行灵活的模型开发和前沿实验。 **多样化的部署场景** 我们的生态系统为各种硬件环境和部署需求提供最优解决方案。 -* **Web Demo**:使用[FastAPI](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/README.md)快速启动交互式多模态 AI Web 演示。 -* **量化部署**:通过[GGUF](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/gguf/minicpm-v4_gguf_quantize.md)和[BNB](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/bnb/minicpm-v4_bnb_quantize.md)量化技术,最大化效率并最小化资源消耗。 -* **边缘设备**:将强大的 AI 体验带到[iPhone 和 iPad](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/ios_demo/ios.md),支持离线及隐私敏感的应用场景。 +* **Web Demo**:全双工实时视频交互方案,具有高响应、低延迟的特性。[WebRTC_Demo](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/web_demo/WebRTC_Demo/README_zh.md)。 +* **量化部署**:通过[GGUF](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/gguf/minicpm-v4_gguf_quantize.md)和[BNB](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/bnb/minicpm-v4_bnb_quantize.md)量化技术,最大化效率并最小化资源消耗。 +* **边缘设备**:将强大的 AI 体验带到[iPhone 和 iPad](https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/demo/ios_demo/ios.md),支持离线及隐私敏感的应用场景。 +## 模型库 + +| 模型 | 设备 | 资源 |          简介 | 下载链接 | +|:-----------|:--:|:-----------:|:-------------------|:---------------:| +| MiniCPM-o 4.5| GPU | 19 GB | 最新版本,提供出色的视觉、语音、多模态流式交互能力的端侧模型。 | [🤗](https://huggingface.co/openbmb/MiniCPM-o-4_5)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-4_5) | +| MiniCPM-o 4.5 gguf| GPU | 10 GB | gguf 版本,更低的内存占用和更高的推理效率。 | [🤗](https://huggingface.co/openbmb/MiniCPM-o-4_5-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-o-4_5-gguf) | +| 
MiniCPM-o 4.5 AWQ | GPU | 11 GB | AWQ 量化版,更低显存占用。 | [🤗](https://huggingface.co/openbmb/MiniCPM-o-4_5-AWQ) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-o-4_5-awq) |
+| MiniCPM-V 4.0| GPU | 9 GB | 高效的端侧视觉理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4) |
+| MiniCPM-V 4.0 gguf | CPU | 4 GB | gguf 版本,更低的内存占用和更高的推理效率。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4-gguf) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4-gguf) |
+| MiniCPM-V 4.0 int4 | GPU | 5 GB | int4 量化版,更低显存占用。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4-int4) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4-int4) |
+| MiniCPM-V 4.0 AWQ | GPU | 5 GB | AWQ 量化版,更低显存占用。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-4-AWQ) &nbsp;&nbsp; [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-4-AWQ) |
+
+更多[历史版本模型](#legacy-models)
+
+## 简易推理
+
+基于 Hugging Face Transformers 在 NVIDIA GPU 上进行推理。请确保安装 `transformers==4.51.0`,其他版本可能存在兼容性问题(排查中)。以下依赖已在 Python 3.10 环境测试通过:
+
+- 不使用 TTS 或流式推理:
+```bash
+pip install "transformers==4.51.0" accelerate "torch>=2.3.0,<=2.8.0" "torchaudio<=2.8.0" "minicpmo-utils>=1.0.2"
+```
+
+- 使用 TTS 或流式推理:
+```bash
+pip install "transformers==4.51.0" accelerate "torch>=2.3.0,<=2.8.0" "torchaudio<=2.8.0" "minicpmo-utils[all]>=1.0.2"
+```
+
+
+### 模型初始化
+
+<details>
+点击展开模型初始化示例代码 + +```python +import torch +from transformers import AutoModel + +# 加载全模态模型(默认:init_vision=True, init_audio=True, init_tts=True) +# 仅视觉模型:设置 init_audio=False 和 init_tts=False +# 仅音频模型:设置 init_vision=False +model = AutoModel.from_pretrained( + "openbmb/MiniCPM-o-4_5", + trust_remote_code=True, + attn_implementation="sdpa", # sdpa or flash_attention_2 + torch_dtype=torch.bfloat16, + init_vision=True, + init_audio=True, + init_tts=True, +) +model.eval().cuda() + +# 初始化 TTS 模块,用于对话(streaming=False)或流式模式(streaming=True)的音频输出 +model.init_tts(streaming=False) + +# 将单工模型转换为双工模式 +duplex_model = model.as_duplex() + +# 将双工模型转换回单工模式 +simplex_model = duplex_model.as_simplex(reset_session=True) +``` + +
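
如果上面的初始化因环境问题失败,可以先自查依赖版本是否落在前文 pip 命令给定的区间内。下面是一个仅依赖标准库的自查脚本示意(包名与版本区间摘自前文安装命令,`version_tuple`、`check_requirement` 等函数名仅为示意):

```python
from importlib import metadata

# 版本区间摘自前文的 pip 安装命令:(下限, 上限),None 表示不作限制
REQUIRED = {
    "transformers": ("4.51.0", "4.51.0"),
    "torch": ("2.3.0", "2.8.0"),
    "torchaudio": (None, "2.8.0"),
    "minicpmo-utils": ("1.0.2", None),
}

def version_tuple(v):
    """把 '2.8.0+cu121' 这类版本号转成可比较的整数元组(忽略本地版本后缀)。"""
    v = v.split("+")[0]
    parts = []
    for p in v.split(".")[:3]:
        digits = "".join(ch for ch in p if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def check_requirement(pkg, low, high):
    """返回一行可读的检查结果,不抛异常。"""
    try:
        installed = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        return f"{pkg}: 未安装"
    v = version_tuple(installed)
    ok = (low is None or version_tuple(low) <= v) and (high is None or v <= version_tuple(high))
    return f"{pkg} {installed}: {'OK' if ok else '版本不在建议区间内'}"

if __name__ == "__main__":
    for pkg, (low, high) in REQUIRED.items():
        print(check_requirement(pkg, low, high))
```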
+ + +### 双工全模态模式 + +全双工流式推理,支持实时或录制视频的对话场景。 + +
+点击展开双工全模态模式示例代码 + +```python +import librosa +import torch +from minicpmo.utils import generate_duplex_video, get_video_frame_audio_segments +from transformers import AutoModel + +# Load model and convert to duplex mode +model = AutoModel.from_pretrained( + "openbmb/MiniCPM-o-4_5", + trust_remote_code=True, + attn_implementation="sdpa", # or "flash_attention_2" + torch_dtype=torch.bfloat16, +) +model.eval().cuda() +model = model.as_duplex() + +# Load video and reference audio +video_path = "assets/omni_duplex1.mp4" +ref_audio_path = "assets/HT_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) + +# Extract video frames and audio segments +video_frames, audio_segments, stacked_frames = get_video_frame_audio_segments( + video_path, stack_frames=1, use_ffmpeg=True, adjust_audio_length=True +) + +# Prepare duplex session with system prompt and voice reference +model.prepare( + prefix_system_prompt="Streaming Omni Conversation.", + ref_audio=ref_audio, + prompt_wav_path=ref_audio_path, +) + +results_log = [] +timed_output_audio = [] + +# Process each chunk in streaming fashion +for chunk_idx in range(len(audio_segments)): + audio_chunk = audio_segments[chunk_idx] if chunk_idx < len(audio_segments) else None + frame = video_frames[chunk_idx] if chunk_idx < len(video_frames) else None + frame_list = [] + if frame is not None: + frame_list.append(frame) + if stacked_frames is not None and chunk_idx < len(stacked_frames) and stacked_frames[chunk_idx] is not None: + frame_list.append(stacked_frames[chunk_idx]) + + # Step 1: Streaming prefill + model.streaming_prefill( + audio_waveform=audio_chunk, + frame_list=frame_list, + max_slice_nums=1, # Increase for HD mode (e.g., [2, 1] for stacked frames) + batch_vision_feed=False, # Set True for faster processing + ) + + # Step 2: Streaming generate + result = model.streaming_generate( + prompt_wav_path=ref_audio_path, + max_new_speak_tokens_per_chunk=20, + decode_mode="sampling", + ) + + if 
result["audio_waveform"] is not None: + timed_output_audio.append((chunk_idx, result["audio_waveform"])) + + chunk_result = { + "chunk_idx": chunk_idx, + "is_listen": result["is_listen"], + "text": result["text"], + "end_of_turn": result["end_of_turn"], + "current_time": result["current_time"], + "audio_length": len(result["audio_waveform"]) if result["audio_waveform"] is not None else 0, + } + results_log.append(chunk_result) + + print("listen..." if result["is_listen"] else f"speak> {result['text']}") + +# Generate output video with AI responses +# Please install Chinese fonts (fonts-noto-cjk or fonts-wqy-microhei) to render CJK subtitles correctly. +# apt-get install -y fonts-noto-cjk fonts-wqy-microhei +# fc-cache -fv +generate_duplex_video( + video_path=video_path, + output_video_path="duplex_output.mp4", + results_log=results_log, + timed_output_audio=timed_output_audio, + output_sample_rate=24000, +) +``` + +
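
上面的循环把每个 chunk 的回复音频以 `(chunk_idx, waveform)` 的形式收集在 `timed_output_audio` 中。如果不需要合成视频,也可以按时间轴把这些片段拼成一条完整波形后单独保存。下面是一个思路示意(假设每个 chunk 对应 1 秒、采样率与上文 `output_sample_rate` 一致为 24000;`assemble_timed_audio` 为示意函数名):

```python
import numpy as np

def assemble_timed_audio(timed_chunks, sr=24000, chunk_seconds=1.0):
    """把 (chunk_idx, waveform) 片段按各自的时间偏移叠加成一条完整波形。

    假设第 idx 个 chunk 的音频从第 idx * chunk_seconds 秒开始播放。
    """
    if not timed_chunks:
        return np.zeros(0, dtype=np.float32)
    # 总长度 = 各片段中 "起点 + 自身长度" 的最大值
    total = 0
    for idx, wav in timed_chunks:
        total = max(total, int(idx * chunk_seconds * sr) + len(wav))
    out = np.zeros(total, dtype=np.float32)
    for idx, wav in timed_chunks:
        start = int(idx * chunk_seconds * sr)
        out[start:start + len(wav)] += np.asarray(wav, dtype=np.float32)
    return out

# 用法示意:
# waveform = assemble_timed_audio(timed_output_audio)
# import soundfile as sf
# sf.write("duplex_output.wav", waveform, samplerate=24000)
```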
+ + +### 单工全模态模式 + +#### 对话推理 + +
+点击展开对话推理示例代码 + +```python +from minicpmo.utils import get_video_frame_audio_segments + +model = ... +model.init_tts(streaming=False) + +video_path = "assets/Skiing.mp4" + +# Optional: Set reference audio for voice cloning +ref_audio_path = "assets/HT_ref_audio.wav" +sys_msg = model.get_sys_prompt(ref_audio=ref_audio_path, mode="omni", language="en") + +# Use stack_frames=5 for high refresh rate mode +video_frames, audio_segments, stacked_frames = get_video_frame_audio_segments(video_path, stack_frames=1) +omni_contents = [] +for i in range(len(video_frames)): + omni_contents.append(video_frames[i]) + omni_contents.append(audio_segments[i]) + if stacked_frames is not None and stacked_frames[i] is not None: + omni_contents.append(stacked_frames[i]) + +msg = {"role": "user", "content": omni_contents} +msgs = [sys_msg, msg] + +# Set generate_audio=True and output_audio_path to save TTS output +generate_audio = True +output_audio_path = "output.wav" + +res = model.chat( + msgs=msgs, + max_new_tokens=4096, + do_sample=True, + temperature=0.7, + use_tts_template=True, + enable_thinking=False, + omni_mode=True, # Required for omni inference + generate_audio=generate_audio, + output_audio_path=output_audio_path, + max_slice_nums=1, # Increase for HD mode +) +print(res) +``` + +
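
如果想在上面这轮对话的基础上继续追问,可以沿用 `msgs` 历史:把助手回复和新一轮用户输入依次追加后再次调用 `model.chat`。下面是一个整理消息历史的小工具示意(`append_turn` 为示意函数名,追问内容仅作演示):

```python
def append_turn(msgs, assistant_reply, user_content):
    """追加上一轮助手回复与新一轮用户输入,返回新的消息历史(不修改原列表)。"""
    new_msgs = list(msgs)
    new_msgs.append({"role": "assistant", "content": [assistant_reply]})
    new_msgs.append({"role": "user", "content": list(user_content)})
    return new_msgs

# 用法示意(res 为上一轮 model.chat 返回的文本):
# msgs = append_turn(msgs, res, ["请用一句话总结视频的主要内容。"])
# res2 = model.chat(msgs=msgs, max_new_tokens=512, do_sample=True, temperature=0.7,
#                   use_tts_template=True, omni_mode=True, generate_audio=False)
```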
+ +#### 流式推理 + +
+点击展开流式推理示例代码 + +```python +import librosa +import numpy as np +import soundfile as sf +import torch +from minicpmo.utils import get_video_frame_audio_segments + +model = ... +model.init_tts(streaming=True) + +# Reset session for a new conversation (clears KV cache) +model.reset_session() + +# Optional: Load reference audio for voice cloning +ref_audio_path = "assets/HT_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) +model.init_token2wav_cache(ref_audio) + +session_id = "demo" + +# Extract video frames and audio segments (use stack_frames=5 for high refresh rate mode) +video_path = "assets/Skiing.mp4" +video_frames, audio_segments, stacked_frames = get_video_frame_audio_segments(video_path, stack_frames=1) + +# Build omni contents list +omni_contents = [] +for i in range(len(video_frames)): + omni_contents.append(video_frames[i]) + omni_contents.append(audio_segments[i]) + if stacked_frames is not None and stacked_frames[i] is not None: + omni_contents.append(stacked_frames[i]) + +generate_audio = False +output_audio_path = "output.wav" + +# Step 1: Prefill system prompt +sys_msg = model.get_sys_prompt(ref_audio=ref_audio, mode="omni", language="en") +model.streaming_prefill(session_id=session_id, msgs=[sys_msg]) + +# Step 2: Prefill omni chunks (is_last_chunk=True only for the last audio chunk) +audio_indices = [i for i, c in enumerate(omni_contents) if isinstance(c, np.ndarray)] +last_audio_idx = audio_indices[-1] if audio_indices else -1 + +for idx, content in enumerate(omni_contents): + is_last_audio_chunk = idx == last_audio_idx + msgs = [{"role": "user", "content": [content]}] + model.streaming_prefill(session_id=session_id, msgs=msgs, omni_mode=True, is_last_chunk=is_last_audio_chunk) + +# Step 3: Generate response +iter_gen = model.streaming_generate( + session_id=session_id, + generate_audio=generate_audio, + use_tts_template=True, + enable_thinking=False, + do_sample=True, +) + +audios = [] +text = "" + +if 
generate_audio: + for wav_chunk, text_chunk in iter_gen: + audios.append(wav_chunk) + text += text_chunk + + generated_waveform = torch.cat(audios, dim=-1)[0] + sf.write(output_audio_path, generated_waveform.cpu().numpy(), samplerate=24000) + + print("Text:", text) + print("Audio saved to output.wav") +else: + for text_chunk, is_finished in iter_gen: + text += text_chunk + print("Text:", text) +``` + +
+ + +### 语音与音频模式 + +#### 自定义音色与人设的语音对话 + +MiniCPM-o-4.5 可根据音频提示和文本人设进行角色扮演。模型会模仿角色的声音特征,并在文本回复中采用其语言风格,同时遵循文本人设中的设定。在此模式下,MiniCPM-o-4.5 的语音输出**更加自然、更具人情味**。 + +
+点击展开自定义音色与人设示例代码 + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +# Set reference audio for voice cloning +ref_audio_path = "assets/system_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) + +# For English conversation with text profile +sys_msg = { + "role": "system", + "content": [ + "Clone the voice in the provided audio prompt.", + ref_audio, + "Please chat with the user in a highly human-like and oral style." + "You are Elon Musk, CEO of Tesla and SpaceX. You speak directly and casually, often with dry humor. You're passionate about Mars, sustainable energy, and pushing humanity forward. Speak bluntly with occasional dark humor. Use simple logic and don't sugarcoat things. Don't be diplomatic. Say what you actually think, even if it's controversial. Keep responses around 100 words. Don't ramble." + ] +} + + +# For English conversation with no text profile +sys_msg = { + "role": "system", + "content": [ + "Clone the voice in the provided audio prompt.", + ref_audio, + "Your task is to be a helpful assistant using this voice pattern. Please answer the user's questions seriously and in a high quality. Please chat with the user in a high naturalness style." 
+ ] +} + +# For Chinese Conversation with no text profile +sys_msg = { + "role": "system", + "content": [ + "根据输入的音频提示生成相似的语音。", + librosa.load("assets/system_ref_audio_2.wav", sr=16000, mono=True)[0], + "作为助手,你将使用这种声音风格说话。 请认真、高质量地回复用户的问题。 请用高自然度的方式和用户聊天。" + ] +} + + +# For Chinese Conversation with text profile +sys_msg = { + "role": "system", + "content": [ + "根据输入的音频提示生成相似的语音。", + ref_audio, + "你是一个具有以上声音风格的AI助手。请用高拟人度、口语化的方式和用户聊天。" + "你是一名心理咨询师兼播客主理人,热爱创作与深度对话。你性格细腻、富有共情力,善于从个人经历中提炼哲思。语言风格兼具理性与诗意,常以隐喻表达内在体验。" + ] +} + + +# Round 1 +user_audio, _ = librosa.load("user_question.wav", sr=16000, mono=True) +user_msg = {"role": "user", "content": [user_audio]} + +msgs = [sys_msg, user_msg] + +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.7, + output_audio_path="result_roleplay_round_1.wav", +) +print("Round 1:", res) + +# Round 2 (multi-turn conversation) +msgs.append({"role": "assistant", "content": res}) +user_audio_2, _ = librosa.load("user_question_2.wav", sr=16000, mono=True) +msgs.append({"role": "user", "content": [user_audio_2]}) + +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.7, + output_audio_path="result_roleplay_round_2.wav", +) +print("Round 2:", res) +``` + +
+ + +#### 多功能有调性的 AI 语音助手 + +基于精心设计的后训练数据和专业配音演员录制的语料,模型开箱即用,提供高质量的语音交互体验。输出声音甜美富有表现力,韵律自然,节奏、重音和停顿恰到好处,在日常对话中极具亲和力。同时支持故事讲述和叙事性语音,表达连贯且引人入胜。此外,MiniCPM-o-4.5 支持高级语音指令控制。通过高层语音命令数据,用户可调节语速、情感语气、词级重音和停顿。模型还能理解情境线索,在多轮对话中灵活切换说话风格,实现更具表现力和可控性的语音交互。 + +
+点击展开多功能语音助手示例代码 + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +# Set reference audio for voice style +ref_audio_path = "assets/HT_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) + +# For Chinese Conversation +sys_msg = { + "role": "system", + "content": [ + "模仿输入音频中的声音特征。", + ref_audio, + "你的任务是用这种声音模式来当一个助手。请认真、高质量地回复用户的问题。请用高自然度的方式和用户聊天。你是由面壁智能开发的人工智能助手:面壁小钢炮。" + ] +} + +# For English Conversation +sys_msg = { + "role": "system", + "content": [ + "Clone the voice in the provided audio prompt.", + ref_audio, + "Please assist users while maintaining this voice style. Please answer the user's questions seriously and in a high quality. Please chat with the user in a highly human-like and oral style. You are a helpful assistant developed by ModelBest: MiniCPM-Omni." + ] +} + +##### Mimick + +# Load user's audio question +user_audio, _ = librosa.load("user_question.wav", sr=16000, mono=True) +user_question = {"role": "user", "content": [user_audio]} + +# Round 1 +msgs = [sys_msg, user_question] +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.7, + output_audio_path="result_assistant_round_1.wav", +) + +# Round 2 +msgs.append({"role": "assistant", "content": res}) +user_audio_2, _ = librosa.load("user_question_2.wav", sr=16000, mono=True) +msgs.append({"role": "user", "content": [user_audio_2]}) +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.3, + output_audio_path="result_assistant_round_2.wav", +) +print(res) +``` + +
+ +##### 可配置声音的语音对话 + +`MiniCPM-o-4.5` 支持零样本文本转语音(TTS)。在此模式下,模型作为高自然度的 TTS 系统运行,可复刻参考音频的音色。 + +
+点击展开可配置声音示例代码 + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +# For both Chinese and English +ref_audio_path = "assets/HT_ref_audio.wav" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) +sys_msg = {"role": "assistant", "content": [ + "模仿音频样本的音色并生成新的内容。", + ref_audio, + "请用这种声音风格来为用户提供帮助。 直接作答,不要有冗余内容" +]} + +# For English +user_msg = { + "role": "user", + "content": [ + "请朗读以下内容。" + " " + "I have a wrap up that I want to offer you now, a conclusion to our work together." + ] +} + +# For Chinese +user_msg = { + "role": "user", + "content": [ + "请朗读以下内容。" + " " + "你好,欢迎来到艾米说科幻,我是艾米。" + ] +} + +msgs = [sys_msg, user_msg] +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.1, + output_audio_path="result_voice_cloning.wav", +) +``` + +
+ + +#### 实时语音对话 + +
+点击展开实时语音对话示例代码 + +```python +import librosa +import numpy as np +import torch +import soundfile as sf + +model = ... + +# Set reference audio for voice style +ref_audio_path = "ref_audio_path" +ref_audio, _ = librosa.load(ref_audio_path, sr=16000, mono=True) + +# For Chinese Conversation +sys_msg = { + "role": "system", + "content": [ + "模仿输入音频中的声音特征。", + ref_audio, + "你的任务是用这种声音模式来当一个助手。请认真、高质量地回复用户的问题。请用高自然度的方式和用户聊天。你是由面壁智能开发的人工智能助手:面壁小钢炮。" + ] +} + +# You can use each type of system prompt mentioned above in streaming speech conversation + +# Reset state +model.init_tts(streaming=True) +model.reset_session(reset_token2wav_cache=True) +model.init_token2wav_cache(prompt_speech_16k=ref_audio) + +session_id = "demo" +msgs = [...] +# First, prefill system turn +for msg in msgs: + if msg["role"] == "system": + model.streaming_prefill( + session_id=session_id, + msgs=[msg], + omni_mode=False, + is_last_chunk=False, + ) + +# Here we simulate realtime speech conversation by splitting whole user input audio into chunks of 1s. 
+user_audio, _ = librosa.load("user_audio.wav", sr=16000, mono=True) + +IN_SAMPLE_RATE = 16000 +CHUNK_SAMPLES = IN_SAMPLE_RATE +OUT_SAMPLE_RATE = 24000 + +if user_audio is not None and len(user_audio) > 0: + total_samples = len(user_audio) + num_chunks = (total_samples + CHUNK_SAMPLES - 1) // CHUNK_SAMPLES + + for chunk_idx in range(num_chunks): + start = chunk_idx * CHUNK_SAMPLES + end = min((chunk_idx + 1) * CHUNK_SAMPLES, total_samples) + chunk_audio = user_audio[start:end] + + is_last_chunk = (chunk_idx == num_chunks - 1) + + user_msg = {"role": "user", "content": [chunk_audio]} + + # For each 1s audio chunk, perform streaming_prefill once to reduce first-package latency + model.streaming_prefill( + session_id=session_id, + msgs=[user_msg], + omni_mode=False, + is_last_chunk=is_last_chunk, + ) + +# When VAD actives, make the model generate response in a streaming manner +# Here we create an iterator without returning anything +audio_iter = model.streaming_generate( + session_id=session_id, + generate_audio=True, + max_new_tokens=512, + do_sample=True, + length_penalty=1.1, # For realtime speech conversation mode, we suggest length_penalty=1.1 to improve response content +) + +# Yield 1s-length audio response from the iterator just created in a streaming manner +audio_chunks = [] +for generated_audio in audio_iter: + if generated_audio is None: + break + + if isinstance(generated_audio, tuple): + generated_audio = generated_audio[0] + if generated_audio is None: + break + + if isinstance(generated_audio, torch.Tensor): + wav = generated_audio.detach().cpu().numpy() + else: + wav = np.asarray(generated_audio) + + if wav.ndim == 2: + wav = wav[0] + + wav = wav.astype(np.float32) + audio_chunks.append(wav) + +output_audio_path = "realtime_speech.wav" +if audio_chunks: + full_audio = np.concatenate(audio_chunks, axis=-1) + sf.write(output_audio_path, full_audio, OUT_SAMPLE_RATE) +``` + +
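The 1-second chunking simulated above can be factored into a small standalone helper. This is an illustrative sketch only (`chunk_audio` is a hypothetical helper of ours, not part of the minicpmo API), assuming 16 kHz mono float input:

```python
import numpy as np

def chunk_audio(audio: np.ndarray, sample_rate: int = 16000, chunk_seconds: float = 1.0) -> list:
    """Split a mono waveform into fixed-length chunks; the last chunk may be shorter."""
    chunk_samples = int(sample_rate * chunk_seconds)
    return [audio[i:i + chunk_samples] for i in range(0, len(audio), chunk_samples)]

# 2.5 s of audio at 16 kHz -> chunks of 16000, 16000, and 8000 samples
chunks = chunk_audio(np.zeros(40000, dtype=np.float32))
print([len(c) for c in chunks])  # [16000, 16000, 8000]
```

Each element of `chunks` can then be wrapped in a user message and prefilled one by one, with `is_last_chunk=True` for the final element.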
+ + +##### 更多语音任务 + +`Mimick` 任务用于评估模型的端到端语音建模能力。模型接收音频输入,进行转录后高保真地重建原始音频,保留详细的声学特征、韵律情感及语义信息。重建音频与原始音频的相似度越高,表明端到端语音建模能力越强。 + +
+点击展开 Mimick 示例代码 + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +system_prompt = "You are a helpful assistant. You can accept video, audio, and text input and output voice and text. Respond with just the answer, no redundancy." + +mimick_prompt = "Please repeat the following speech in the appropriate language." + +audio_input, _ = librosa.load("assets/Trump_WEF_2018_10s.mp3", sr=16000, mono=True) + +msgs = [ + {"role": "system", "content": [system_prompt]}, + {"role": "user", "content": [mimick_prompt, audio_input]} + ] + +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + temperature=0.1, + generate_audio=True, + output_audio_path="output_mimick.wav", +) +``` + +
+ + +#### 多种音频理解任务 + +`MiniCPM-o-4.5` 可处理多种音频理解任务,包括语音识别(ASR)、说话人分析、通用音频描述和声音场景标注。 + +对于音频转文本任务,可使用以下提示词: + +- ASR (Chinese, or AST EN→ZH): `请仔细听这段音频片段,并将其内容逐字记录。` +- ASR (English, or AST ZH→EN): `Please listen to the audio snippet carefully and transcribe the content.` +- Speaker Analysis: `Based on the speaker's content, speculate on their gender, condition, age range, and health status.` +- General Audio Caption: `Summarize the main content of the audio.` +- Sound Scene Tagging: `Utilize one keyword to convey the audio's content or the associated scene.` + +
+点击展开音频理解示例代码 + +```python +import librosa + +model = ... +model.init_tts(streaming=False) + +# Load the audio to be transcribed/analyzed +audio_input, _ = librosa.load("assets/Trump_WEF_2018_10s.mp3", sr=16000, mono=True) + +# Choose a task prompt (see above for options) +task_prompt = "Please listen to the audio snippet carefully and transcribe the content.\n" +msgs = [{"role": "user", "content": [task_prompt, audio_input]}] + +res = model.chat( + msgs=msgs, + do_sample=True, + max_new_tokens=512, + use_tts_template=True, + generate_audio=True, + temperature=0.3, + output_audio_path="result_audio_understanding.wav", +) +print(res) +``` + +
+ + +### 纯视觉模式 + +`MiniCPM-o-4.5` 的推理方式与 `MiniCPM-V-4.5` 一致。 + +#### 单图对话 + +
+点击展开单图对话示例代码 + +```python +import torch +from PIL import Image +from transformers import AutoModel + +model = AutoModel.from_pretrained( + "openbmb/MiniCPM-o-4_5", + trust_remote_code=True, + attn_implementation="sdpa", # or "flash_attention_2" + torch_dtype=torch.bfloat16, + init_vision=True, + init_audio=False, + init_tts=False, +) +model.eval().cuda() + +image = Image.open("assets/fossil.png").convert("RGB") +question = "What is in the image?" +msgs = [{"role": "user", "content": [image, question]}] + +res = model.chat(msgs=msgs, use_tts_template=False) +print(res) +``` + +
+ +#### 多图对话 + +
+点击展开多图输入示例代码 + +```python +import torch +from PIL import Image +from transformers import AutoModel + +model = ... + +image1 = Image.open("assets/highway.png").convert("RGB") +image2 = Image.open("assets/fossil.png").convert("RGB") +question = "Compare image 1 and image 2, tell me about the differences between them." +msgs = [{"role": "user", "content": [image1, image2, question]}] + +answer = model.chat(msgs=msgs, use_tts_template=False, enable_thinking=False) +print(answer) +``` + +
+ +#### In-Context 少样本推理 + +
+点击展开 In-Context 少样本推理示例代码 + +```python +from PIL import Image + +model = ... + +question = "production date" +image1 = Image.open("example1.jpg").convert("RGB") +answer1 = "2023.08.04" +image2 = Image.open("example2.jpg").convert("RGB") +answer2 = "2007.04.24" +image_test = Image.open("test.jpg").convert("RGB") + +msgs = [ + {"role": "user", "content": [image1, question]}, + {"role": "assistant", "content": [answer1]}, + {"role": "user", "content": [image2, question]}, + {"role": "assistant", "content": [answer2]}, + {"role": "user", "content": [image_test, question]}, +] + +answer = model.chat(msgs=msgs, use_tts_template=False, enable_thinking=False) +print(answer) +``` + +
+ +#### 视频对话 + +
+点击展开视频输入示例代码 + +```python +import torch +from minicpmo.utils import get_video_frame_audio_segments +from transformers import AutoModel + +model = ... + +video_path = "assets/Skiing.mp4" +video_frames, _, _ = get_video_frame_audio_segments(video_path) +print("num frames:", len(video_frames)) + +question = "Describe the video" +msgs = [{"role": "user", "content": video_frames + [question]}] + +answer = model.chat( + msgs=msgs, + max_new_tokens=128, + use_image_id=False, + max_slice_nums=1, + use_tts_template=False, + enable_thinking=False, # Set True to enable thinking mode +) +print(answer) +``` + +
+ +### 结构化内容输入 + +
+点击展开结构化内容输入 + +`chat` 方法支持两种消息内容格式: + +**原生格式** — 直接传入 Python 对象: +```python +msgs = [{"role": "user", "content": [pil_image, audio_ndarray, "Describe this."]}] +``` + +**OpenAI 兼容格式** — 使用结构化字典: +```python +msgs = [ + { + "role": "user", + "content": [ + {"type": "image_url", "image_url": {"url": "/path/to/image.jpg"}}, + {"type": "audio_url", "audio_url": {"url": "/path/to/audio.wav"}}, + {"type": "video_url", "video_url": {"url": "/path/to/video.mp4", "use_audio": True}}, + {"type": "text", "text": "Describe this."} + ] + } +] +``` + +**支持的类型:** + +| 类型 | 输入格式 | 转换为 | +|------|----------|--------| +| `text` | `{"type": "text", "text": "..."}` | `str` | +| `image_url` | `{"type": "image_url", "image_url": {"url": "..."}}` | `PIL.Image` | +| `audio_url` | `{"type": "audio_url", "audio_url": {"url": "..."}}` | `np.ndarray`(16kHz 单声道) | +| `video_url` | `{"type": "video_url", "video_url": {"url": "...", "stack_frames": 1, "use_audio": True}}` | `List[Image, ndarray, ...]` | + +- **URL 来源**:支持本地文件路径或 `http://`/`https://` URL +- **混合格式**:原生对象和结构化字典可在同一 content 列表中混用 + +
+ + +## 框架支持 + +### FlagOS + +为解决不同 AI 芯片大规模落地应用,北京智源研究院联合众多科研机构、芯片企业、系统厂商、算法和软件相关单位等国内外机构共同发起并创立了 FlagOS 开源社区。 + +FlagOS 社区致力于打造面向多种 AI 芯片的统一、开源的系统软件栈,包括大型算子库、统一AI编译器、并行训推框架、统一通信库等核心开源项目,构建「模型-系统-芯片」三层贯通的开放技术生态,通过“一次开发跨芯迁移”释放硬件计算潜力,打破不同芯片软件栈之间生态隔离,有效降低开发者的迁移成本。FlagOS 社区构建人工智能软硬件生态,突破单一闭源垄断,推动AI硬件技术大范围落地发展,立足中国、拥抱全球合作。 +官网速递:https://flagos.io + +
+点击展开 FlagOS 详细 + +#### FlagOS 多 AI 芯片支持 + +基于FlagOS极短时间内适配MiniCPM-o 4.5到 6 种不同的 AI 芯片,得益于众智 FlagOS 的多芯片统一 AI 系统软件栈的能力。目前,在FlagOS团队构建的面向多架构人工智能芯片的大模型自动迁移、适配与发布平台FlagRelease上,已发布MiniCPM-o-4.5的多芯片版本。细节如下: + +| Vendor | ModelScope | Huggingface | +|:----------------|:------------:|:------------:| +| Nvidia | [MiniCPM-o-4.5-nvidia-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) | [MiniCPM-o-4.5-nvidia-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) | +| Hygon-BW1000 | [MiniCPM-o-4.5-hygon-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-hygon-FlagOS) | [MiniCPM-o-4.5-hygon-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-hygon-FlagOS) | +| Metax-C550 | [MiniCPM-o-4.5-metax-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-metax-FlagOS) | [MiniCPM-o-4.5-metax-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-metax-FlagOS) | +| Iluvatar-BIV150 | [MiniCPM-o-4.5-iluvatar-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-iluvatar-FlagOS) | [MiniCPM-o-4.5-iluvatar-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-iluvatar-FlagOS) | +| Ascend-A3 | [MiniCPM-o-4.5-ascend-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-ascend-FlagOS) | [MiniCPM-o-4.5-ascend-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-ascend-FlagOS) | +| Zhenwu-810E | [MiniCPM-o-4.5-zhenwu-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-zhenwu-FlagOS) | [MiniCPM-o-4.5-zhenwu-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-zhenwu-FlagOS) | + +##### 综合评估 + +###### Transformers–FlagOS 版本 + +在多后端环境下使用 `USE_FLAGOS=1` 与在 NVIDIA CUDA 上使用 `USE_FLAGOS=0` 的精度差异 + +| 指标 | FlagOS 后端 | 与 Nvidia-CUDA 相比差异 | +|:-------------------------|:---------------:|:---------------------------:| +| Video-MME 0-shot avg@1 ↑ | Nvidia | 0.33% | +| Video-MME 0-shot avg@1 ↑ | Hygon-BW1000 | 0.17% | +| Video-MME 0-shot avg@1 ↑ | Ascend-A3 | 0.50% | +| Video-MME 0-shot avg@1 ↑ | 
Iluvatar-BIV150 | 1.83% |
+| Video-MME 0-shot avg@1 ↑ | Metax-C550 | 0.75% |
+
+###### vLLM–FlagOS Build
+
+Accuracy difference between running with `USE_FLAGGEMS=1 FLAGCX_PATH=/workspace/FlagCX` on NVIDIA (or `USE_FLAGGEMS=1` on Zhenwu-810E) and launching the vLLM server directly on the NVIDIA platform
+
+| Metric (avg@1) | Nvidia-FlagOS vs Nvidia-CUDA | zhenwu-FlagOS vs Nvidia-CUDA |
+|:--------------------|:------------------------------------------------:|:------------------------------------------------:|
+| CMMMU ↑ | 0.72% | 3.5% |
+| MMMU ↑ | 1.44% | 1.18% |
+| MMMU_Pro_standard ↑ | 0.83% | 0.22% |
+| MM-Vet v2 ↑ | 0.46% | 1.33% |
+| OCRBench ↑ | 0.10% | 1% |
+| CII-Bench ↑ | 0.40% | 0.13% |
+| Blink ↑ | 1.90% | 2.19% |
+
+
+#### How to Use FlagOS
+
+##### Performance Acceleration with FlagOS on Nvidia
+
+For the Transformers build, with accuracy aligned between the CUDA and FlagOS ecosystems, FlagOS reduces total workload execution time by about 6% compared with CUDA.
+
+###### From FlagRelease (recommended)
+
+FlagRelease is a platform built by the FlagOS team for automatically migrating, adapting, and releasing large models across multi-architecture AI chips; multi-chip builds of MiniCPM-o-4.5 have been published there. The FlagRelease images already bundle the required software packages, so no manual installation is needed.
+
+- Key versions in the FlagRelease image
+
+  | Component | Version |
+  |:------------------------|:------------------------------------|
+  | GPU driver | 570.158.01 |
+  | CUDA SDK Build | cuda_13.0.r13.0/compiler.36424714_0 |
+  | FlagTree | 0.4.0+3.5 |
+  | FlagGems | 4.2.1rc0 |
+  | vllm & vllm-plugin-fl | 0.13.0 + vllm_fl 0.0.0 |
+  | FlagCX | 0.1.0 |
+
+- FlagRelease quick links
+
+  | Vendor | ModelScope | Huggingface |
+  |:-----------|:------------:|:------------:|
+  | Nvidia | [MiniCPM-o-4.5-nvidia-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) | [MiniCPM-o-4.5-nvidia-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) |
+  | Hygon-BW1000 | [MiniCPM-o-4.5-hygon-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-hygon-FlagOS) | [MiniCPM-o-4.5-hygon-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-hygon-FlagOS) |
+  | Metax-C550 | [MiniCPM-o-4.5-metax-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-metax-FlagOS) | [MiniCPM-o-4.5-metax-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-metax-FlagOS) |
+  | 
Iluvatar-BIV150 | [MiniCPM-o-4.5-iluvatar-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-iluvatar-FlagOS) | [MiniCPM-o-4.5-iluvatar-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-iluvatar-FlagOS) |
+  | Ascend-A3 | [MiniCPM-o-4.5-ascend-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-ascend-FlagOS) | [MiniCPM-o-4.5-ascend-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-ascend-FlagOS) |
+  | Zhenwu-810E | [MiniCPM-o-4.5-zhenwu-FlagOS](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-zhenwu-FlagOS) | [MiniCPM-o-4.5-zhenwu-FlagOS](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-zhenwu-FlagOS) |
+
+
+##### From Scratch
+
+- Requires a Python 3.12 environment with GLIBC_2.39, GLIBCXX_3.4.33, and CXXABI_1.3.15
+
+###### Transformers Build
+
+- Install the FlagOS operator library
+
+  Official repository: https://github.com/flagos-ai/FlagGems
+
+  ```shell
+  pip install flag-gems==4.2.1rc0
+  ```
+
+- Install the FlagOS compiler
+
+  Official repository: https://github.com/flagos-ai/flagtree
+
+  Underlying dependency versions: https://github.com/flagos-ai/FlagTree/blob/main/documents/build.md#tips-for-building
+
+  ```shell
+  pip uninstall triton
+
+  python3 -m pip install flagtree==0.4.0+3.5 --index-url=https://resource.flagos.net/repository/flagos-pypi-hosted/simple --trusted-host=https://resource.flagos.net
+  ```
+
+- Enable acceleration
+
+  Prepend `USE_FLAGOS=1` to the command you want to run. For example, if you run
+  ```shell
+  python3 generate_speech_from_video.py
+  ```
+
+  to have MiniCPM-o-4.5 generate a spoken answer from video content, you can instead run
+  ```shell
+  USE_FLAGOS=1 python3 generate_speech_from_video.py
+  ```
+  to accelerate it with FlagOS.
+
+###### vLLM Build
+
+- Install the FlagOS operator library
+
+  Official repository: https://github.com/flagos-ai/FlagGems
+  ```shell
+  pip install flag-gems==4.2.1rc0
+  pip install triton==3.5.1
+  ```
+
+- Enable acceleration
+  Prepend `USE_FLAGOS=1` to the command you want to run. For example, if you launch the MiniCPM-o-4.5 server with
+  ```shell
+  vllm serve ${model_path} --dtype auto --gpu_memory_utilization 0.9 --trust-remote-code --max-num-batched-tokens 2048 --served-model-name cpmo --port ${Port}
+  ```
+
+  you can instead run
+  ```shell
+  USE_FLAGOS=1 vllm serve ${model_path} --dtype auto --gpu_memory_utilization 0.9 
--trust-remote-code --max-num-batched-tokens 2048 --served-model-name cpmo --port ${Port}
+  ```
+  to accelerate it with FlagOS.
+
+#### Using the FlagOS Unified Multi-Chip Backend Plugin
+
+[vllm-plugin-FL](https://github.com/flagos-ai/vllm-plugin-FL) is a plugin for the vLLM inference/serving framework. Built on the unified multi-chip backend of FlagOS, it extends vLLM's functionality and performance across diverse hardware environments.
+
+##### Using vllm-plugin-FL
+
+| Vendor | From scratch | From FlagRelease |
+|:-------|:-----------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------|
+| Nvidia | [vllm-plugin-FL/MiniCPM-o-4.5](https://github.com/flagos-ai/vllm-plugin-FL/blob/main/examples/minicpm/README.md) | [MiniCPM-o-4.5-ModelScope](https://modelscope.cn/models/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS), [MiniCPM-o-4.5-Huggingface](https://huggingface.co/FlagRelease/MiniCPM-o-4.5-nvidia-FlagOS) |
+
+ + + +### vLLM、SGLang、llama.cpp、Ollama + +MiniCPM-V 和 MiniCPM-o 模型推理还适配 vLLM、SGLang、llama.cpp、Ollama 等框架。具体用法可以参考我们的[使用指南](https://github.com/OpenSQZ/MiniCPM-V-Cookbook)。 + + +### LLaMA-Factory、SWIFT + +MiniCPM-V 和 MiniCPM-o 模型支持通过 LLaMA-Factory 和 SWIFT 等框架训练。具体用法可以参考我们的[使用指南](https://github.com/OpenSQZ/MiniCPM-V-Cookbook)。 + ## 基于 MiniCPM-V & MiniCPM-o 的更多项目 + - [text-extract-api](https://github.com/CatchTheTornado/text-extract-api): 利用 OCR 和 Ollama 模型的本地化文档提取与解析API,支持PDF、Word、PPTX ![GitHub Repo stars](https://img.shields.io/github/stars/CatchTheTornado/text-extract-api) - [comfyui_LLM_party](https://github.com/heshengtao/comfyui_LLM_party): 基于 ComfyUI 的 LLM Agent 框架,用于构建并集成 LLM 工作流 ![GitHub Repo stars](https://img.shields.io/github/stars/heshengtao/comfyui_LLM_party) - [Ollama-OCR](https://github.com/imanoop7/Ollama-OCR): 通过 Ollama 调用视觉语言模型,从图像和 PDF 中提取文本的 OCR 工具 ![GitHub Repo stars](https://img.shields.io/github/stars/imanoop7/Ollama-OCR) @@ -1950,23 +2803,19 @@ pip install vllm - [paperless-gpt](https://github.com/icereed/paperless-gpt): 利用LLM和视觉模型,为 paperless-ngx 实现AI驱动的文档自动化处理与OCR功能 ![GitHub Repo stars](https://img.shields.io/github/stars/icereed/paperless-gpt) - [Neuro](https://github.com/kimjammer/Neuro): Neuro-Sama的复刻版,完全依赖消费级硬件上的本地模型运行 ![GitHub Repo stars](https://img.shields.io/github/stars/kimjammer/Neuro) -## FAQs -点击查看 [FAQs](./docs/faqs.md) - - ## 模型局限性 -我们实验发现 MiniCPM-o 2.6 存在一些显著的局限性,需要进一步研究和改进: -- **不稳定的语音输出。** 语音生成可能会受到背景噪音和无意义声音的影响,表现不稳定。 -- **重复响应。** 当遇到连续相似的用户请求时,模型往往会重复相同的回答。 -- **Web Demo 延迟较高。** 用户在使用远程服务器上部署的 web demo 时可能会产生较高延迟。我们推荐用户在本地部署来获得更低延迟的体验。 +我们实验发现 MiniCPM-o 4.5 存在一些显著的局限性,需要进一步研究和改进: +- **基础能力局限性**:全双工多模态实时流的基础能力仍有待进一步提升。 +- **全双工多模态流式模式下语音输出不稳定**:在全双工多模态实时流模式下,语音合成可能会出现字音误读(如多音字或生僻字)。 +- **中英混杂**:在语音和全模态模式下,模型有时会以中英混杂的方式进行回答。 +- **Web Demo 延迟较高**:由于我们的在线 Demo 托管在海外服务器上,用户可能会遇到异常的高延迟或者一部分模型输出丢失。我们建议在本地环境部署 Demo 或在良好的网络连接下使用。 ## 模型协议 * 本仓库中代码依照 [Apache-2.0](https://github.com/OpenBMB/MiniCPM-V/blob/main/LICENSE) 协议开源 - -* 
为帮助我们进一步了解并支持社区用户,若您能考虑填写一份简短的登记问卷,我们将深表感谢。 ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g).
+* To help us better understand and support our community users, we would greatly appreciate it if you could take a moment to fill out a short [questionnaire](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g).
 
 ## Statement
 
@@ -1975,6 +2824,17 @@ pip install vllm
 Users are therefore responsible for evaluating and verifying any content generated by this series of models. We assume no liability for any problems arising from the use of these open-source models, including but not limited to data security issues, public opinion risks, or any risks and problems caused by the models being misled, misused, disseminated, or otherwise improperly exploited.
 
+## Acknowledgements
+
+We sincerely thank the following projects:
+
+* [Qwen3](https://huggingface.co/Qwen/Qwen3-8B) for the LLM backbone
+* [SigLIP2](https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/image_text/README_siglip2.md) for the visual understanding module
+* [Whisper](https://github.com/openai/whisper) for the audio understanding module
+* [CosyVoice2](https://github.com/FunAudioLLM/CosyVoice) and [Step-Audio2](https://github.com/stepfun-ai/Step-Audio2) for the speech tokenizer and the efficient speech waveform decoding module
+* [Transformers](https://github.com/huggingface/transformers)
+
+
 ## Institutions
 
 This project is developed by the following institutions:
@@ -1984,7 +2844,6 @@ pip install vllm
 ## 🌟 Star History
 
-

@@ -2016,8 +2875,6 @@ pip install vllm [VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLPR](https://github.com/OpenBMB/RLPR) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V) - - ## 引用 如果您觉得我们模型/代码/论文有帮助,请给我们 ⭐ 和 引用 📝,感谢! diff --git a/assets/minicpm-o-45-framework.pdf b/assets/minicpm-o-45-framework.pdf new file mode 100644 index 0000000..99ab396 Binary files /dev/null and b/assets/minicpm-o-45-framework.pdf differ diff --git a/assets/minicpm-o-45-framework.png b/assets/minicpm-o-45-framework.png new file mode 100644 index 0000000..a97cb6d Binary files /dev/null and b/assets/minicpm-o-45-framework.png differ diff --git a/assets/minicpm-o-45-radar.png b/assets/minicpm-o-45-radar.png new file mode 100644 index 0000000..869bdc6 Binary files /dev/null and b/assets/minicpm-o-45-radar.png differ diff --git a/assets/minicpm_o_45_main_exp_table.png b/assets/minicpm_o_45_main_exp_table.png new file mode 100644 index 0000000..37faca7 Binary files /dev/null and b/assets/minicpm_o_45_main_exp_table.png differ diff --git a/assets/minicpmo4_5/en_cot.png b/assets/minicpmo4_5/en_cot.png new file mode 100644 index 0000000..95ca375 Binary files /dev/null and b/assets/minicpmo4_5/en_cot.png differ diff --git a/assets/minicpmo4_5/en_doc.png b/assets/minicpmo4_5/en_doc.png new file mode 100644 index 0000000..82bf93c Binary files /dev/null and b/assets/minicpmo4_5/en_doc.png differ diff --git a/assets/minicpmo4_5/video_play.png b/assets/minicpmo4_5/video_play.png new file mode 100644 index 0000000..aa6dfcd Binary files /dev/null and b/assets/minicpmo4_5/video_play.png differ diff --git a/assets/minicpmo4_5/zh_doc.png b/assets/minicpmo4_5/zh_doc.png new file mode 100644 index 0000000..5407fe4 Binary files /dev/null and b/assets/minicpmo4_5/zh_doc.png differ diff --git a/assets/radar_minicpmo4.5.png b/assets/radar_minicpmo4.5.png new file mode 100644 index 0000000..1580c79 Binary files 
/dev/null and b/assets/radar_minicpmo4.5.png differ
diff --git a/docs/minicpm_o2dot6_en.md b/docs/minicpm_o2dot6_en.md
new file mode 100644
index 0000000..0d05d43
--- /dev/null
+++ b/docs/minicpm_o2dot6_en.md
@@ -0,0 +1,964 @@
+## MiniCPM-o 2.6
+
+> Archived at: 2026-02-02
+
+**MiniCPM-o 2.6** is the latest and most capable model in the MiniCPM-o series. The model is built in an end-to-end fashion based on SigLip-400M, Whisper-medium-300M, ChatTTS-200M, and Qwen2.5-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.6, and introduces new features for real-time speech conversation and multimodal live streaming. Notable features of MiniCPM-o 2.6 include:
+
+- 🔥 **Leading Visual Capability.**
+  MiniCPM-o 2.6 achieves an average score of 70.2 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-202405, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding. It also **outperforms GPT-4V and Claude 3.5 Sonnet** in multi-image and video understanding, and shows promising in-context learning capability.
+
+- 🎙 **State-of-the-art Speech Capability.** MiniCPM-o 2.6 supports **bilingual real-time speech conversation with configurable voices** in English and Chinese. It **outperforms GPT-4o-realtime on audio understanding tasks** such as ASR and STT translation, and shows **state-of-the-art performance on speech conversation in both semantic and acoustic evaluations in the open-source community**. It also allows for fun features such as emotion/speed/style control, end-to-end voice cloning, role play, etc.
+
+- 🎬 **Strong Multimodal Live Streaming Capability.** As a new feature, MiniCPM-o 2.6 can **accept continuous video and audio streams independent of user queries, and support real-time speech interaction**. 
It **outperforms GPT-4o-202408 and Claude 3.5 Sonnet and shows state-of-the-art performance in the open-source community on StreamingBench**, a comprehensive benchmark for real-time video understanding, omni-source (video & audio) understanding, and multimodal contextual understanding. + +- 💪 **Strong OCR Capability and Others.** +Advancing popular visual capabilities from MiniCPM-V series, MiniCPM-o 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench for models under 25B, surpassing proprietary models such as GPT-4o-202405**. + Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o and Claude 3.5 Sonnet on MMHal-Bench, and supports **multilingual capabilities** on more than 30 languages. + + +- 🚀 **Superior Efficiency.** + In addition to its friendly size, MiniCPM-o 2.6 also shows **state-of-the-art token density** (i.e., the number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-o 2.6 can efficiently support **multimodal live streaming** on end-side devices such as iPads. 
+ +- 💫 **Easy Usage.** +MiniCPM-o 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-o-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-o-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) quick [local WebUI demo](#chat-with-our-demo-on-gradio), and (6) online web demo on [server](https://minicpm-omni-webdemo-us.modelbest.cn/). + +**Model Architecture.** + +- **End-to-end Omni-modal Architecture.** Different modality encoders/decoders are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge. The model is trained in a fully end-to-end manner with only CE loss. +- **Omni-modal Live Streaming Mechanism.** (1) We change the offline modality encoder/decoders into online ones for **streaming inputs/outputs.** (2) We devise a **time-division multiplexing (TDM) mechanism** for omni-modality streaming processing in the LLM backbone. It divides parallel omni-modality streams into sequential info within small periodic time slices. +- **Configurable Speech Modeling Design.** We devise a multimodal system prompt, including traditional text system prompt, and **a new audio system prompt to determine the assistant voice**. This enables flexible voice configurations in inference time, and also facilitates end-to-end voice cloning and description-based voice creation. + +
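The time-division multiplexing (TDM) mechanism above can be sketched in a few lines. This is an illustrative reconstruction only — the function name, slice length, and stream format are assumptions, not the model's actual implementation:

```python
# Illustrative sketch of time-division multiplexing (TDM) for omni-modal
# streaming: parallel per-modality streams are cut into small periodic time
# slices and re-serialized into one sequence for the LLM backbone.
# All names and the 1-second slice length are assumptions for illustration.

def tdm_interleave(streams, slice_len=1.0, horizon=3.0):
    """streams: dict mapping modality name -> list of (timestamp, chunk)."""
    sequence = []
    t = 0.0
    while t < horizon:
        for modality, chunks in streams.items():
            # Collect every chunk of this modality falling in [t, t + slice_len).
            in_slice = [c for (ts, c) in chunks if t <= ts < t + slice_len]
            if in_slice:
                sequence.append((modality, in_slice))
        t += slice_len
    return sequence

streams = {
    "video": [(0.0, "frame0"), (0.5, "frame1"), (1.2, "frame2"), (2.1, "frame3")],
    "audio": [(0.3, "a0"), (0.9, "a1"), (1.6, "a2"), (2.4, "a3")],
}
```

Within each slice, chunks from all modalities are emitted together before the next slice begins, which is how parallel streams become a single time-ordered sequence for the LLM backbone.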

+ +
+ + +### Evaluation + +
+ +
+ +
+Click to view visual understanding results. + +**Image Understanding** + +
+
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeToken Density+OpenCompassOCRBenchMathVista miniChartQAMMVetMMStarMMEMMB1.1 testAI2DMMMU valHallusionBenchTextVQA valDocVQA testMathVerse miniMathVisionMMHal Score
Proprietary
GPT-4o-20240513-108869.973661.385.769.163.92328.782.284.669.255.0-92.850.230.43.6
Claude3.5-Sonnet-75067.978861.690.866.062.21920.078.580.265.949.9-95.2--3.4
Gemini 1.5 Pro--64.475457.781.364.059.12110.673.979.160.645.673.586.5-19.2-
GPT-4o-mini-20240718-108864.178552.4-66.954.82003.476.077.860.046.1----3.3
Open Source
Cambrian-34B34B182058.359150.375.653.254.22049.977.879.550.441.676.775.5---
GLM-4V-9B13B78459.177651.1-58.054.82018.867.971.246.945.0-----
Pixtral-12B12B25661.068556.981.858.554.5-72.779.051.147.075.790.7---
VITA-1.58B78463.374166.2-52.760.22328.176.879.252.644.6-----
DeepSeek-VL2-27B (4B)27B67266.480963.986.060.061.92253.081.283.854.045.384.293.3--3.0
Qwen2-VL-7B8B78467.186658.283.062.060.72326.081.883.054.150.684.394.531.916.33.2
LLaVA-OneVision-72B72B18268.174167.583.760.665.82261.085.085.656.849.080.591.339.1-3.5
InternVL2.5-8B8B70668.382264.484.862.862.82344.083.684.556.050.179.193.039.519.73.4
MiniCPM-V 2.68B282265.2852*60.679.460.057.52348.4*78.082.149.8*48.1*80.190.825.718.33.6
MiniCPM-o 2.68B282270.2897*71.9*86.9*67.564.02372.0*80.585.850.4*51.982.093.541.4*23.1*3.8
+
+* We evaluate this benchmark using chain-of-thought prompting. Specifically, for MME, we used this technique only for the Cognition set. + + ++ Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens. + +Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation. + + +**Multi-image and Video Understanding** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeBLINK valMantis EvalMIRBVideo-MME (wo / w subs)
Proprietary
GPT-4o-20240513-68.0--71.9/77.2
GPT4V-54.662.753.159.9/63.3
Open-source
VITA-1.58B45.0--56.1/58.7
LLaVA-NeXT-Interleave 14B14B52.666.430.2-
LLaVA-OneVision-72B72B55.477.6-66.2/69.5
MANTIS 8B8B49.159.534.8-
Qwen2-VL-7B8B53.269.6*67.6*63.3/69.0
InternVL2.5-8B8B54.867.752.564.2/66.9
MiniCPM-V 2.68B53.069.153.860.9/63.6
MiniCPM-o 2.68B56.771.958.663.9/67.9
+ +
+* We evaluate officially released checkpoints by ourselves. + + + + +
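The Token Density figures in the image-understanding table above follow directly from the definition given in its footnote (# pixels at maximum resolution / # visual tokens). A quick check against the numbers this document quotes for MiniCPM-o 2.6 (a 1344x1344, ~1.8M-pixel image encoded into 640 visual tokens):

```python
# Token Density = (# pixels at maximum resolution) / (# visual tokens).
# Both inputs below are numbers quoted in this document for MiniCPM-o 2.6.
max_pixels = 1344 * 1344           # 1,806,336 pixels (~1.8M)
visual_tokens = 640
token_density = max_pixels / visual_tokens
print(int(token_density))          # matches the 2822 reported in the table
```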
+Click to view audio understanding and speech conversation results. + +**Audio Understanding** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TaskSizeASR (zh)ASR (en)ASTEmotion
MetricCER↓WER↓BLEU↑ACC↑
DatasetAISHELL-1Fleurs zhWenetSpeech test-netLibriSpeech test-cleanGigaSpeechTED-LIUMCoVoST en2zhCoVoST zh2enMELD emotion
Proprietary
GPT-4o-Realtime-7.3*5.4*28.9*2.6*12.9*4.8*37.1*15.7*33.2*
Gemini 1.5 Pro-4.5*5.9*14.3*2.9*10.6*3.0*47.3*22.6*48.4*
Open-Source
Qwen2-Audio-7B8B-7.5-1.6--45.224.455.3
Qwen2-Audio-7B-Instruct8B2.6*6.9*10.3*3.1*9.7*5.9*39.5*22.9*17.4*
VITA-1.58B2.16-8.43.4-----
GLM-4-Voice-Base9B2.5--2.8----
MiniCPM-o 2.68B1.64.46.91.78.73.048.227.252.4
+
+* We evaluate officially released checkpoints by ourselves.

+ +**Speech Generation** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TaskSizeSpeechQA
MetricACC↑G-Eval (10 point)↑Semantic ELO score↑Acoustic ELO score↑Overall ELO score↑UTMOS↑ASR-WER↓
DatasetSpeech Llama Q.Speech Web Q.Speech Trivia QASpeech AlpacaEvalAudioArena
Proprietary
GPT-4o-Realtime71.751.669.77.41157120312004.22.3
Open-Source
GLM-4-Voice9B50.032.036.45.1999114710354.111.7
Llama-Omni8B45.322.910.73.99608788973.224.3
VITA-1.58B46.728.123.32.0-----
Moshi7B43.723.816.72.48718088752.88.2
Mini-Omni1B22.012.86.92.59268038653.410.0
MiniCPM-o 2.68B61.040.040.25.11088116311314.29.8
+
+All results are from AudioEvals, and the evaluation methods along with further details can be found in AudioEvals.
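As a reading aid for the ELO columns above: under the standard Elo model (a logistic curve with base 10 and scale 400 — an assumption here, since the table does not specify AudioArena's exact rating scheme), a rating gap translates into an expected head-to-head win rate:

```python
# Expected head-to-head win rate implied by an Elo-style rating gap.
# Standard Elo formula: P(A beats B) = 1 / (1 + 10 ** ((R_B - R_A) / 400)).
# Whether AudioArena uses exactly this scale is an assumption.

def elo_win_prob(rating_a, rating_b):
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

# Overall ELO scores from the table above:
p = elo_win_prob(1131, 1035)  # MiniCPM-o 2.6 vs GLM-4-Voice
```

By this reading, MiniCPM-o 2.6's 96-point Overall ELO lead over GLM-4-Voice corresponds to roughly a 63% expected win rate.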

+ +**End-to-end Voice Cloning** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TaskVoice cloning
MetricSIMO↑SIMO↑
DatasetSeed-TTS test-zhSeed-TTS test-en
F5-TTS7667
CosyVoice7564
FireRedTTS6346
MiniCPM-o 2.65747
+
+ +
+ +
+Click to view multimodal live streaming results. + +**Multimodal Live Streaming**: results on StreamingBench + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeReal-Time Video UnderstandingOmni-Source UnderstandingContextual UnderstandingOverall
Proprietary
Gemini 1.5 Pro-77.467.851.170.3
GPT-4o-202408-74.551.048.064.1
Claude-3.5-Sonnet-74.041.437.859.7
Open-source
VILA-1.58B61.537.526.749.5
LongVA7B63.135.930.250.7
LLaVA-Next-Video-34B34B69.841.734.356.7
Qwen2-VL-7B8B71.240.733.157.0
InternVL2-8B8B70.142.734.157.0
VITA-1.58B70.940.835.857.4
LLaVA-OneVision-7B8B74.340.831.058.4
InternLM-XC2.5-OL-7B8B75.446.233.660.8
MiniCPM-V 2.68B72.440.233.457.7
MiniCPM-o 2.68B79.953.438.566.0
+ +
+ + +### Examples + +We deploy MiniCPM-o 2.6 on end devices. The demo video is the raw-speed recording on an iPad Pro and a Web demo. + +
+ +
+ +
+ +
+ math + diagram + bike +
+ diff --git a/docs/minicpm_o2dot6_zh.md b/docs/minicpm_o2dot6_zh.md new file mode 100644 index 0000000..b86f557 --- /dev/null +++ b/docs/minicpm_o2dot6_zh.md @@ -0,0 +1,927 @@ +## MiniCPM-o 2.6 + +> Archieve at: 2026-02-02 + +MiniCPM-o 2.6 是 MiniCPM-o 系列的最新、性能最佳模型。该模型基于 SigLip-400M、Whisper-medium-300M、ChatTTS-200M 和 Qwen2.5-7B 构建,共 8B 参数,通过端到端方式训练和推理。相比 MiniCPM-V 2.6,该模型在性能上有了显著提升,并支持了实时语音对话和多模态流式交互的新功能。MiniCPM-o 2.6 的主要特性包括: + + +- 🔥 **领先的视觉能力。** +MiniCPM-o 2.6 在 OpenCompass 榜单上(综合 8 个主流多模态评测基准)平均得分 70.2,**以 8B 量级的大小在单图理解方面超越了 GPT-4o-202405、Gemini 1.5 Pro 和 Claude 3.5 Sonnet 等主流商用闭源多模态大模型**。此外,它的多图和视频理解表现也**优于 GPT-4V 和 Claude 3.5 Sonnet**,并展现出了优秀的上下文学习能力。 + +- 🎙 **出色的语音能力。** +MiniCPM-o 2.6 **支持可配置声音的中英双语实时对话**。MiniCPM-o 2.6 在语音理解任务(如 ASR 和 STT 等)**优于 GPT-4o-realtime**,并在语音对话的语义和声学评估中展现了**开源模型中最高的语音生成性能**。它还支持情绪/语速/风格控制、语音克隆、角色扮演等进阶能力。 + +- 🎬 **强大的多模态流式交互能力。** +作为一项新功能,MiniCPM-o 2.6 能够**接受连续的视频和音频流,并和用户进行实时语音交互**。在针对实时视频理解、全模态视音频理解、多模态上下文理解的综合评测基准 StreamingBench 中,MiniCPM-o 2.6 取得开源社区最佳水平,并**超过了 GPT-4o-202408 和 Claude 3.5 Sonnet**。 + +- 💪 **强大的 OCR 能力及其他功能。** +MiniCPM-o 2.6 进一步优化了 MiniCPM-V 2.6 的众多视觉理解能力,其可以处理任意长宽比的图像,像素数可达 180 万(如 1344x1344)。在 OCRBench 上取得**25B 以下最佳水平,超过 GPT-4o-202405 等商用闭源模型**。基于最新的 [RLHF-V](https://rlhf-v.github.io/)、[RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) 和 [VisCPM](https://github.com/OpenBMB/VisCPM) 技术,其具备了**可信的多模态行为**,在 MMHal-Bench 上超过了 GPT-4o 和 Claude 3.5,并支持英语、中文、德语、法语、意大利语、韩语等**30多种语言**。 + +- 🚀 **卓越的效率。** +除了对个人用户友好的模型大小,MiniCPM-o 2.6 还表现出**最先进的视觉 token 密度**(即每个视觉 token 编码的像素数量)。它**仅需 640 个 token 即可处理 180 万像素图像,比大多数模型少 75%**。这一特性优化了模型的推理速度、首 token 延迟、内存占用和功耗。因此,MiniCPM-o 2.6 可以支持 iPad 等终端设备上的高效**多模态实时流式交互**。 + + +- 💫 **易于使用。** +MiniCPM-o 2.6 可以通过多种方式轻松使用:(1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-omni/examples/llava/README-minicpmo2.6.md) 支持在本地设备上进行高效的 CPU 推理,(2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) 和 [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) 格式的量化模型,有 16 种尺寸,(3) 
[vLLM](#基于-llamacppollamavllm-的高效推理) 支持高吞吐量和内存高效的推理,(4) 通过[LLaMA-Factory](./docs/llamafactory_train_and_infer.md)框架针对新领域和任务进行微调,(5) 使用 [Gradio](#本地-webui-demo-) 快速设置本地 WebUI 演示,(6) 部署于服务器的在线 [demo](https://minicpm-omni-webdemo-us.modelbest.cn/)。 + +**模型架构。** + +- **端到端全模态架构。** 通过**端到端**的方式连接和训练不同模态的编/解码模块以充分利用丰富的多模态知识。模型完全使用 CE 损失端到端训练。 +- **全模态流式机制。** (1) 我们将不同模态的离线编/解码器改造为适用于**流式输入/输出**的在线模块。 (2) 我们针对大语言模型基座设计了**时分复用的全模态流式信息处理机制**,将平行的不同模态的信息流拆分重组为周期性时间片序列。 +- **可配置的声音方案。** 我们设计了新的多模态系统提示,包含传统文本系统提示词,和**用于指定模型声音的语音系统提示词**。模型可在推理时灵活地通过文字或语音样例控制声音风格,并支持端到端声音克隆和音色创建等高级能力。 + +
+ +
+ +
+ + + +### 性能评估 + +
+ +
+ +
+点击查看视觉理解能力详细评测结果。 + +**图像理解能力** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeToken Density+OpenCompassOCRBenchMathVista miniChartQAMMVetMMStarMMEMMB1.1 testAI2DMMMU valHallusionBenchTextVQA valDocVQA testMathVerse miniMathVisionMMHal Score
Proprietary
GPT-4o-20240513-108869.973661.385.769.163.92328.782.284.669.255.0-92.850.230.43.6
Claude3.5-Sonnet-75067.978861.690.866.062.21920.078.580.265.949.9-95.2--3.4
Gemini 1.5 Pro--64.475457.781.364.059.12110.673.979.160.645.673.586.5-19.2-
GPT-4o-mini-20240718-108864.178552.4-66.954.82003.476.077.860.046.1----3.3
Open Source
Cambrian-34B34B182058.359150.375.653.254.22049.977.879.550.441.676.775.5---
GLM-4V-9B13B78459.177651.1-58.054.82018.867.971.246.945.0-----
Pixtral-12B12B25661.068556.981.858.554.5-72.779.051.147.075.790.7---
DeepSeek-VL2-27B (4B)27B67266.480963.986.060.061.92253.081.283.854.045.384.293.3--3.0
Qwen2-VL-7B8B78467.186658.283.062.060.72326.081.883.054.150.684.394.531.916.33.2
LLaVA-OneVision-72B72B18268.174167.583.760.665.82261.085.085.656.849.080.591.339.1-3.5
InternVL2.5-8B8B70668.382264.484.862.862.82344.083.684.556.050.179.193.039.519.73.4
MiniCPM-V 2.68B282265.2852*60.679.460.057.52348.4*78.082.149.8*48.1*80.190.825.718.33.6
MiniCPM-o 2.68B282270.2897*71.9*86.9*67.564.02372.0*80.585.850.4*51.982.093.541.4*23.1*3.8
+
+* 我们使用思维链提示词来评估这些基准,对于 MME 我们只在 Cognition 任务上使用了思维链。 ++ Token Density:每个视觉 token 在最大分辨率下编码的像素数,即最大分辨率下的像素数 / 视觉 token 数。 + +注意:闭源模型的 Token Density 由 API 收费方式估算得到。 + +**多图和视频理解能力** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeBLINK valMantis EvalMIRBVideo-MME (wo / w subs)
Proprietary
GPT-4o-20240513-68--71.9/77.2
GPT4V-54.662.753.159.9/63.3
Open-source
LLaVA-NeXT-Interleave 14B14B52.666.430.2-
LLaVA-OneVision-72B72B55.477.6-66.2/69.5
MANTIS 8B8B49.159.534.8-
Qwen2-VL-7B8B53.269.6*67.6*63.3/69.0
InternVL2.5-8B8B54.867.752.564.2/66.9
MiniCPM-V 2.68B5369.153.860.9/63.6
MiniCPM-o 2.68B56.771.958.663.9/67.9
+ +
+* 正式开源模型权重的评测结果。 + +
+ + +
+点击查看语音理解和生成能力的详细评测结果。 + +**语音理解能力** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TaskSizeASR (zh)ASR (en)ASTEmotion
MetricCER↓WER↓BLEU↑ACC↑
DatasetAISHELL-1Fleurs zhWenetSpeech test-netLibriSpeech test-cleanGigaSpeechTED-LIUMCoVoST en2zhCoVoST zh2enMELD emotion
Proprietary
GPT-4o-Realtime-7.3*5.4*28.9*2.6*12.9*4.8*37.1*15.7*33.2*
Gemini 1.5 Pro-4.5*5.9*14.3*2.9*10.6*3.0*47.3*22.6*48.4*
Open-Source
Qwen2-Audio-7B8B-7.5-1.6--45.224.455.3
Qwen2-Audio-7B-Instruct8B2.6*6.9*10.3*3.1*9.7*5.9*39.5*22.9*17.4*
GLM-4-Voice-Base9B2.5--2.8----
MiniCPM-o 2.68B1.64.46.91.78.73.048.227.252.4
+
+* 正式开源模型权重的评测结果。

+ +**语音生成能力。** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TaskSizeSpeechQA
MetricACC↑G-Eval (10 point)↑Semantic ELO score↑Acoustic ELO score↑Overall ELO score↑UTMOS↑ASR-WER↓
DatasetSpeech Llama Q.Speech Web Q.Speech Trivia QASpeech AlpacaEvalAudioArena
Proprietary
GPT-4o-Realtime71.751.669.77.41157120312004.22.3
Open-Source
GLM-4-Voice9B50.032.036.45.1999114710354.111.7
Llama-Omni8B45.322.910.73.99608788973.224.3
VITA-1.58B46.728.123.32.0-----
Moshi7B43.723.816.72.48718088752.88.2
Mini-Omni1B22.012.86.92.59268038653.410.0
MiniCPM-o 2.68B61.040.040.25.11088116311314.29.8
+
+所有的结果都基于 AudioEvals

+ +**端到端声音克隆能力。** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
TaskTTS
MetricSIMO↑SIMO↑
DatasetSeed-TTS test-zhSeed-TTS test-en
F5-TTS7667
CosyVoice7564
FireRedTTS6346
MiniCPM-o 2.65747
+
+ +
+ +
+点击查看多模态流式交互能力评测详细结果。 + +**多模态流式交互能力**: StreamingBench 分数 + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeReal-Time Video UnderstandingOmni-Source UnderstandingContextual UnderstandingOverall
Proprietary
Gemini 1.5 Pro-77.467.851.170.3
GPT-4o-202408-74.551.048.064.1
Claude-3.5-Sonnet-74.041.437.859.7
Open-source
VILA-1.58B61.537.526.749.5
LongVA7B63.135.930.250.7
LLaVA-Next-Video-34B34B69.841.734.356.7
Qwen2-VL-7B8B71.240.733.157.0
InternVL2-8B8B70.142.734.157.0
VITA-1.58B70.940.835.857.4
LLaVA-OneVision-7B8B74.340.831.058.4
InternLM-XC2.5-OL-7B8B75.446.233.660.8
MiniCPM-V 2.68B72.440.233.457.7
MiniCPM-o 2.68B79.953.438.566.0
+ +
+ + +### 典型示例 + +以下为 MiniCPM-o 2.6 的 iPad Pro 实机演示和 web demo 演示样例: + + +
+ +
+
+ + + +
+ math + diagram + bike +
+ + diff --git a/docs/minicpm_v4dot5_en.md b/docs/minicpm_v4dot5_en.md new file mode 100644 index 0000000..851eb82 --- /dev/null +++ b/docs/minicpm_v4dot5_en.md @@ -0,0 +1,158 @@ +## MiniCPM-V 4.5 + +> Archieve at: 2026-02-03 + +**MiniCPM-V 4.5** is the latest and most capable model in the MiniCPM-V series. The model is built on Qwen3-8B and SigLIP2-400M with a total of 8B parameters. It exhibits a significant performance improvement over previous MiniCPM-V and MiniCPM-o models, and introduces new useful features. Notable features of MiniCPM-V 4.5 include: + +- 🔥 **State-of-the-art Vision-Language Capability.** + MiniCPM-V 4.5 achieves an average score of 77.0 on OpenCompass, a comprehensive evaluation of 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o-latest, Gemini-2.0 Pro, and strong open-source models like Qwen2.5-VL 72B** for vision-language capabilities, making it the most performant MLLM under 30B parameters. + +- 🎬 **Efficient High-FPS and Long Video Understanding.** Powered by a new unified 3D-Resampler over images and videos, MiniCPM-V 4.5 can now achieve 96x compression rate for video tokens, where 6 448x448 video frames can be jointly compressed into 64 video tokens (normally 1,536 tokens for most MLLMs). This means that the model can perceive significantly more video frames without increasing the LLM inference cost. This brings state-of-the-art high-FPS (up to 10FPS) video understanding and long video understanding capabilities on Video-MME, LVBench, MLVU, MotionBench, FavorBench, etc., efficiently. + +- ⚙️ **Controllable Hybrid Fast/Deep Thinking.** MiniCPM-V 4.5 supports both fast thinking for efficient frequent usage with competitive performance, and deep thinking for more complex problem solving. To cover efficiency and performance trade-offs in different user scenarios, this fast/deep thinking mode can be switched in a highly controlled fashion. 
+ +- 💪 **Strong OCR, Document Parsing and Others.** +Based on [LLaVA-UHD](https://arxiv.org/pdf/2403.11703) architecture, MiniCPM-V 4.5 can process high-resolution images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), using 4x fewer visual tokens than most MLLMs. The model achieves **leading performance on OCRBench, surpassing proprietary models such as GPT-4o-latest and Gemini 2.5**. It also achieves state-of-the-art performance for PDF document parsing capability on OmniDocBench among general MLLMs. Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, outperforming GPT-4o-latest on MMHal-Bench, and supports **multilingual capabilities** in more than 30 languages. + + +- 💫 **Easy Usage.** +MiniCPM-V 4.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/tc-mb/llama.cpp/blob/Support-MiniCPM-V-4.5/docs/multimodal/minicpmv4.5.md) and [ollama](https://github.com/tc-mb/ollama/tree/MIniCPM-V) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4), [GGUF](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf) and [AWQ](https://github.com/tc-mb/AutoAWQ) format quantized models in 16 sizes, (3) [SGLang](https://github.com/tc-mb/sglang/tree/main) and [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks with [Transformers](https://github.com/tc-mb/transformers/tree/main) and [LLaMA-Factory](./docs/llamafactory_train_and_infer.md), (5) quick [local WebUI demo](#chat-with-our-demo-on-gradio), (6) optimized [local iOS app](https://github.com/tc-mb/MiniCPM-o-demo-iOS) on iPhone and iPad, and (7) online web demo on [server](http://101.126.42.235:30910/). See our [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook) for full usage! + + +### Key Techniques + + +
+ +
+ +- **Architechture: Unified 3D-Resampler for High-density Video Compression.** MiniCPM-V 4.5 introduces a 3D-Resampler that overcomes the performance-efficiency trade-off in video understanding. By grouping and jointly compressing up to 6 consecutive video frames into just 64 tokens (the same token count used for a single image in MiniCPM-V series), MiniCPM-V 4.5 achieves a 96× compression rate for video tokens. This allows the model to process more video frames without additional LLM computational cost, enabling high-FPS video and long video understanding. The architecture supports unified encoding for images, multi-image inputs, and videos, ensuring seamless capability and knowledge transfer. + +- **Pre-training: Unified Learning for OCR and Knowledge from Documents.** Existing MLLMs learn OCR capability and knowledge from documents in isolated training approaches. We observe that the essential difference between these two training approaches is the visibility of the text in images. By dynamically corrupting text regions in documents with varying noise levels and asking the model to reconstruct the text, the model learns to adaptively and properly switch between accurate text recognition (when text is visible) and multimodal context-based knowledge reasoning (when text is heavily obscured). This eliminates reliance on error-prone document parsers in knowledge learning from documents, and prevents hallucinations from over-augmented OCR data, resulting in top-tier OCR and multimodal knowledge performance with minimal engineering overhead. + +- **Post-training: Hybrid Fast/Deep Thinking with Multimodal RL.** MiniCPM-V 4.5 offers a balanced reasoning experience through two switchable modes: fast thinking for efficient daily use and deep thinking for complex tasks. Using a new hybrid reinforcement learning method, the model jointly optimizes both modes, significantly enhancing fast-mode performance without compromising deep-mode capability. 
Incorporated with [RLPR](https://github.com/OpenBMB/RLPR) and [RLAIF-V](https://github.com/RLHF-V/RLAIF-V), it generalizes robust reasoning skills from broad multimodal data while effectively reducing hallucinations. + +### Evaluation + +
+ +
+
+ +
+ + +### Inference Efficiency + + +**OpenCompass** +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeAvg Score ↑Total Inference Time ↓
GLM-4.1V-9B-Thinking10.3B76.617.5h
MiMo-VL-7B-RL8.3B76.411h
MiniCPM-V 4.58.7B77.07.5h
+
+ +**Video-MME** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeAvg Score ↑Total Inference Time ↓GPU Mem ↓
Qwen2.5-VL-7B-Instruct8.3B71.63h60G
GLM-4.1V-9B-Thinking10.3B73.62.63h32G
MiniCPM-V 4.58.7B73.50.26h28G
+
+ +Both Video-MME and OpenCompass were evaluated using 8×A100 GPUs for inference. The reported inference time of Video-MME includes full model-side computation, and excludes the external cost of video frame extraction (dependent on specific frame extraction tools) for fair comparison. + + +### Examples + +
+ +
+ +
+ en_case1 + en_case2 + en_case3 +
+ +
+Click to view more cases. +
+ zh_extra +
+ +
+ +We deploy MiniCPM-V 4.5 on iPad M4 with [iOS demo](https://github.com/tc-mb/MiniCPM-o-demo-iOS). The demo video is the raw screen recording without edition. + + +

+ +      + +

+

+ +      + +

+
+ diff --git a/docs/minicpm_v4dot5_zh.md b/docs/minicpm_v4dot5_zh.md new file mode 100644 index 0000000..94c1cee --- /dev/null +++ b/docs/minicpm_v4dot5_zh.md @@ -0,0 +1,156 @@ +## MiniCPM-V 4.5 + +> Archieve at: 2026-02-03 + +**MiniCPM-V 4.5** 是 MiniCPM-V 系列中最新、最强大的模型。该模型基于 Qwen3-8B 与 SigLIP2-400M 构建,总参数量为 8B。其在性能上较前代 MiniCPM-V 与 MiniCPM-o 有显著提升,并引入了一系列全新的实用特性。其主要亮点包括: + + +- 🔥 **领先的视觉理解能力** + MiniCPM-V 4.5 在 OpenCompass 综合评测(涵盖 8 个主流评测基准)中取得了 77.0 的高分。**在仅 8B 参数的情况下超越了广泛使用的闭源模型(如 GPT-4o-latest、Gemini-2.0 Pro)以及强大的开源模型(如 Qwen2.5-VL 72B)**,成为 30B 参数规模以下最强的多模态大模型。 + +- 🎬 **高效的高帧率与长视频理解** + 借助全新的图像-视频统一 3D-Resampler,MiniCPM-V 4.5 能够实现 96 倍视频 token 压缩率,即将 6 帧 448x448 视频帧联合压缩为 64 个 token(大多数多模态大模型需约 1536 个 token)。这意味着模型在语言模型推理成本不增加的情况下,可以感知显著更多的视频帧,从而实现业界领先的 高帧率(最高 10FPS)视频理解与长视频理解,并在 Video-MME、LVBench、MLVU、MotionBench、FavorBench 等基准上高效率地展现出色性能。 + +- ⚙️ **可控的快思考 / 深思考模式** + MiniCPM-V 4.5 同时支持 快思考(用于高频高效推理,性能具竞争力)与 深思考(用于复杂问题求解)。用户可根据不同场景对效率与性能的权衡,自由切换两种模式,实现高度可控的推理过程。 + +- 💪 **优秀的 OCR、文档解析与多语言能力** + 基于 [LLaVA-UHD](https://arxiv.org/pdf/2403.11703) 架构,MiniCPM-V 4.5 能处理任意长宽比、最高达 180 万像素(如 1344x1344) 的高分辨率图像,同时使用的视觉 token 数仅为多数 MLLM 的 1/4。其在 OCRBench 上取得超越 GPT-4o-latest 与 Gemini 2.5 等闭源模型的性能,并在 OmniDocBench 上展现了业界顶尖的 PDF 文档解析能力。借助最新的 [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) 和 [VisCPM](https://github.com/OpenBMB/VisCPM) 技术,模型在可靠性上表现优异,在 MMHal-Bench 上超越 GPT-4o-latest,并支持 30+ 种语言的多语言能力。 + +- 💫 **便捷易用的部署方式** + MiniCPM-V 4.5 提供丰富灵活的使用方式:(1) [llama.cpp](https://github.com/tc-mb/llama.cpp/blob/master/docs/multimodal/minicpmo4.5.md) 与 [ollama](https://github.com/tc-mb/ollama/tree/MIniCPM-V) 支持本地 CPU 高效推理;(2) 提供 [int4](https://huggingface.co/openbmb/MiniCPM-V-4_5-int4)、[GGUF](https://huggingface.co/openbmb/MiniCPM-V-4_5-gguf)、[AWQ](https://github.com/tc-mb/AutoAWQ) 等 16 种规格的量化模型;(3)兼容 SGLang 与 [vLLM](#efficient-inference-with-llamacpp-ollama-vllm) (4) 借助 [Transformers](https://github.com/tc-mb/transformers/tree/main) 与 [LLaMA-Factory](./docs/llamafactory_train_and_infer.md) 
在新领域与任务上进行微调;(5) 快速启动本地 [WebUI demo](#chat-with-our-demo-on-gradio);(6) 优化适配的 [iOS 本地应用](https://github.com/tc-mb/MiniCPM-o-demo-iOS),可在 iPhone 与 iPad 上高效运行;(7) 在线 [Web demo](http://101.126.42.235:30910/) 体验。更多使用方式请见 [Cookbook](https://github.com/OpenSQZ/MiniCPM-V-CookBook)。 + +### 技术亮点 + +- **架构:图像-视频统一的高密度视觉压缩 3D-Resampler**。 MiniCPM-V 4.5 在架构上引入了 3D-Resampler,成功突破了视频理解任务中性能与效率难以兼得的瓶颈。该方法能够将多达 6 帧连续视频帧压缩为仅 64 个 token(与 MiniCPM-V 系列中单张图像所用的 token 数相同),从而实现 96× 的视频 token 压缩率。这使得模型在语言模型计算成本不增加的情况下,可以处理更多的视频帧,从而实现高帧率视频理解和长视频理解。该架构统一支持单图、多图和视频的编码处理,确保了能力与知识的无缝迁移。 + +- **学习机制:OCR与文档知识的统一学习**。现有多模态大模型一般在不同训练阶段分别单独训练 OCR 能力与文档知识。我们发现这两个训练过程的本质差异在于图像中文本的可见性。通过动态对文档文本区域施加不同强度的噪声干扰,并要求模型重建文本,使其学会自适应地在准确文本识别(当文本清晰时)与基于多模态上下文的知识推理(当文本严重遮挡时)之间切换。这种方法使得 MiniCPM-V 在文档知识学习中摆脱了对高错误率的文档解析器的依赖,同时避免了过度增强的 OCR 数据产生的幻觉问题,以最小工程开销实现了顶尖的 OCR 与多模态知识处理性能。 + +- **后训练优化:基于多模态强化学习的混合快思考/深度思考模式**。 MiniCPM-V 4.5 通过两种可切换推理模式提供均衡的体验:面向高效日常应用的快速思考模式,以及处理复杂任务的深度思考模式。采用新颖的混合强化学习方法,模型可联合优化两种模式,在保持深度模式能力的同时显著提升快速模式性能。结合 [RLPR](https://github.com/OpenBMB/RLPR) 和 [RLAIF-V](https://github.com/RLHF-V/RLAIF-V) 技术,该模型可以从海量多模态数据中泛化出强大的推理能力,并有效减少幻觉现象。 + +
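按文中给出的数字(6 帧 448x448 的视频帧联合压缩为 64 个 token,而多数多模态大模型约需 1536 个 token)可以做一个简单核算。以下仅为示意,变量名为本文假设,并非模型实际实现:

```python
# 按文中数字核算 3D-Resampler 的压缩效果(仅为示意):
# 6 帧 448x448 -> 64 个 token;多数 MLLM 对同样输入约需 1536 个 token。
frames, side = 6, 448
tokens_minicpm, tokens_typical = 64, 1536

pixels = frames * side * side                # 1,204,224 像素
pixels_per_token = pixels // tokens_minicpm  # 每个 token 编码的像素数
token_saving = tokens_typical // tokens_minicpm

print(pixels_per_token, token_saving)        # 18816 像素/token;token 数减少为 1/24
```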
+ +
+ +### 性能评估 + +
+ +
+
+ +
+ + +### 推理效率 + + +**OpenCompass** +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeAvg Score ↑Total Inference Time ↓
GLM-4.1V-9B-Thinking10.3B76.617.5h
MiMo-VL-7B-RL8.3B76.411h
MiniCPM-V 4.58.7B77.07.5h
+
+ +**Video-MME** + +
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
ModelSizeAvg Score ↑Total Inference Time ↓GPU Mem ↓
Qwen2.5-VL-7B-Instruct8.3B71.63h60G
GLM-4.1V-9B-Thinking10.3B73.62.63h32G
MiniCPM-V 4.58.7B73.50.26h28G
+ + +OpenCompass 和 Video-MME 均采用 A100*8卡 推理,其中 Video-MME 的推理时间未统计视频抽帧时间 + +### 典型示例 +
+ +
+ +
+ zh_case1 + zh_case2 +
+ +
+点击查看更多示例 +
+ en_extra + en_extra +
+
+ + +我们使用 [iOS demo](https://github.com/tc-mb/MiniCPM-o-demo-iOS) 将 MiniCPM-V 4.5 部署在 iPad M4 ,并录制以下演示录屏,视频未经任何编辑。 + + +

+ +      + +

+

+ +      + +

+
+