diff --git a/README.md b/README.md
index 5688603..7728c2c 100644
--- a/README.md
+++ b/README.md
@@ -2,7 +2,7 @@
-**A GPT-4V Level Multimodal LLM on Your Phone**
+**A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone**
[中文](./README_zh.md) |
English
@@ -11,17 +11,16 @@ Join our 💬 WeChat
- MiniCPM-Llama3-V 2.5 🤗 🤖 |
- MiniCPM-V 2.0 🤗 🤖 |
+ MiniCPM-V 2.6 🤗 🤖 | MiniCPM-Llama3-V 2.5 🤗 🤖 |
MiniCPM-Llama3-V 2.5 Technical Report
-**MiniCPM-V** is a series of end-side multimodal LLMs (MLLMs) designed for vision-language understanding. The models take image and text as inputs and provide high-quality text outputs. Since February 2024, we have released 4 versions of the model, aiming to achieve **strong performance and efficient deployment**. The most notable models in this series currently include:
+**MiniCPM-V** is a series of end-side multimodal LLMs (MLLMs) designed for vision-language understanding. The models take image, video and text as inputs and provide high-quality text outputs. Since February 2024, we have released 5 versions of the model, aiming to achieve **strong performance and efficient deployment**. The most notable models in this series currently include:
-- **MiniCPM-Llama3-V 2.5**: 🔥🔥🔥 The latest and most capable model in the MiniCPM-V series. With a total of 8B parameters, the model **surpasses proprietary models such as GPT-4V-1106, Gemini Pro, Qwen-VL-Max and Claude 3** in overall performance. Equipped with the enhanced OCR and instruction-following capability, the model can also support multimodal conversation for **over 30 languages** including English, Chinese, French, Spanish, German etc. With help of quantization, compilation optimizations, and several efficient inference techniques on CPUs and NPUs, MiniCPM-Llama3-V 2.5 can be **efficiently deployed on end-side devices**.
+- **MiniCPM-V 2.6**: 🔥🔥🔥 The latest and most capable model in the MiniCPM-V series. With a total of 8B parameters, the model **surpasses GPT-4V in single image, multi-image and video understanding**. It outperforms **GPT-4o mini, Gemini 1.5 Pro and Claude 3.5 Sonnet** in single image understanding, and advances MiniCPM-Llama3-V 2.5's features such as strong OCR capability, trustworthy behavior, multilingual support, and end-side deployment. Due to its superior token density, MiniCPM-V 2.6 can for the first time support real-time video understanding on end-side devices such as iPad.
- **MiniCPM-V 2.0**: The lightest model in the MiniCPM-V series. With 2B parameters, it surpasses larger models such as Yi-VL 34B, CogVLM-Chat 17B, and Qwen-VL-Chat 10B in overall performance. It can accept image inputs of any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), achieving performance comparable to Gemini Pro in scene-text understanding and matching GPT-4V in low hallucination rates.
@@ -29,6 +28,7 @@ Join our 💬 WeChat
## News
#### 📌 Pinned
+* [2024.08.06] 🔥🔥🔥 We open-source MiniCPM-V 2.6, which outperforms GPT-4V on single image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and can support real-time video understanding on iPad. Try it now!
* [2024.08.03] MiniCPM-Llama3-V 2.5 technical report is released! See [here](./docs/MiniCPM_Llama3_V_25_technical_report.pdf).
* [2024.07.19] MiniCPM-Llama3-V 2.5 supports vLLM now! See [here](#vllm).
* [2024.05.28] 🚀🚀🚀 MiniCPM-Llama3-V 2.5 now fully supports its feature in llama.cpp and ollama! Please pull the latest code **of our provided forks** ([llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md), [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5)). GGUF models in various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/tree/main). MiniCPM-Llama3-V 2.5 series is **not supported by the official repositories yet**, and we are working hard to merge PRs. Please stay tuned!
@@ -38,6 +38,9 @@ Join our 💬 WeChat
+
+Click to view more news.
+
* [2024.06.03] Now, you can run MiniCPM-Llama3-V 2.5 on multiple low VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across multiple GPUs. For more details, check this [link](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md).
* [2024.05.25] MiniCPM-Llama3-V 2.5 now supports streaming outputs and customized system prompts. Try it [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage)!
* [2024.05.24] We release the MiniCPM-Llama3-V 2.5 [gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](#inference-with-llamacpp) inference and provides a 6~8 token/s smooth decoding on mobile phones. Try it now!
@@ -50,11 +53,13 @@ Join our 💬 WeChat
* [2024.03.14] MiniCPM-V now supports [fine-tuning](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md) with the SWIFT framework. Thanks to [Jintao](https://github.com/Jintao-Huang) for the contribution!
* [2024.03.01] MiniCPM-V now can be deployed on Mac!
* [2024.02.01] We open-source MiniCPM-V and OmniLMM-12B, which support efficient end-side deployment and powerful multimodal capabilities respectively.
+
## Contents
+- [MiniCPM-V 2.6](#minicpm-v-26)
- [MiniCPM-Llama3-V 2.5](#minicpm-llama3-v-25)
- [MiniCPM-V 2.0](#minicpm-v-20)
- [Chat with Our Demo on Gradio](#chat-with-our-demo-on-gradio)
@@ -62,18 +67,789 @@ Join our 💬 WeChat
- [Inference](#inference)
- [Model Zoo](#model-zoo)
- [Multi-turn Conversation](#multi-turn-conversation)
+ - [Chat with multiple images](#chat-with-multiple-images)
+ - [In-context few-shot learning](#in-context-few-shot-learning)
+ - [Chat with video](#chat-with-video)
- [Inference on Multiple GPUs](#inference-on-multiple-gpus)
- [Inference on Mac](#inference-on-mac)
- [Deployment on Mobile Phone](#deployment-on-mobile-phone)
- [Inference with llama.cpp](#inference-with-llamacpp)
+ - [Inference with ollama](#inference-with-ollama)
- [Inference with vLLM](#inference-with-vllm)
- [Fine-tuning](#fine-tuning)
-- [TODO](#todo)
-- [🌟 Star History](#-star-history)
-- [Citation](#citation)
+- [FAQs](#faqs)
+
+
+## MiniCPM-V 2.6
+
+**MiniCPM-V 2.6** is the latest and most capable model in the MiniCPM-V series. The model is built on SigLip-400M and Qwen2-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-Llama3-V 2.5, and introduces new features for multi-image and video understanding. Notable features of MiniCPM-V 2.6 include:
+
+- 🔥 **Leading Performance.**
+ MiniCPM-V 2.6 achieves an average score of 65.2 on the latest version of OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o mini, GPT-4V, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding.
+
+- 🖼️ **Multi Image Understanding and In-context Learning.** MiniCPM-V 2.6 can also perform **conversation and reasoning over multiple images**. It achieves **state-of-the-art performance** on popular multi-image benchmarks such as Mantis-Eval, BLINK, Mathverse mv and Sciverse mv, and also shows promising in-context learning capability.
+
+- 🎬 **Video Understanding.** MiniCPM-V 2.6 can also **accept video inputs**, performing conversation and providing dense captions for spatial-temporal information. It outperforms **GPT-4V, Claude 3.5 Sonnet and LLaVA-NeXT-Video-34B** on Video-MME with/without subtitles.
+
+- 💪 **Strong OCR Capability and Others.**
+ MiniCPM-V 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V, and Gemini 1.5 Pro**.
+ Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, with significantly lower hallucination rates than GPT-4o and GPT-4V on Object HalBench, and supports **multilingual capabilities** in English, Chinese, German, French, Italian, Korean, etc.
+
+
+- 🚀 **Superior Efficiency.**
+ In addition to its friendly size, MiniCPM-V 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-V 2.6 can efficiently support **real-time video understanding** on end-side devices such as iPad.
+
+- 💫 **Easy Usage.**
+MiniCPM-V 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpmv-main/examples/llava/README-minicpmv2.6.md) and [ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#inference-with-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with [Gradio](#chat-with-our-demo-on-gradio), and (6) online web [demo](http://120.92.209.146:8887/).
+
+### Evaluation
+
+

+
+
+
+Click to view single image results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench, Object HalBench.
+
+
+
+
+
+| Model | Size | Token Density+ | OpenCompass | MME | MMVet | OCRBench | MMMU val | MathVista mini | MMB1.1 test | AI2D | TextVQA val | DocVQA test | HallusionBench | Object HalBench |
+|:------|:----:|:--------------:|:-----------:|:---:|:-----:|:--------:|:--------:|:--------------:|:-----------:|:----:|:-----------:|:-----------:|:--------------:|:---------------:|
+| **Proprietary** | | | | | | | | | | | | | | |
+| GPT-4o | - | 1088 | 69.9 | 2328.7 | 69.1 | 736 | 69.2 | 61.3 | 82.2 | 84.6 | - | 92.8 | 55.0 | 17.6 |
+| Claude 3.5 Sonnet | - | 750 | 67.9 | 1920.0 | 66.0 | 788 | 65.9 | 61.6 | 78.5 | 80.2 | - | 95.2 | 49.9 | 13.8 |
+| Gemini 1.5 Pro | - | - | 64.4 | 2110.6 | 64.0 | 754 | 60.6 | 57.7 | 73.9 | 79.1 | 73.5 | 86.5 | 45.6 | - |
+| GPT-4o mini | - | 1088 | 64.1 | 2003.4 | 66.9 | 785 | 60.0 | 52.4 | 76.0 | 77.8 | - | - | 46.1 | 12.4 |
+| GPT-4V | - | 1088 | 63.5 | 2070.2 | 67.5 | 656 | 61.7 | 54.7 | 79.8 | 78.6 | 78.0 | 87.2 | 43.9 | 14.2 |
+| Step-1V | - | - | 59.5 | 2206.4 | 63.3 | 625 | 49.9 | 44.8 | 78.0 | 79.2 | 71.6 | - | 48.4 | - |
+| Qwen-VL-Max | - | 784 | 58.3 | 2281.7 | 61.8 | 684 | 52.0 | 43.4 | 74.6 | 75.7 | 79.5 | 93.1 | 41.2 | 13.4 |
+| **Open-source** | | | | | | | | | | | | | | |
+| LLaVA-NeXT-Yi-34B | 34B | 157 | 55.0 | 2006.5 | 50.7 | 574 | 48.8 | 40.4 | 77.8 | 78.9 | 69.3 | - | 34.8 | 12.6 |
+| Mini-Gemini-HD-34B | 34B | 157 | - | 2141 | 59.3 | 518 | 48.0 | 43.3 | - | 80.5 | 74.1 | 78.9 | - | - |
+| Cambrian-34B | 34B | 1820 | 58.3 | 2049.9 | 53.2 | 591 | 50.4 | 50.3 | 77.8 | 79.5 | 76.7 | 75.5 | 41.6 | 14.7 |
+| GLM-4V-9B | 13B | 784 | 59.1 | 2018.8 | 58.0 | 776 | 46.9 | 51.1 | 67.9 | 71.2 | - | - | 45.0 | - |
+| InternVL2-8B | 8B | 706 | 64.1 | 2215.1 | 54.3 | 794 | 51.2 | 58.3 | 79.4 | 83.6 | 77.4 | 91.6 | 45.0 | 21.3 |
+| MiniCPM-Llama3-V 2.5 | 8B | 1882 | 58.8 | 2024.6 | 52.8 | 725 | 45.8 | 54.3 | 72.0 | 78.4 | 76.6 | 84.8 | 42.4 | 10.3 |
+| MiniCPM-V 2.6 | 8B | 2822 | 65.2 | 2348.4* | 60.0 | 852* | 49.8* | 60.6 | 78.0 | 82.1 | 80.1 | 90.8 | 48.1* | 8.2 |
+
+* We evaluate this benchmark using chain-of-thought prompting.
+
++ Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens.
+
+Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation.
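+To make the metric concrete, token density is just the maximum number of encoded pixels divided by the number of visual tokens. A quick sketch (the 1344x1344 maximum resolution and 640-token figure are the numbers quoted above):
+
+```python
+# Token density = pixels at maximum resolution / number of visual tokens.
+def token_density(max_pixels: int, num_visual_tokens: int) -> float:
+    return max_pixels / num_visual_tokens
+
+# MiniCPM-V 2.6 encodes a 1344x1344 (~1.8M pixel) image into 640 visual tokens:
+density = token_density(1344 * 1344, 640)
+print(round(density))  # 2822, the value reported in the table above
+```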
+
+
+
+
+
+Click to view multi-image results on Mantis Eval, BLINK, Mathverse mv, Sciverse mv, MIRB.
+
+
+
+
+
+| Model | Size | Mantis Eval | BLINK val | Mathverse mv | Sciverse mv | MIRB |
+|:------|:----:|:-----------:|:---------:|:------------:|:-----------:|:----:|
+| **Proprietary** | | | | | | |
+| GPT-4V | - | 62.7 | 54.6 | 60.3 | 66.9 | 53.1 |
+| LLaVA-NeXT-Interleave-14B | 14B | 66.4 | 52.6 | 32.7 | 30.2 | - |
+| **Open-source** | | | | | | |
+| Emu2-Chat | 37B | 37.8 | 36.2 | - | 27.2 | - |
+| CogVLM | 17B | 45.2 | 41.1 | - | - | - |
+| VPG-C | 7B | 52.4 | 43.1 | 24.3 | 23.1 | - |
+| VILA 8B | 8B | 51.2 | 39.3 | - | 36.5 | - |
+| InternLM-XComposer-2.5 | 8B | 53.1* | 48.9 | 32.1* | - | 42.5 |
+| InternVL2-8B | 8B | 59.0* | 50.9 | 30.5* | 34.4* | 56.9* |
+| MiniCPM-V 2.6 | 8B | 69.1 | 53.0 | 84.9 | 74.9 | 53.8 |
+
+
+
+
+* We evaluate the officially released checkpoint by ourselves.
+
+
+
+Click to view video results on Video-MME and Video-ChatGPT.
+
+
+
+
+| Model | Size | Video-MME (w/o subs) | Video-MME (w subs) | Video-ChatGPT Correctness | Detail | Context | Temporal | Consistency |
+|:------|:----:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
+| **Proprietary** | | | | | | | | |
+| Claude 3.5 Sonnet | - | 60.0 | - | - | - | - | - | - |
+| GPT-4V | - | 59.9 | - | - | - | - | - | - |
+| **Open-source** | | | | | | | | |
+| LLaVA-NeXT-7B | 7B | - | - | 3.39 | 3.29 | 3.92 | 2.60 | 3.12 |
+| LLaVA-NeXT-34B | 34B | - | - | 3.29 | 3.23 | 3.83 | 2.51 | 3.47 |
+| CogVLM2-Video | 12B | - | - | 3.49 | 3.46 | 3.23 | 2.98 | 3.64 |
+| LongVA | 7B | 52.4 | 54.3 | 3.05 | 3.09 | 3.77 | 2.44 | 3.64 |
+| InternVL2-8B | 8B | 54.0 | 56.9 | - | - | - | - | - |
+| InternLM-XComposer-2.5 | 8B | 55.8 | - | - | - | - | - | - |
+| LLaVA-NeXT-Video | 32B | 60.2 | 63.0 | 3.48 | 3.37 | 3.95 | 2.64 | 3.28 |
+| MiniCPM-V 2.6 | 8B | 60.9 | 63.6 | 3.59 | 3.28 | 3.93 | 2.73 | 3.62 |
+
+
+
+
+
+
+
+Click to view few-shot results on TextVQA, VizWiz, VQAv2, OK-VQA.
+
+
+
+
+| Model | Size | Shot | TextVQA val | VizWiz test-dev | VQAv2 test-dev | OK-VQA val |
+|:------|:----:|:----:|:---:|:---:|:---:|:---:|
+| Flamingo | 80B | 0* | 35.0 | 31.6 | 56.3 | 40.6 |
+| | | 4 | 36.5 | 39.6 | 63.1 | 57.4 |
+| | | 8 | 37.3 | 44.8 | 65.6 | 57.5 |
+| IDEFICS | 80B | 0* | 30.9 | 36.0 | 60.0 | 45.2 |
+| | | 4 | 34.3 | 40.4 | 63.6 | 52.4 |
+| | | 8 | 35.7 | 46.1 | 64.8 | 55.1 |
+| OmniCorpus | 7B | 0* | 43.0 | 49.8 | 63.2 | 45.5 |
+| | | 4 | 45.4 | 51.3 | 64.5 | 46.5 |
+| | | 8 | 45.6 | 52.2 | 64.7 | 46.6 |
+| Emu2 | 37B | 0 | 26.4 | 40.4 | 33.5 | 26.7 |
+| | | 4 | 48.2 | 54.6 | 67.0 | 53.2 |
+| | | 8 | 49.3 | 54.7 | 67.8 | 54.1 |
+| MM1 | 30B | 0 | 26.2 | 40.4 | 48.9 | 26.7 |
+| | | 8 | 49.3 | 54.7 | 70.9 | 54.1 |
+| MiniCPM-V 2.6+ | 8B | 0 | 43.9 | 33.8 | 45.4 | 23.9 |
+| | | 4 | 63.6 | 60.5 | 65.5 | 50.1 |
+| | | 8 | 64.6 | 63.4 | 68.2 | 51.4 |
+
+
+
+
+
+* denotes zero image shot and two additional text shots following Flamingo.
+
++ We evaluate the pretraining ckpt without SFT.
+
+
+### Examples
+
+
+
+ Click to view more cases.
+
+

+

+
+
+
+We deploy MiniCPM-V 2.6 on end devices. The demo video is the raw screen recording on an iPad Pro without editing.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
## MiniCPM-Llama3-V 2.5
+
+Click to view more details of MiniCPM-Llama3-V 2.5
+
**MiniCPM-Llama3-V 2.5** is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.0. Notable features of MiniCPM-Llama3-V 2.5 include:
- 🔥 **Leading Performance.**
@@ -392,20 +1168,8 @@ MiniCPM-Llama3-V 2.5 can be easily used in various ways: (1) [llama.cpp](https:/
-We deploy MiniCPM-Llama3-V 2.5 on end devices. The demo video is the raw screen recording on a Xiaomi 14 Pro without edition.
+
-
-
-
-
-
-
-
-
-
-
-
-
## MiniCPM-V 2.0
@@ -469,7 +1233,7 @@ We provide online and local demos powered by HuggingFace [Gradio](https://github
### Online Demo
-Click here to try out the online demo of [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2) on HuggingFace Spaces.
+Click here to try out the online demo of [MiniCPM-V 2.6](http://120.92.209.146:8887/) | [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2).
### Local WebUI Demo
@@ -481,10 +1245,8 @@ pip install -r requirements.txt
```shell
# For NVIDIA GPUs, run:
-python web_demo_2.5.py --device cuda
+python web_demo_2.6.py --device cuda
-# For Mac with MPS (Apple silicon or AMD GPUs), run:
-PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
```
@@ -517,9 +1279,12 @@ pip install -r requirements.txt
| Model | Device | Memory | Description | Download |
|:-----------|:--:|:-----------:|:-------------------|:---------------:|
-| MiniCPM-Llama3-V 2.5 | GPU | 19 GB | The lastest version, achieving state-of-the end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
-| MiniCPM-Llama3-V 2.5 gguf | CPU | 5 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) |
-| MiniCPM-Llama3-V 2.5 int4 | GPU | 8 GB | The int4 quantized version,lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
+| MiniCPM-V 2.6 | GPU | 17 GB | The latest version, achieving state-of-the-art end-side performance for single image, multi-image and video understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6) |
+| MiniCPM-V 2.6 gguf | CPU | 6 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6-gguf) |
+| MiniCPM-V 2.6 int4 | GPU | 7 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6-int4) |
+| MiniCPM-Llama3-V 2.5 | GPU | 19 GB | Strong end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 gguf | CPU | 6 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) |
+| MiniCPM-Llama3-V 2.5 int4 | GPU | 8 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
| MiniCPM-V 2.0 | GPU | 8 GB | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
| MiniCPM-V 1.0 | GPU | 7 GB | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
@@ -533,30 +1298,40 @@ Please refer to the following codes to run.
```python
-from chat import MiniCPMVChat, img2base64
import torch
-import json
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
torch.manual_seed(0)
-chat_model = MiniCPMVChat('openbmb/MiniCPM-Llama3-V-2_5')
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
-im_64 = img2base64('./assets/airplane.jpeg')
+image = Image.open('./assets/airplane.jpeg').convert('RGB')
# First round chat
-msgs = [{"role": "user", "content": "Tell me the model of this aircraft."}]
+question = "Tell me the model of this aircraft."
+msgs = [{'role': 'user', 'content': [image, question]}]
-inputs = {"image": im_64, "question": json.dumps(msgs)}
-answer = chat_model.chat(inputs)
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
print(answer)
# Second round chat
# pass history context of multi-turn conversation
-msgs.append({"role": "assistant", "content": answer})
-msgs.append({"role": "user", "content": "Introduce something about Airbus A380."})
+msgs.append({"role": "assistant", "content": [answer]})
+msgs.append({"role": "user", "content": ["Introduce something about Airbus A380."]})
-inputs = {"image": im_64, "question": json.dumps(msgs)}
-answer = chat_model.chat(inputs)
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
print(answer)
```
@@ -568,6 +1343,126 @@ You will get the following output:
"The Airbus A380 is a double-deck, wide-body, four-engine jet airliner made by Airbus. It is the world's largest passenger airliner and is known for its long-haul capabilities. The aircraft was developed to improve efficiency and comfort for passengers traveling over long distances. It has two full-length passenger decks, which can accommodate more passengers than a typical single-aisle airplane. The A380 has been operated by airlines such as Lufthansa, Singapore Airlines, and Emirates, among others. It is widely recognized for its unique design and significant impact on the aviation industry."
```
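The history bookkeeping in the two rounds above is plain list manipulation: each assistant reply and the follow-up user turn are appended to `msgs` before the next `model.chat` call. A minimal sketch with no model call (the `'<image>'` string and the canned answer are stand-ins for the PIL image and the model's real output):

```python
# msgs follows the format used above: one dict per turn; content is a list
# that may interleave PIL images and strings.
msgs = [{'role': 'user', 'content': ['<image>', 'Tell me the model of this aircraft.']}]

def append_turn(msgs, answer, next_question):
    """Record the assistant's reply, then queue the follow-up user question."""
    msgs.append({'role': 'assistant', 'content': [answer]})
    msgs.append({'role': 'user', 'content': [next_question]})
    return msgs

append_turn(msgs, 'It is an Airbus A380.', 'Introduce something about Airbus A380.')
print(len(msgs), msgs[-1]['role'])  # 3 user
```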
+#### Chat with multiple images
+
+ Click to view Python code running MiniCPM-V 2.6 with multiple images input.
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+image1 = Image.open('image1.jpg').convert('RGB')
+image2 = Image.open('image2.jpg').convert('RGB')
+question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'
+
+msgs = [{'role': 'user', 'content': [image1, image2, question]}]
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
+print(answer)
+```
+
+
+#### In-context few-shot learning
+
+ Click to view Python code running MiniCPM-V 2.6 with few-shot input.
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+question = "production date"
+image1 = Image.open('example1.jpg').convert('RGB')
+answer1 = "2023.08.04"
+image2 = Image.open('example2.jpg').convert('RGB')
+answer2 = "2007.04.24"
+image_test = Image.open('test.jpg').convert('RGB')
+
+msgs = [
+ {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
+ {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
+ {'role': 'user', 'content': [image_test, question]}
+]
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
+print(answer)
+```
+
+
+#### Chat with video
+
+ Click to view Python code running MiniCPM-V 2.6 with video input.
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+from decord import VideoReader, cpu # pip install decord
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+MAX_NUM_FRAMES=64
+
+def encode_video(video_path):
+ def uniform_sample(l, n):
+ gap = len(l) / n
+ idxs = [int(i * gap + gap / 2) for i in range(n)]
+ return [l[i] for i in idxs]
+
+ vr = VideoReader(video_path, ctx=cpu(0))
+ sample_fps = round(vr.get_avg_fps())  # sample roughly one frame per second
+ frame_idx = [i for i in range(0, len(vr), sample_fps)]
+ if len(frame_idx) > MAX_NUM_FRAMES:
+ frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
+ frames = vr.get_batch(frame_idx).asnumpy()
+ frames = [Image.fromarray(v.astype('uint8')) for v in frames]
+ print('num frames:', len(frames))
+ return frames
+
+video_path="video_test.mp4"
+frames = encode_video(video_path)
+question = "Describe the video"
+msgs = [
+ {'role': 'user', 'content': frames + [question]},
+]
+
+# Set decode params for video
+params = {}
+params["use_image_id"] = False
+params["max_slice_nums"] = 2  # use 1 if CUDA OOM and video resolution > 448*448
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer,
+ **params
+)
+print(answer)
+```
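The frame selection in `encode_video` above can be exercised on its own: it first takes roughly one frame per second, then uniformly thins the indices down to `MAX_NUM_FRAMES`. A self-contained sketch of that logic (no decord needed; the frame counts below are made-up examples):

```python
MAX_NUM_FRAMES = 64

def uniform_sample(l, n):
    # Pick n indices evenly spread over l, centered in each interval.
    gap = len(l) / n
    return [l[int(i * gap + gap / 2)] for i in range(n)]

def select_frame_indices(total_frames: int, avg_fps: float) -> list:
    sample_fps = round(avg_fps)                    # ~1 frame per second
    idx = list(range(0, total_frames, sample_fps))
    if len(idx) > MAX_NUM_FRAMES:                  # cap for the model's context
        idx = uniform_sample(idx, MAX_NUM_FRAMES)
    return idx

# A 5-minute clip at 30 fps (9000 frames) is thinned to 64 sampled frames:
print(len(select_frame_indices(9000, 30.0)))  # 64
# A short 3-second clip keeps every per-second frame:
print(select_frame_indices(90, 30.0))  # [0, 30, 60]
```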
+
+
+
### Inference on Multiple GPUs
You can run MiniCPM-Llama3-V 2.5 on multiple low VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across multiple GPUs. Please refer to this [tutorial](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md) for detailed instructions on how to load the model and run inference on multiple low VRAM GPUs.
@@ -610,13 +1505,16 @@ PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
### Deployment on Mobile Phone
MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.0 can be deployed on mobile phones with Android operating systems. 🚀 Click [MiniCPM-Llama3-V 2.5](http://minicpm.modelbest.cn/android/modelbest-release-20240528_182155.apk) / [MiniCPM-V 2.0](https://github.com/OpenBMB/mlc-MiniCPM) to install the APK.
-### Inference with llama.cpp
-MiniCPM-Llama3-V 2.5 can run with llama.cpp now! See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more detail. This implementation supports smooth inference of 6~8 token/s on mobile phones (test environment:Xiaomi 14 pro + Snapdragon 8 Gen 3).
+### Inference with llama.cpp
+MiniCPM-V 2.6 can run with llama.cpp now! See [our fork of llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpmv-main/examples/llava/README-minicpmv2.6.md) for more details. This implementation supports smooth inference of 16~18 tokens/s on iPad (test environment: iPad Pro + M4).
-### Inference with vLLM
+### Inference with ollama
+MiniCPM-V 2.6 can run with ollama now! See [our fork of ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) for more details. This implementation supports smooth inference of 16~18 tokens/s on iPad (test environment: iPad Pro + M4).
+
+### Inference with vLLM
- vLLM now officially supports MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5, Click to see.
+ vLLM now officially supports MiniCPM-V 2.0, MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.6. Click to see.
1. Clone the official vLLM:
```shell
@@ -627,11 +1525,11 @@ git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```
-3. Install timm:
+3. Install timm (optional; only MiniCPM-V 2.0 needs timm):
```shell
pip install timm==0.9.10
```
-4. Run the example:(If you use model in local path, please update the model code to the latest version on Hugging Face.)
+4. Run the example (note: if you load the model from a local path, please update the model code to the latest version on Hugging Face):
```shell
python examples/minicpmv_example.py
```
@@ -651,12 +1549,8 @@ We now support MiniCPM-V series fine-tuning with the SWIFT framework. SWIFT supp
Best Practices:[MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md)
-
-
-## TODO
-
-- [x] MiniCPM-V fine-tuning support
-- [ ] Code release for real-time interactive assistant
+## FAQs
+Click here to view the [FAQs](./docs/faqs.md)
## Model License
@@ -671,7 +1565,7 @@ Best Practices:[MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/d
As LMMs, MiniCPM-V models (including OmniLMM) generate contents by learning a large amount of multimodal corpora, but they cannot comprehend, express personal opinions or make value judgements. Anything generated by MiniCPM-V models does not represent the views and positions of the model developers.
-We will not be liable for any problems arising from the use of MiniCPMV-V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model.
+We will not be liable for any problems arising from the use of MiniCPM-V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model.
## Institutions
@@ -688,7 +1582,7 @@ This project is developed by the following institutions:
[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
-## 🌟 Star History
+## 🌟 Star History
@@ -716,7 +1610,7 @@ This project is developed by the following institutions:
/>
-->
-## Citation
+## Citation
If you find our model/code/paper helpful, please consider citing our papers 📝 and starring us ⭐️!
diff --git a/README_en.md b/README_en.md
index 5688603..7728c2c 100644
--- a/README_en.md
+++ b/README_en.md
@@ -2,7 +2,7 @@
-**A GPT-4V Level Multimodal LLM on Your Phone**
+**A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone**
[中文](./README_zh.md) |
English
@@ -11,17 +11,16 @@ Join our 💬 WeChat
- MiniCPM-Llama3-V 2.5 🤗 🤖 |
- MiniCPM-V 2.0 🤗 🤖 |
+ MiniCPM-V 2.6 🤗 🤖 | MiniCPM-Llama3-V 2.5 🤗 🤖 |
MiniCPM-Llama3-V 2.5 Technical Report
-**MiniCPM-V** is a series of end-side multimodal LLMs (MLLMs) designed for vision-language understanding. The models take image and text as inputs and provide high-quality text outputs. Since February 2024, we have released 4 versions of the model, aiming to achieve **strong performance and efficient deployment**. The most notable models in this series currently include:
+**MiniCPM-V** is a series of end-side multimodal LLMs (MLLMs) designed for vision-language understanding. The models take image, video and text as inputs and provide high-quality text outputs. Since February 2024, we have released 5 versions of the model, aiming to achieve **strong performance and efficient deployment**. The most notable models in this series currently include:
-- **MiniCPM-Llama3-V 2.5**: 🔥🔥🔥 The latest and most capable model in the MiniCPM-V series. With a total of 8B parameters, the model **surpasses proprietary models such as GPT-4V-1106, Gemini Pro, Qwen-VL-Max and Claude 3** in overall performance. Equipped with the enhanced OCR and instruction-following capability, the model can also support multimodal conversation for **over 30 languages** including English, Chinese, French, Spanish, German etc. With help of quantization, compilation optimizations, and several efficient inference techniques on CPUs and NPUs, MiniCPM-Llama3-V 2.5 can be **efficiently deployed on end-side devices**.
+- **MiniCPM-V 2.6**: 🔥🔥🔥 The latest and most capable model in the MiniCPM-V series. With a total of 8B parameters, the model **surpasses GPT-4V in single image, multi-image and video understanding**. It outperforms **GPT-4o mini, Gemini 1.5 Pro and Claude 3.5 Sonnet** in single image understanding, and advances MiniCPM-Llama3-V 2.5's features such as strong OCR capability, trustworthy behavior, multilingual support, and end-side deployment. Due to its superior token density, MiniCPM-V 2.6 can for the first time support real-time video understanding on end-side devices such as iPad.
- **MiniCPM-V 2.0**: The lightest model in the MiniCPM-V series. With 2B parameters, it surpasses larger models such as Yi-VL 34B, CogVLM-Chat 17B, and Qwen-VL-Chat 10B in overall performance. It can accept image inputs of any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), achieving comparable performance with Gemini Pro in understanding scene-text and matches GPT-4V in low hallucination rates.
@@ -29,6 +28,7 @@ Join our 💬 WeChat
## News
#### 📌 Pinned
+* [2024.08.06] 🔥🔥🔥 We open-source MiniCPM-V 2.6, which outperforms GPT-4V on single image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and can support real-time video understanding on iPad. Try it now!
* [2024.08.03] MiniCPM-Llama3-V 2.5 technical report is released! See [here](./docs/MiniCPM_Llama3_V_25_technical_report.pdf).
* [2024.07.19] MiniCPM-Llama3-V 2.5 supports vLLM now! See [here](#vllm).
* [2024.05.28] 🚀🚀🚀 MiniCPM-Llama3-V 2.5 now fully supports its feature in llama.cpp and ollama! Please pull the latest code **of our provided forks** ([llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md), [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5)). GGUF models in various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/tree/main). MiniCPM-Llama3-V 2.5 series is **not supported by the official repositories yet**, and we are working hard to merge PRs. Please stay tuned!
@@ -38,6 +38,9 @@ Join our 💬 WeChat
+
+Click to view more news.
+
* [2024.06.03] Now, you can run MiniCPM-Llama3-V 2.5 on multiple low VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across multiple GPUs. For more details, check this [link](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md).
* [2024.05.25] MiniCPM-Llama3-V 2.5 now supports streaming outputs and customized system prompts. Try it [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage)!
* [2024.05.24] We release the MiniCPM-Llama3-V 2.5 [gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](#inference-with-llamacpp) inference and provides a 6~8 token/s smooth decoding on mobile phones. Try it now!
@@ -50,11 +53,13 @@ Join our 💬 WeChat
* [2024.03.14] MiniCPM-V now supports [fine-tuning](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md) with the SWIFT framework. Thanks to [Jintao](https://github.com/Jintao-Huang) for the contribution!
* [2024.03.01] MiniCPM-V now can be deployed on Mac!
* [2024.02.01] We open-source MiniCPM-V and OmniLMM-12B, which support efficient end-side deployment and powerful multimodal capabilities correspondingly.
+
## Contents
+- [MiniCPM-V 2.6](#minicpm-v-26)
- [MiniCPM-Llama3-V 2.5](#minicpm-llama3-v-25)
- [MiniCPM-V 2.0](#minicpm-v-20)
- [Chat with Our Demo on Gradio](#chat-with-our-demo-on-gradio)
@@ -62,18 +67,789 @@ Join our 💬 WeChat
- [Inference](#inference)
- [Model Zoo](#model-zoo)
- [Multi-turn Conversation](#multi-turn-conversation)
+ - [Chat with multiple images](#chat-with-multiple-images)
+ - [In-context few-shot learning](#in-context-few-shot-learning)
+ - [Chat with video](#chat-with-video)
- [Inference on Multiple GPUs](#inference-on-multiple-gpus)
- [Inference on Mac](#inference-on-mac)
- [Deployment on Mobile Phone](#deployment-on-mobile-phone)
- [Inference with llama.cpp](#inference-with-llamacpp)
+ - [Inference with ollama](#inference-with-ollama)
- [Inference with vLLM](#inference-with-vllm)
- [Fine-tuning](#fine-tuning)
-- [TODO](#todo)
-- [🌟 Star History](#-star-history)
-- [Citation](#citation)
+- [FAQs](#faqs)
+
+
+## MiniCPM-V 2.6
+
+**MiniCPM-V 2.6** is the latest and most capable model in the MiniCPM-V series. The model is built on SigLip-400M and Qwen2-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-Llama3-V 2.5, and introduces new features for multi-image and video understanding. Notable features of MiniCPM-V 2.6 include:
+
+- 🔥 **Leading Performance.**
+ MiniCPM-V 2.6 achieves an average score of 65.2 on the latest version of OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o mini, GPT-4V, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding.
+
+- 🖼️ **Multi Image Understanding and In-context Learning.** MiniCPM-V 2.6 can also perform **conversation and reasoning over multiple images**. It achieves **state-of-the-art performance** on popular multi-image benchmarks such as Mantis-Eval, BLINK, Mathverse mv and Sciverse mv, and also shows promising in-context learning capability.
+
+- 🎬 **Video Understanding.** MiniCPM-V 2.6 can also **accept video inputs**, performing conversation and providing dense captions for spatial-temporal information. It outperforms **GPT-4V, Claude 3.5 Sonnet and LLaVA-NeXT-Video-34B** on Video-MME with/without subtitles.
+
+- 💪 **Strong OCR Capability and Others.**
+ MiniCPM-V 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V, and Gemini 1.5 Pro**.
+ Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, with significantly lower hallucination rates than GPT-4o and GPT-4V on Object HalBench, and supports **multilingual capabilities** in English, Chinese, German, French, Italian, Korean, etc.
+
+
+- 🚀 **Superior Efficiency.**
+ In addition to its friendly size, MiniCPM-V 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-V 2.6 can efficiently support **real-time video understanding** on end-side devices such as iPad.
+
+- 💫 **Easy Usage.**
+MiniCPM-V 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpmv-main/examples/llava/README-minicpmv2.6.md) and [ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#inference-with-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with [Gradio](#chat-with-our-demo-on-gradio), and (6) online web [demo](http://120.92.209.146:8887/).
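The token-density claim above follows directly from the metric's definition (pixels at maximum resolution divided by the number of visual tokens). A minimal sketch of the arithmetic — the 1344x1344 resolution and 640-token count come from the text, while the 2560-token comparison point is an assumed figure chosen to match the "75% fewer" claim, not a measurement of any particular model:

```python
# Token density = pixels at maximum resolution / number of visual tokens.
def token_density(width: int, height: int, num_tokens: int) -> float:
    return width * height / num_tokens

# MiniCPM-V 2.6 encodes a 1.8M-pixel image (e.g. 1344x1344) into 640 tokens.
print(round(token_density(1344, 1344, 640)))  # 2822, as reported in the evaluation table

# A hypothetical model spending 4x the tokens on the same image:
saving = 1 - 640 / 2560
print(f"{saving:.0%} fewer tokens")  # 75%
```

The same formula, applied to each model's maximum resolution and token count, yields the Token Density column in the evaluation table below; for proprietary models it is estimated from the official API's image charging strategy.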
+
+### Evaluation
+
+

+
+
+
+Click to view single image results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench, Object HalBench.
+
+
+
+
+
+ | Model |
+ Size |
+ Token Density+ |
+ OpenCompass |
+ MME |
+ MMVet |
+ OCRBench |
+ MMMU val |
+ MathVista mini |
+ MMB1.1 test |
+ AI2D |
+ TextVQA val |
+ DocVQA test |
+ HallusionBench |
+ Object HalBench |
+
+
+
+
+ | Proprietary |
+
+
+ | GPT-4o |
+ - |
+ 1088 |
+ 69.9 |
+ 2328.7 |
+ 69.1 |
+ 736 |
+ 69.2 |
+ 61.3 |
+ 82.2 |
+ 84.6 |
+ - |
+ 92.8 |
+ 55.0 |
+ 17.6 |
+
+
+ | Claude 3.5 Sonnet |
+ - |
+ 750 |
+ 67.9 |
+ 1920.0 |
+ 66.0 |
+ 788 |
+ 65.9 |
+ 61.6 |
+ 78.5 |
+ 80.2 |
+ - |
+ 95.2 |
+ 49.9 |
+ 13.8 |
+
+
+ | Gemini 1.5 Pro |
+ - |
+ - |
+ 64.4 |
+ 2110.6 |
+ 64.0 |
+ 754 |
+ 60.6 |
+ 57.7 |
+ 73.9 |
+ 79.1 |
+ 73.5 |
+ 86.5 |
+ 45.6 |
+ - |
+
+
+ | GPT-4o mini |
+ - |
+ 1088 |
+ 64.1 |
+ 2003.4 |
+ 66.9 |
+ 785 |
+ 60.0 |
+ 52.4 |
+ 76.0 |
+ 77.8 |
+ - |
+ - |
+ 46.1 |
+ 12.4 |
+
+
+ | GPT-4V |
+ - |
+ 1088 |
+ 63.5 |
+ 2070.2 |
+ 67.5 |
+ 656 |
+ 61.7 |
+ 54.7 |
+ 79.8 |
+ 78.6 |
+ 78.0 |
+ 87.2 |
+ 43.9 |
+ 14.2 |
+
+
+ | Step-1V |
+ - |
+ - |
+ 59.5 |
+ 2206.4 |
+ 63.3 |
+ 625 |
+ 49.9 |
+ 44.8 |
+ 78.0 |
+ 79.2 |
+ 71.6 |
+ - |
+ 48.4 |
+ - |
+
+
+ | Qwen-VL-Max |
+ - |
+ 784 |
+ 58.3 |
+ 2281.7 |
+ 61.8 |
+ 684 |
+ 52.0 |
+ 43.4 |
+ 74.6 |
+ 75.7 |
+ 79.5 |
+ 93.1 |
+ 41.2 |
+ 13.4 |
+
+
+ | Open-source |
+
+
+ | LLaVA-NeXT-Yi-34B |
+ 34B |
+ 157 |
+ 55.0 |
+ 2006.5 |
+ 50.7 |
+ 574 |
+ 48.8 |
+ 40.4 |
+ 77.8 |
+ 78.9 |
+ 69.3 |
+ - |
+ 34.8 |
+ 12.6 |
+
+
+ | Mini-Gemini-HD-34B |
+ 34B |
+ 157 |
+ - |
+ 2141 |
+ 59.3 |
+ 518 |
+ 48.0 |
+ 43.3 |
+ - |
+ 80.5 |
+ 74.1 |
+ 78.9 |
+ - |
+ - |
+
+
+ | Cambrian-34B |
+ 34B |
+ 1820 |
+ 58.3 |
+ 2049.9 |
+ 53.2 |
+ 591 |
+ 50.4 |
+ 50.3 |
+ 77.8 |
+ 79.5 |
+ 76.7 |
+ 75.5 |
+ 41.6 |
+ 14.7 |
+
+
+ | GLM-4V-9B |
+ 13B |
+ 784 |
+ 59.1 |
+ 2018.8 |
+ 58.0 |
+ 776 |
+ 46.9 |
+ 51.1 |
+ 67.9 |
+ 71.2 |
+ - |
+ - |
+ 45.0 |
+ - |
+
+
+ | InternVL2-8B |
+ 8B |
+ 706 |
+ 64.1 |
+ 2215.1 |
+ 54.3 |
+ 794 |
+ 51.2 |
+ 58.3 |
+ 79.4 |
+ 83.6 |
+ 77.4 |
+ 91.6 |
+ 45.0 |
+ 21.3 |
+
+
| MiniCPM-Llama3-V 2.5 |
+ 8B |
+ 1882 |
+ 58.8 |
+ 2024.6 |
+ 52.8 |
+ 725 |
+ 45.8 |
+ 54.3 |
+ 72.0 |
+ 78.4 |
+ 76.6 |
+ 84.8 |
+ 42.4 |
+ 10.3 |
+
+
+ | MiniCPM-V 2.6 |
+ 8B |
+ 2822 |
+ 65.2 |
+ 2348.4* |
+ 60.0 |
+ 852* |
+ 49.8* |
+ 60.6 |
+ 78.0 |
+ 82.1 |
+ 80.1 |
+ 90.8 |
+ 48.1* |
+ 8.2 |
+
+
+
+
+
+* We evaluate this benchmark using chain-of-thought prompting.
+
++ Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens.
+
+Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation.
+
+
+
+
+
+Click to view multi-image results on Mantis Eval, BLINK, Mathverse mv, Sciverse mv, MIRB.
+
+
+
+
+
+ | Model |
+ Size |
+ Mantis Eval |
+ BLINK val |
+ Mathverse mv |
+ Sciverse mv |
+ MIRB |
+
+
+
+
+ | Proprietary |
+
+
+ | GPT-4V |
+ - |
+ 62.7 |
+ 54.6 |
+ 60.3 |
+ 66.9 |
+ 53.1 |
+
+
+ | LLaVA-NeXT-Interleave-14B |
+ 14B |
+ 66.4 |
+ 52.6 |
+ 32.7 |
+ 30.2 |
+ - |
+
+
+ | Open-source |
+
+
+ | Emu2-Chat |
+ 37B |
+ 37.8 |
+ 36.2 |
+ - |
+ 27.2 |
+ - |
+
+
+ | CogVLM |
+ 17B |
+ 45.2 |
+ 41.1 |
+ - |
+ - |
+ - |
+
+
+ | VPG-C |
+ 7B |
+ 52.4 |
+ 43.1 |
+ 24.3 |
+ 23.1 |
+ - |
+
+
+ | VILA 8B |
+ 8B |
+ 51.2 |
+ 39.3 |
+ - |
+ 36.5 |
+ - |
+
+
+ | InternLM-XComposer-2.5 |
+ 8B |
+ 53.1* |
+ 48.9 |
+ 32.1* |
+ - |
+ 42.5 |
+
+
+ | InternVL2-8B |
+ 8B |
+ 59.0* |
+ 50.9 |
+ 30.5* |
+ 34.4* |
+ 56.9* |
+
+
+ | MiniCPM-V 2.6 |
+ 8B |
+ 69.1 |
+ 53.0 |
+ 84.9 |
+ 74.9 |
+ 53.8 |
+
+
+
+
+
+* We evaluate the officially released checkpoint by ourselves.
+
+
+
+Click to view video results on Video-MME and Video-ChatGPT.
+
+
+
+
+ | Model |
+ Size |
+ Video-MME |
+ Video-ChatGPT |
+
+
+ |
+ |
+ w/o subs |
+ w subs |
+ Correctness |
+ Detail |
+ Context |
+ Temporal |
+ Consistency |
+
+
+
+
+ | Proprietary |
+
+
+ | Claude 3.5 Sonnet |
+ - |
+ 60.0 |
+ - |
+ - |
+ - |
+ - |
+ - |
+ - |
+
+
+ | GPT-4V |
+ - |
+ 59.9 |
+ - |
+ - |
+ - |
+ - |
+ - |
+ - |
+
+
+ | Open-source |
+
+
+ | LLaVA-NeXT-7B |
+ 7B |
+ - |
+ - |
+ 3.39 |
+ 3.29 |
+ 3.92 |
+ 2.60 |
+ 3.12 |
+
+
+ | LLaVA-NeXT-34B |
+ 34B |
+ - |
+ - |
+ 3.29 |
+ 3.23 |
+ 3.83 |
+ 2.51 |
+ 3.47 |
+
+
+ | CogVLM2-Video |
+ 12B |
+ - |
+ - |
+ 3.49 |
+ 3.46 |
+ 3.23 |
+ 2.98 |
+ 3.64 |
+
+
+ | LongVA |
+ 7B |
+ 52.4 |
+ 54.3 |
+ 3.05 |
+ 3.09 |
+ 3.77 |
+ 2.44 |
+ 3.64 |
+
+
+ | InternVL2-8B |
+ 8B |
+ 54.0 |
+ 56.9 |
+ - |
+ - |
+ - |
+ - |
+ - |
+
+
+ | InternLM-XComposer-2.5 |
+ 8B |
+ 55.8 |
+ - |
+ - |
+ - |
+ - |
+ - |
+ - |
+
+
+ | LLaVA-NeXT-Video |
+ 32B |
+ 60.2 |
+ 63.0 |
+ 3.48 |
+ 3.37 |
+ 3.95 |
+ 2.64 |
+ 3.28 |
+
+
+ | MiniCPM-V 2.6 |
+ 8B |
+ 60.9 |
+ 63.6 |
+ 3.59 |
+ 3.28 |
+ 3.93 |
+ 2.73 |
+ 3.62 |
+
+
+
+
+
+
+
+
+Click to view few-shot results on TextVQA, VizWiz, VQAv2, OK-VQA.
+
+
+
+
+ | Model |
+ Size |
+ Shot |
+ TextVQA val |
+ VizWiz test-dev |
+ VQAv2 test-dev |
+ OK-VQA val |
+
+
+
+
+ | Flamingo |
+ 80B |
+ 0* |
+ 35.0 |
+ 31.6 |
+ 56.3 |
+ 40.6 |
+
+
+ | 4 |
+ 36.5 |
+ 39.6 |
+ 63.1 |
+ 57.4 |
+
+
+ | 8 |
+ 37.3 |
+ 44.8 |
+ 65.6 |
+ 57.5 |
+
+
+ | IDEFICS |
+ 80B |
+ 0* |
+ 30.9 |
+ 36.0 |
+ 60.0 |
+ 45.2 |
+
+
+ | 4 |
+ 34.3 |
+ 40.4 |
+ 63.6 |
+ 52.4 |
+
+
+ | 8 |
+ 35.7 |
+ 46.1 |
+ 64.8 |
+ 55.1 |
+
+
+ | OmniCorpus |
+ 7B |
+ 0* |
+ 43.0 |
+ 49.8 |
+ 63.2 |
+ 45.5 |
+
+
+ | 4 |
+ 45.4 |
+ 51.3 |
+ 64.5 |
+ 46.5 |
+
+
+ | 8 |
+ 45.6 |
+ 52.2 |
+ 64.7 |
+ 46.6 |
+
+
+ | Emu2 |
+ 37B |
+ 0 |
+ 26.4 |
+ 40.4 |
+ 33.5 |
+ 26.7 |
+
+
+ | 4 |
+ 48.2 |
+ 54.6 |
+ 67.0 |
+ 53.2 |
+
+
+ | 8 |
+ 49.3 |
+ 54.7 |
+ 67.8 |
+ 54.1 |
+
+
+ | MM1 |
+ 30B |
+ 0 |
+ 26.2 |
+ 40.4 |
+ 48.9 |
+ 26.7 |
+
+
+ | 8 |
+ 49.3 |
+ 54.7 |
+ 70.9 |
+ 54.1 |
+
+
+ | MiniCPM-V 2.6+ |
+ 8B |
+ 0 |
+ 43.9 |
+ 33.8 |
+ 45.4 |
+ 23.9 |
+
+
+ | 4 |
+ 63.6 |
+ 60.5 |
+ 65.5 |
+ 50.1 |
+
+
+ | 8 |
+ 64.6 |
+ 63.4 |
+ 68.2 |
+ 51.4 |
+
+
+
+
+
+
+* denotes zero image shot and two additional text shots following Flamingo.
+
++ We evaluate the pretraining ckpt without SFT.
+
+
+### Examples
+
+
+
+ Click to view more cases.
+
+

+

+
+
+
+We deploy MiniCPM-V 2.6 on end devices. The demo video is the raw screen recording on an iPad Pro without editing.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
## MiniCPM-Llama3-V 2.5
+
+Click to view more details of MiniCPM-Llama3-V 2.5
+
**MiniCPM-Llama3-V 2.5** is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.0. Notable features of MiniCPM-Llama3-V 2.5 include:
- 🔥 **Leading Performance.**
@@ -392,20 +1168,8 @@ MiniCPM-Llama3-V 2.5 can be easily used in various ways: (1) [llama.cpp](https:/
-We deploy MiniCPM-Llama3-V 2.5 on end devices. The demo video is the raw screen recording on a Xiaomi 14 Pro without edition.
+
-
-
-
-
-
-
-
-
-
-
-
-
## MiniCPM-V 2.0
@@ -469,7 +1233,7 @@ We provide online and local demos powered by HuggingFace [Gradio](https://github
### Online Demo
-Click here to try out the online demo of [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2) on HuggingFace Spaces.
+Click here to try out the online demo of [MiniCPM-V 2.6](http://120.92.209.146:8887/) | [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2).
### Local WebUI Demo
@@ -481,10 +1245,8 @@ pip install -r requirements.txt
```shell
# For NVIDIA GPUs, run:
-python web_demo_2.5.py --device cuda
+python web_demo_2.6.py --device cuda
-# For Mac with MPS (Apple silicon or AMD GPUs), run:
-PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
```
@@ -517,9 +1279,12 @@ pip install -r requirements.txt
| Model | Device | Memory | Description | Download |
|:-----------|:--:|:-----------:|:-------------------|:---------------:|
-| MiniCPM-Llama3-V 2.5 | GPU | 19 GB | The lastest version, achieving state-of-the end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
-| MiniCPM-Llama3-V 2.5 gguf | CPU | 5 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) |
-| MiniCPM-Llama3-V 2.5 int4 | GPU | 8 GB | The int4 quantized version,lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
+| MiniCPM-V 2.6 | GPU | 17 GB | The latest version, achieving state-of-the-art end-side performance for single image, multi-image and video understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6) |
+| MiniCPM-V 2.6 gguf | CPU | 6 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6-gguf) |
+| MiniCPM-V 2.6 int4 | GPU | 7 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6-int4) |
+| MiniCPM-Llama3-V 2.5 | GPU | 19 GB | Strong end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 gguf | CPU | 6 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) |
+| MiniCPM-Llama3-V 2.5 int4 | GPU | 8 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
| MiniCPM-V 2.0 | GPU | 8 GB | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
| MiniCPM-V 1.0 | GPU | 7 GB | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
@@ -533,30 +1298,40 @@ Please refer to the following codes to run.
```python
-from chat import MiniCPMVChat, img2base64
import torch
-import json
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
torch.manual_seed(0)
-chat_model = MiniCPMVChat('openbmb/MiniCPM-Llama3-V-2_5')
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
-im_64 = img2base64('./assets/airplane.jpeg')
+image = Image.open('./assets/airplane.jpeg').convert('RGB')
# First round chat
-msgs = [{"role": "user", "content": "Tell me the model of this aircraft."}]
+question = "Tell me the model of this aircraft."
+msgs = [{'role': 'user', 'content': [image, question]}]
-inputs = {"image": im_64, "question": json.dumps(msgs)}
-answer = chat_model.chat(inputs)
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
print(answer)
# Second round chat
# pass history context of multi-turn conversation
-msgs.append({"role": "assistant", "content": answer})
-msgs.append({"role": "user", "content": "Introduce something about Airbus A380."})
+msgs.append({"role": "assistant", "content": [answer]})
+msgs.append({"role": "user", "content": ["Introduce something about Airbus A380."]})
-inputs = {"image": im_64, "question": json.dumps(msgs)}
-answer = chat_model.chat(inputs)
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
print(answer)
```
@@ -568,6 +1343,126 @@ You will get the following output:
"The Airbus A380 is a double-deck, wide-body, four-engine jet airliner made by Airbus. It is the world's largest passenger airliner and is known for its long-haul capabilities. The aircraft was developed to improve efficiency and comfort for passengers traveling over long distances. It has two full-length passenger decks, which can accommodate more passengers than a typical single-aisle airplane. The A380 has been operated by airlines such as Lufthansa, Singapore Airlines, and Emirates, among others. It is widely recognized for its unique design and significant impact on the aviation industry."
```
+#### Chat with multiple images
+
+ Click to view Python code running MiniCPM-V 2.6 with multiple images input.
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+image1 = Image.open('image1.jpg').convert('RGB')
+image2 = Image.open('image2.jpg').convert('RGB')
+question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'
+
+msgs = [{'role': 'user', 'content': [image1, image2, question]}]
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
+print(answer)
+```
+
+
+#### In-context few-shot learning
+
+ Click to view Python code running MiniCPM-V 2.6 with few-shot input.
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+question = "production date"
+image1 = Image.open('example1.jpg').convert('RGB')
+answer1 = "2023.08.04"
+image2 = Image.open('example2.jpg').convert('RGB')
+answer2 = "2007.04.24"
+image_test = Image.open('test.jpg').convert('RGB')
+
+msgs = [
+ {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
+ {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
+ {'role': 'user', 'content': [image_test, question]}
+]
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
+print(answer)
+```
+
+
+#### Chat with video
+
+ Click to view Python code running MiniCPM-V 2.6 with video input.
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+from decord import VideoReader, cpu # pip install decord
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+MAX_NUM_FRAMES=64
+
+def encode_video(video_path):
+ def uniform_sample(l, n):
+ gap = len(l) / n
+ idxs = [int(i * gap + gap / 2) for i in range(n)]
+ return [l[i] for i in idxs]
+
+ vr = VideoReader(video_path, ctx=cpu(0))
+ sample_fps = round(vr.get_avg_fps() / 1)  # sample roughly 1 frame per second
+ frame_idx = [i for i in range(0, len(vr), sample_fps)]
+ if len(frame_idx) > MAX_NUM_FRAMES:
+ frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
+ frames = vr.get_batch(frame_idx).asnumpy()
+ frames = [Image.fromarray(v.astype('uint8')) for v in frames]
+ print('num frames:', len(frames))
+ return frames
+
+video_path="video_test.mp4"
+frames = encode_video(video_path)
+question = "Describe the video"
+msgs = [
+ {'role': 'user', 'content': frames + [question]},
+]
+
+# Set decode params for video
+params = {}
+params["use_image_id"] = False
+params["max_slice_nums"] = 2  # use 1 if CUDA OOM and video resolution > 448*448
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer,
+ **params
+)
+print(answer)
+```
+
+
+
### Inference on Multiple GPUs
You can run MiniCPM-Llama3-V 2.5 on multiple low VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across multiple GPUs. Please refer to this [tutorial](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md) for detailed instructions on how to load the model and inference using multiple low VRAM GPUs.
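The linked tutorial covers the actual model; conceptually, the distribution step is a greedy assignment of consecutive layers to devices under per-GPU memory budgets. A toy sketch of that idea (the `assign_layers` helper, layer sizes, and budgets are illustrative, not the tutorial's code):

```python
def assign_layers(layer_sizes_gb, budgets_gb):
    """Greedily place consecutive layers onto GPUs until each budget fills.
    Returns {layer_index: gpu_index}."""
    device_map, gpu, used = {}, 0, 0.0
    for i, size in enumerate(layer_sizes_gb):
        if used + size > budgets_gb[gpu]:
            gpu += 1  # current GPU is full, spill to the next one
            used = 0.0
            if gpu >= len(budgets_gb):
                raise MemoryError("not enough total GPU memory for the model")
        device_map[i] = gpu
        used += size
    return device_map

# e.g. 32 layers of 0.5 GB each across two 12 GB cards,
# budgeting 9 GB per card to leave headroom for activations
device_map = assign_layers([0.5] * 32, [9, 9])
print(sum(1 for g in device_map.values() if g == 0), "layers on GPU 0")
```

The tutorial's real implementation additionally has to keep the vision encoder and embedding layers co-located and account for activation memory, which is why following it rather than a hand-rolled split is recommended.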
@@ -610,13 +1505,16 @@ PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
### Deployment on Mobile Phone
MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.0 can be deployed on mobile phones with Android operating systems. 🚀 Click [MiniCPM-Llama3-V 2.5](http://minicpm.modelbest.cn/android/modelbest-release-20240528_182155.apk) / [MiniCPM-V 2.0](https://github.com/OpenBMB/mlc-MiniCPM) to install apk.
-### Inference with llama.cpp
-MiniCPM-Llama3-V 2.5 can run with llama.cpp now! See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more detail. This implementation supports smooth inference of 6~8 token/s on mobile phones (test environment:Xiaomi 14 pro + Snapdragon 8 Gen 3).
+### Inference with llama.cpp
+MiniCPM-V 2.6 can run with llama.cpp now! See [our fork of llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpmv-main/examples/llava/README-minicpmv2.6.md) for more details. This implementation supports smooth inference of 16~18 token/s on iPad (test environment: iPad Pro + M4).
-### Inference with vLLM
+### Inference with ollama
+MiniCPM-V 2.6 can run with ollama now! See [our fork of ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) for more details. This implementation supports smooth inference of 16~18 token/s on iPad (test environment: iPad Pro + M4).
+
+### Inference with vLLM
- vLLM now officially supports MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5, Click to see.
+ vLLM now officially supports MiniCPM-V 2.0, MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.6. Click to see.
1. Clone the official vLLM:
```shell
@@ -627,11 +1525,11 @@ git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```
-3. Install timm:
+3. Install timm (optional; only MiniCPM-V 2.0 needs timm):
```shell
pip install timm==0.9.10
```
-4. Run the example:(If you use model in local path, please update the model code to the latest version on Hugging Face.)
+4. Run the example. (Note: if you load the model from a local path, please update the model code to the latest version on Hugging Face.)
```shell
python examples/minicpmv_example.py
```
@@ -651,12 +1549,8 @@ We now support MiniCPM-V series fine-tuning with the SWIFT framework. SWIFT supp
Best Practices:[MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md)
-
-
-## TODO
-
-- [x] MiniCPM-V fine-tuning support
-- [ ] Code release for real-time interactive assistant
+## FAQs
+Click here to view the [FAQs](./docs/faqs.md)
## Model License
@@ -671,7 +1565,7 @@ Best Practices:[MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/d
As LMMs, MiniCPM-V models (including OmniLMM) generate contents by learning a large amount of multimodal corpora, but they cannot comprehend, express personal opinions or make value judgement. Anything generated by MiniCPM-V models does not represent the views and positions of the model developers
-We will not be liable for any problems arising from the use of MiniCPMV-V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model.
+We will not be liable for any problems arising from the use of MiniCPM-V models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model.
## Institutions
@@ -688,7 +1582,7 @@ This project is developed by the following institutions:
[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
-## 🌟 Star History
+## 🌟 Star History
@@ -716,7 +1610,7 @@ This project is developed by the following institutions:
/>
-->
-## Citation
+## Citation
If you find our model/code/paper helpful, please consider citing our papers 📝 and starring us ⭐️!
diff --git a/README_zh.md b/README_zh.md
index b777925..843698b 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -4,7 +4,7 @@
-**端侧可用的 GPT-4V 级多模态大模型**
+**端侧可用的 GPT-4V 级单图、多图、视频多模态大模型**
中文 |
[English](./README_en.md)
@@ -12,18 +12,18 @@
加入我们的 💬 微信社区
- MiniCPM-Llama3-V 2.5 🤗 🤖 |
- MiniCPM-V 2.0 🤗 🤖 |
+ MiniCPM-V 2.6 🤗 🤖 | MiniCPM-Llama3-V 2.5 🤗 🤖 |
MiniCPM-Llama3-V 2.5 技术报告
-
+
-**MiniCPM-V**是面向图文理解的端侧多模态大模型系列。该系列模型接受图像和文本输入,并提供高质量的文本输出。自2024年2月以来,我们共发布了4个版本模型,旨在实现**领先的性能和高效的部署**,目前该系列最值得关注的模型包括:
+**MiniCPM-V**是面向图文理解的端侧多模态大模型系列。该系列模型接受图像和文本输入,并提供高质量的文本输出。自2024年2月以来,我们共发布了5个版本模型,旨在实现**领先的性能和高效的部署**,目前该系列最值得关注的模型包括:
-- **MiniCPM-Llama3-V 2.5**:🔥🔥🔥 MiniCPM-V系列的最新、性能最佳模型。总参数量8B,多模态综合性能**超越 GPT-4V-1106、Gemini Pro、Claude 3、Qwen-VL-Max 等商用闭源模型**,OCR 能力及指令跟随能力进一步提升,并**支持超过30种语言**的多模态交互。通过系统使用模型量化、CPU、NPU、编译优化等高效推理技术,MiniCPM-Llama3-V 2.5 可以实现**高效的终端设备部署**。
+
+- **MiniCPM-V 2.6**: 🔥🔥🔥 MiniCPM-V系列的最新、性能最佳模型。总参数量 8B,单图、多图和视频理解性能**超越了 GPT-4V**。在单图理解上,它取得了优于 **GPT-4o mini、Gemini 1.5 Pro 和 Claude 3.5 Sonnet**等商用闭源模型的表现,并进一步优化了 MiniCPM-Llama3-V 2.5 的 OCR、可信行为、多语言支持以及端侧部署等诸多特性。基于其领先的视觉 token 密度,MiniCPM-V 2.6 成为了首个支持在 iPad 等端侧设备上进行实时视频理解的多模态大模型。
- **MiniCPM-V 2.0**:MiniCPM-V系列的最轻量级模型。总参数量2B,多模态综合性能超越 Yi-VL 34B、CogVLM-Chat 17B、Qwen-VL-Chat 10B 等更大参数规模的模型,可接受 180 万像素的任意长宽比图像输入,实现了和 Gemini Pro 相近的场景文字识别能力以及和 GPT-4V 相匹的低幻觉率。
@@ -33,6 +33,7 @@
#### 📌 置顶
+* [2024.08.06] 🔥🔥🔥 我们开源了 MiniCPM-V 2.6,该模型在单图、多图和视频理解方面取得了优于 GPT-4V 的表现。我们还进一步提升了 MiniCPM-Llama3-V 2.5 的多项亮点能力,并首次支持了 iPad 上的实时视频理解。欢迎试用!
* [2024.08.03] MiniCPM-Llama3-V 2.5 技术报告已发布!欢迎点击[这里](./docs/MiniCPM_Llama3_V_25_technical_report.pdf)查看。
* [2024.07.19] MiniCPM-Llama3-V 2.5 现已支持[vLLM](#vllm) !
* [2024.05.28] 💥 MiniCPM-Llama3-V 2.5 现在在 llama.cpp 和 ollama 中完全支持其功能!**请拉取我们最新的 fork 来使用**:[llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md) & [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5)。我们还发布了各种大小的 GGUF 版本,请点击[这里](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/tree/main)查看。请注意,**目前官方仓库尚未支持 MiniCPM-Llama3-V 2.5**,我们也正积极推进将这些功能合并到 llama.cpp & ollama 官方仓库,敬请关注!
@@ -43,6 +44,8 @@
+
+点击查看完整更新日志。
* [2024.06.03] 现在,你可以利用多张低显存显卡(12G/16G)进行GPU串行推理。详情请参见该[文档](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md)配置。
* [2024.05.25] MiniCPM-Llama3-V 2.5 [支持流式输出和自定义系统提示词](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage)了,欢迎试用!
@@ -56,10 +59,12 @@
* [2024.03.14] MiniCPM-V 现在支持 SWIFT 框架下的[微调](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md)了,感谢 [Jintao](https://github.com/Jintao-Huang) 的贡献!
* [2024.03.01] MiniCPM-V 现在支持在 Mac 电脑上进行部署!
* [2024.02.01] 我们开源了 MiniCPM-V 和 OmniLMM-12B,分别可以支持高效的端侧部署和同规模领先的多模态能力!
+
## 目录
+- [MiniCPM-V 2.6](#minicpm-v-26)
- [MiniCPM-Llama3-V 2.5](#minicpm-llama3-v-25)
- [MiniCPM-V 2.0](#minicpm-v-20)
- [Demo](#demo)
@@ -67,19 +72,792 @@
- [推理](#推理)
- [模型库](#模型库)
- [多轮对话](#多轮对话)
+ - [多图理解](#多图理解)
+ - [少样本上下文学习](#少样本上下文学习)
+ - [视频理解](#视频理解)
- [多卡推理](#多卡推理)
- [Mac 推理](#mac-推理)
- [手机端部署](#手机端部署)
- [本地WebUI Demo部署](#本地webui-demo部署)
- [llama.cpp 部署](#llamacpp-部署)
+ - [ollama 部署](#ollama-部署)
- [vLLM 部署 ](#vllm-部署-)
- [微调](#微调)
-- [未来计划](#未来计划)
-- [🌟 Star History](#-star-history)
-- [引用](#引用)
+- [FAQs](#faqs)
+## MiniCPM-V 2.6
+
+**MiniCPM-V 2.6** is the latest and most capable model in the MiniCPM-V series. Built on SigLip-400M and Qwen2-7B with a total of 8B parameters, it achieves a significant performance improvement over MiniCPM-Llama3-V 2.5 and introduces new capabilities for multi-image and video understanding. Notable features of MiniCPM-V 2.6 include:
+
+
+- 🔥 **Leading performance.**
+  MiniCPM-V 2.6 achieves an average score of 65.2 on the latest version of the OpenCompass leaderboard (a comprehensive evaluation over 8 popular multimodal benchmarks). **With only 8B parameters, it surpasses widely used proprietary multimodal models such as GPT-4o mini, GPT-4V, Gemini 1.5 Pro and Claude 3.5 Sonnet in single-image understanding**.
+
+- 🖼️ **Multi-image understanding and in-context learning.**
+  MiniCPM-V 2.6 also supports **multi-image conversation and reasoning**. It achieves **state-of-the-art results** on popular multi-image benchmarks including Mantis-Eval, BLINK, Mathverse mv and Sciverse mv, and shows promising in-context learning capability.
+
+- 🎬 **Video understanding.**
+  MiniCPM-V 2.6 can also **accept video inputs** for conversation and provide detailed video descriptions covering temporal and spatial information. It outperforms proprietary models including **GPT-4V, Claude 3.5 Sonnet and LLaVA-NeXT-Video-34B** on Video-MME, both with and without subtitles.
+
+- 💪 **Strong OCR capability and more.**
+  MiniCPM-V 2.6 can process images with any aspect ratio at up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V and Gemini 1.5 Pro**. Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it exhibits **trustworthy behavior**, with a significantly lower hallucination rate than GPT-4o and GPT-4V on Object HalBench, and supports **multilingual capability** in English, Chinese, German, French, Italian, Korean and more.
+
+- 🚀 **Superior efficiency.**
+  Besides its friendly model size, MiniCPM-V 2.6 also shows **state-of-the-art token density** (i.e., the number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M-pixel image, 75% fewer than most models.** This directly improves inference speed, first-token latency, memory usage and power consumption. As a result, MiniCPM-V 2.6 can efficiently support **real-time video understanding** on end-side devices such as iPad.
+
+- 💫 **Easy usage.**
+  MiniCPM-V 2.6 can be used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpmv-main/examples/llava/README-minicpmv2.6.md) and [ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) support for efficient CPU inference on local devices, (2) quantized models in [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) format in 16 sizes, (3) [vLLM](#vllm-部署-) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with [Gradio](#本地-webui-demo-), and (6) an online [demo](http://120.92.209.146:8887/) ready to try.
+
+### 性能评估
+
+

+
+
+
+Click to view detailed single-image evaluation results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench and Object HalBench.
+
+
+
+
+
+ | Model |
+ Size |
+ Token Density+ |
+ OpenCompass |
+ MME |
+ MMVet |
+ OCRBench |
+ MMMU val |
+ MathVista mini |
+ MMB1.1 test |
+ AI2D |
+ TextVQA val |
+ DocVQA test |
+ HallusionBench |
+ Object HalBench |
+
+
+
+
+ | Proprietary |
+
+
+ | GPT-4o |
+ - |
+ 1088 |
+ 69.9 |
+ 2328.7 |
+ 69.1 |
+ 736 |
+ 69.2 |
+ 61.3 |
+ 82.2 |
+ 84.6 |
+ - |
+ 92.8 |
+ 55.0 |
+ 17.6 |
+
+
+ | Claude 3.5 Sonnet |
+ - |
+ 750 |
+ 67.9 |
+ 1920.0 |
+ 66.0 |
+ 788 |
+ 65.9 |
+ 61.6 |
+ 78.5 |
+ 80.2 |
+ - |
+ 95.2 |
+ 49.9 |
+ 13.8 |
+
+
+ | Gemini 1.5 Pro |
+ - |
+ - |
+ 64.4 |
+ 2110.6 |
+ 64.0 |
+ 754 |
+ 60.6 |
+ 57.7 |
+ 73.9 |
+ 79.1 |
+ 73.5 |
+ 86.5 |
+ 45.6 |
+ - |
+
+
+ | GPT-4o mini |
+ - |
+ 1088 |
+ 64.1 |
+ 2003.4 |
+ 66.9 |
+ 785 |
+ 60.0 |
+ 52.4 |
+ 76.0 |
+ 77.8 |
+ - |
+ - |
+ 46.1 |
+ 12.4 |
+
+
+ | GPT-4V |
+ - |
+ 1088 |
+ 63.5 |
+ 2070.2 |
+ 67.5 |
+ 656 |
+ 61.7 |
+ 54.7 |
+ 79.8 |
+ 78.6 |
+ 78.0 |
+ 87.2 |
+ 43.9 |
+ 14.2 |
+
+
+ | Step-1V |
+ - |
+ - |
+ 59.5 |
+ 2206.4 |
+ 63.3 |
+ 625 |
+ 49.9 |
+ 44.8 |
+ 78.0 |
+ 79.2 |
+ 71.6 |
+ - |
+ 48.4 |
+ - |
+
+
+ | Qwen-VL-Max |
+ - |
+ 784 |
+ 58.3 |
+ 2281.7 |
+ 61.8 |
+ 684 |
+ 52.0 |
+ 43.4 |
+ 74.6 |
+ 75.7 |
+ 79.5 |
+ 93.1 |
+ 41.2 |
+ 13.4 |
+
+
+ | Open-source |
+
+
+ | LLaVA-NeXT-Yi-34B |
+ 34B |
+ 157 |
+ 55.0 |
+ 2006.5 |
+ 50.7 |
+ 574 |
+ 48.8 |
+ 40.4 |
+ 77.8 |
+ 78.9 |
+ 69.3 |
+ - |
+ 34.8 |
+ 12.6 |
+
+
+ | Mini-Gemini-HD-34B |
+ 34B |
+ 157 |
+ - |
+ 2141 |
+ 59.3 |
+ 518 |
+ 48.0 |
+ 43.3 |
+ - |
+ 80.5 |
+ 74.1 |
+ 78.9 |
+ - |
+ - |
+
+
+ | Cambrian-34B |
+ 34B |
+ 1820 |
+ 58.3 |
+ 2049.9 |
+ 53.2 |
+ 591 |
+ 50.4 |
+ 50.3 |
+ 77.8 |
+ 79.5 |
+ 76.7 |
+ 75.5 |
+ 41.6 |
+ 14.7 |
+
+
+ | GLM-4V-9B |
+ 13B |
+ 784 |
+ 59.1 |
+ 2018.8 |
+ 58.0 |
+ 776 |
+ 46.9 |
+ 51.1 |
+ 67.9 |
+ 71.2 |
+ - |
+ - |
+ 45.0 |
+ - |
+
+
+ | InternVL2-8B |
+ 8B |
+ 706 |
+ 64.1 |
+ 2215.1 |
+ 54.3 |
+ 794 |
+ 51.2 |
+ 58.3 |
+ 79.4 |
+ 83.6 |
+ 77.4 |
+ 91.6 |
+ 45.0 |
+ 21.3 |
+
+
+ | MiniCPM-Llama3-V 2.5 |
+ 8B |
+ 1882 |
+ 58.8 |
+ 2024.6 |
+ 52.8 |
+ 725 |
+ 45.8 |
+ 54.3 |
+ 72.0 |
+ 78.4 |
+ 76.6 |
+ 84.8 |
+ 42.4 |
+ 10.3 |
+
+
+ | MiniCPM-V 2.6 |
+ 8B |
+ 2822 |
+ 65.2 |
+ 2348.4* |
+ 60.0 |
+ 852* |
+ 49.8* |
+ 60.6 |
+ 78.0 |
+ 82.1 |
+ 80.1 |
+ 90.8 |
+ 48.1* |
+ 8.2 |
+
+
+
+
+
+* We evaluate these benchmarks using chain-of-thought prompting.
+
++ Token Density: the number of pixels encoded into each visual token at maximum resolution, i.e., pixels at maximum resolution / number of visual tokens.
+
+Note: Token Density for proprietary models is estimated from the API pricing scheme.
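
The Token Density definition above can be made concrete with a little arithmetic. The sketch below only uses figures already quoted in this README (a ~1.8M-pixel image, e.g. 1344x1344, encoded into 640 visual tokens for MiniCPM-V 2.6); the helper function is illustrative and not part of the model code.

```python
# Token Density = pixels at maximum resolution / number of visual tokens.

def token_density(width: int, height: int, num_visual_tokens: int) -> float:
    """Pixels encoded per visual token at the given resolution."""
    return width * height / num_visual_tokens

# MiniCPM-V 2.6: a 1344x1344 image (~1.8M pixels) costs 640 visual tokens,
# which reproduces the 2822 Token Density reported in the table above.
print(int(token_density(1344, 1344, 640)))
```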
+
+
+
+
+Click to view detailed multi-image evaluation results on Mantis Eval, BLINK, Mathverse mv, Sciverse mv and MIRB.
+
+
+
+
+
+ | Model |
+ Size |
+ Mantis Eval |
+ BLINK val |
+ Mathverse mv |
+ Sciverse mv |
+ MIRB |
+
+
+
+
+ | Proprietary |
+
+
+ | GPT-4V |
+ - |
+ 62.7 |
+ 54.6 |
+ 60.3 |
+ 66.9 |
+ 53.1 |
+
+
+ | LLaVA-NeXT-Interleave-14B |
+ 14B |
+ 66.4 |
+ 52.6 |
+ 32.7 |
+ 30.2 |
+ - |
+
+
+ | Open-source |
+
+
+ | Emu2-Chat |
+ 37B |
+ 37.8 |
+ 36.2 |
+ - |
+ 27.2 |
+ - |
+
+
+ | CogVLM |
+ 17B |
+ 45.2 |
+ 41.1 |
+ - |
+ - |
+ - |
+
+
+ | VPG-C |
+ 7B |
+ 52.4 |
+ 43.1 |
+ 24.3 |
+ 23.1 |
+ - |
+
+
+ | VILA 8B |
+ 8B |
+ 51.2 |
+ 39.3 |
+ - |
+ 36.5 |
+ - |
+
+
+ | InternLM-XComposer-2.5 |
+ 8B |
+ 53.1* |
+ 48.9 |
+ 32.1* |
+ - |
+ 42.5 |
+
+
+ | InternVL2-8B |
+ 8B |
+ 59.0* |
+ 50.9 |
+ 30.5* |
+ 34.4* |
+ 56.9* |
+
+
+ | MiniCPM-V 2.6 |
+ 8B |
+ 69.1 |
+ 53.0 |
+ 84.9 |
+ 74.9 |
+ 53.8 |
+
+
+
+
+
+
+* Evaluation results of the officially released model weights.
+
+
+
+Click to view detailed video evaluation results on Video-MME and Video-ChatGPT.
+
+
+
+
+
+ | Model |
+ Size |
+ Video-MME |
+ Video-ChatGPT |
+
+
+ |
+ |
+ w/o subs |
+ w subs |
+ Correctness |
+ Detail |
+ Context |
+ Temporal |
+ Consistency |
+
+
+
+
+ | Proprietary |
+
+
+ | Claude 3.5 Sonnet |
+ - |
+ 60.0 |
+ - |
+ - |
+ - |
+ - |
+ - |
+ - |
+
+
+ | GPT-4V |
+ - |
+ 59.9 |
+ - |
+ - |
+ - |
+ - |
+ - |
+ - |
+
+
+ | Open-source |
+
+
+ | LLaVA-NeXT-7B |
+ 7B |
+ - |
+ - |
+ 3.39 |
+ 3.29 |
+ 3.92 |
+ 2.60 |
+ 3.12 |
+
+
+ | LLaVA-NeXT-34B |
+ 34B |
+ - |
+ - |
+ 3.29 |
+ 3.23 |
+ 3.83 |
+ 2.51 |
+ 3.47 |
+
+
+ | CogVLM2-Video |
+ 12B |
+ - |
+ - |
+ 3.49 |
+ 3.46 |
+ 3.23 |
+ 2.98 |
+ 3.64 |
+
+
+ | LongVA |
+ 7B |
+ 52.4 |
+ 54.3 |
+ 3.05 |
+ 3.09 |
+ 3.77 |
+ 2.44 |
+ 3.64 |
+
+
+ | InternVL2-8B |
+ 8B |
+ 54.0 |
+ 56.9 |
+ - |
+ - |
+ - |
+ - |
+ - |
+
+
+ | InternLM-XComposer-2.5 |
+ 8B |
+ 55.8 |
+ - |
+ - |
+ - |
+ - |
+ - |
+ - |
+
+
+ | LLaVA-NeXT-Video |
+ 32B |
+ 60.2 |
+ 63.0 |
+ 3.48 |
+ 3.37 |
+ 3.95 |
+ 2.64 |
+ 3.28 |
+
+
+ | MiniCPM-V 2.6 |
+ 8B |
+ 60.9 |
+ 63.6 |
+ 3.59 |
+ 3.28 |
+ 3.93 |
+ 2.73 |
+ 3.62 |
+
+
+
+
+
+
+
+
+Click to view detailed few-shot evaluation results on TextVQA, VizWiz, VQAv2 and OK-VQA.
+
+
+
+
+
+ | Model |
+ Size |
+ Shot |
+ TextVQA val |
+ VizWiz test-dev |
+ VQAv2 test-dev |
+ OK-VQA val |
+
+
+
+
+ | Flamingo |
+ 80B |
+ 0* |
+ 35.0 |
+ 31.6 |
+ 56.3 |
+ 40.6 |
+
+
+ | 4 |
+ 36.5 |
+ 39.6 |
+ 63.1 |
+ 57.4 |
+
+
+ | 8 |
+ 37.3 |
+ 44.8 |
+ 65.6 |
+ 57.5 |
+
+
+ | IDEFICS |
+ 80B |
+ 0* |
+ 30.9 |
+ 36.0 |
+ 60.0 |
+ 45.2 |
+
+
+ | 4 |
+ 34.3 |
+ 40.4 |
+ 63.6 |
+ 52.4 |
+
+
+ | 8 |
+ 35.7 |
+ 46.1 |
+ 64.8 |
+ 55.1 |
+
+
+ | OmniCorpus |
+ 7B |
+ 0* |
+ 43.0 |
+ 49.8 |
+ 63.2 |
+ 45.5 |
+
+
+ | 4 |
+ 45.4 |
+ 51.3 |
+ 64.5 |
+ 46.5 |
+
+
+ | 8 |
+ 45.6 |
+ 52.2 |
+ 64.7 |
+ 46.6 |
+
+
+ | Emu2 |
+ 37B |
+ 0 |
+ 26.4 |
+ 40.4 |
+ 33.5 |
+ 26.7 |
+
+
+ | 4 |
+ 48.2 |
+ 54.6 |
+ 67.0 |
+ 53.2 |
+
+
+ | 8 |
+ 49.3 |
+ 54.7 |
+ 67.8 |
+ 54.1 |
+
+
+ | MM1 |
+ 30B |
+ 0 |
+ 26.2 |
+ 40.4 |
+ 48.9 |
+ 26.7 |
+
+
+ | 8 |
+ 49.3 |
+ 54.7 |
+ 70.9 |
+ 54.1 |
+
+
+ | MiniCPM-V 2.6+ |
+ 8B |
+ 0 |
+ 43.9 |
+ 33.8 |
+ 45.4 |
+ 23.9 |
+
+
+ | 4 |
+ 63.6 |
+ 60.5 |
+ 65.5 |
+ 50.1 |
+
+
+ | 8 |
+ 64.6 |
+ 63.4 |
+ 68.2 |
+ 51.4 |
+
+
+
+
+
+
+* Zero-shot performance is evaluated in the Flamingo style, with zero image shots and two additional text shots.
+
++ We evaluate the pretrained checkpoint (ckpt) without supervised fine-tuning (SFT).
+
+
+### 典型示例
+
+
+
+ Click to view more examples.
+
+

+

+
+
+
+We deployed MiniCPM-V 2.6 on an iPad Pro and recorded the following demo videos.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
## MiniCPM-Llama3-V 2.5
+
+
+Click to view details of MiniCPM-Llama3-V 2.5
+
**MiniCPM-Llama3-V 2.5** is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters, achieving a substantial performance improvement over MiniCPM-V 2.0. Notable features of MiniCPM-Llama3-V 2.5 include:
- 🔥 **Leading performance.**
@@ -396,20 +1174,8 @@
-我们将 MiniCPM-Llama3-V 2.5 部署在小米 14 Pro 上,并录制了以下演示视频。
-
-
-
-
-
-
-
-
-
-
-
-
+
## MiniCPM-V 2.0
@@ -480,7 +1246,7 @@
### Online Demo
-欢迎试用 Hugging Face Spaces 上的 [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2) Online Demo。
+Try the Online Demo: [MiniCPM-V 2.6](http://120.92.209.146:8887/) | [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2).
### 本地 WebUI Demo
@@ -492,10 +1258,8 @@ pip install -r requirements.txt
```shell
# For NVIDIA GPUs, run:
-python web_demo_2.5.py --device cuda
+python web_demo_2.6.py --device cuda
-# 对于搭载 MPS 的 Mac(Apple 芯片或 AMD GPU),请运行:
-PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
```
@@ -528,8 +1292,11 @@ pip install -r requirements.txt
| Model | Device | Memory | Description | Download |
|:--------------|:-:|:----------:|:-------------------|:---------------:|
-| MiniCPM-Llama3-V 2.5| GPU | 19 GB | 最新版本,提供最佳的端侧多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
-| MiniCPM-Llama3-V 2.5 gguf| CPU | 5 GB | gguf 版本,更低的内存占用和更高的推理效率。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) |
+| MiniCPM-V 2.6 | GPU | 17 GB | The latest version, with the best end-side single-image, multi-image and video understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6) [🤖](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6) |
+| MiniCPM-V 2.6 gguf | CPU | 6 GB | The gguf version, with lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) [🤖](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6-gguf) |
+| MiniCPM-V 2.6 int4 | GPU | 7 GB | The int4 quantized version, with lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) [🤖](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6-int4) |
+| MiniCPM-Llama3-V 2.5 | GPU | 19 GB | Strong end-side multimodal understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [🤖](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 gguf | CPU | 6 GB | The gguf version, with lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) [🤖](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) |
| MiniCPM-Llama3-V 2.5 int4 | GPU | 8 GB | The int4 quantized version, with lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [🤖](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
| MiniCPM-V 2.0 | GPU | 8 GB | A lightweight version that balances computational cost and multimodal capability. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [🤖](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
| MiniCPM-V 1.0 | GPU | 7 GB | The lightest version, with the fastest inference speed. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [🤖](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
@@ -546,30 +1313,40 @@ pip install -r requirements.txt
```python
-from chat import MiniCPMVChat, img2base64
import torch
-import json
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
torch.manual_seed(0)
-chat_model = MiniCPMVChat('openbmb/MiniCPM-Llama3-V-2_5')
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
-im_64 = img2base64('./assets/airplane.jpeg')
+image = Image.open('./assets/airplane.jpeg').convert('RGB')
# First round chat
-msgs = [{"role": "user", "content": "Tell me the model of this aircraft."}]
+question = "Tell me the model of this aircraft."
+msgs = [{'role': 'user', 'content': [image, question]}]
-inputs = {"image": im_64, "question": json.dumps(msgs)}
-answer = chat_model.chat(inputs)
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
print(answer)
# Second round chat
# pass history context of multi-turn conversation
-msgs.append({"role": "assistant", "content": answer})
-msgs.append({"role": "user", "content": "Introduce something about Airbus A380."})
+msgs.append({"role": "assistant", "content": [answer]})
+msgs.append({"role": "user", "content": ["Introduce something about Airbus A380."]})
-inputs = {"image": im_64, "question": json.dumps(msgs)}
-answer = chat_model.chat(inputs)
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
print(answer)
```
@@ -581,6 +1358,126 @@ print(answer)
"The Airbus A380 is a double-deck, wide-body, four-engine jet airliner made by Airbus. It is the world's largest passenger airliner and is known for its long-haul capabilities. The aircraft was developed to improve efficiency and comfort for passengers traveling over long distances. It has two full-length passenger decks, which can accommodate more passengers than a typical single-aisle airplane. The A380 has been operated by airlines such as Lufthansa, Singapore Airlines, and Emirates, among others. It is widely recognized for its unique design and significant impact on the aviation industry."
```
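
The `msgs` structure above is shared by all the examples in this section: each turn is a dict with a `role` and a `content` list that may interleave PIL images and strings, and multi-turn history is carried simply by appending previous turns. A minimal standalone sketch of this bookkeeping (no model required; the image placeholder and answer string are hypothetical stand-ins):

```python
# Each turn is {'role': ..., 'content': [...]}; content lists may mix
# images and strings. History is passed by appending earlier turns.

msgs = [{'role': 'user', 'content': ['<image placeholder>', 'Tell me the model of this aircraft.']}]

# After the first reply, append it together with the follow-up question.
answer = 'Airbus A380'  # placeholder for the model's first-round output
msgs.append({'role': 'assistant', 'content': [answer]})
msgs.append({'role': 'user', 'content': ['Introduce something about Airbus A380.']})

roles = [m['role'] for m in msgs]
print(roles)  # ['user', 'assistant', 'user']
```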
+#### 多图理解
+
+ Click to view a Python example of multi-image understanding with MiniCPM-V 2.6
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+image1 = Image.open('image1.jpg').convert('RGB')
+image2 = Image.open('image2.jpg').convert('RGB')
+question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.'
+
+msgs = [{'role': 'user', 'content': [image1, image2, question]}]
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
+print(answer)
+```
+
+
+#### 少样本上下文学习
+
+
+ Click to view a Python example of few-shot in-context inference with MiniCPM-V 2.6
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+question = "production date"
+image1 = Image.open('example1.jpg').convert('RGB')
+answer1 = "2023.08.04"
+image2 = Image.open('example2.jpg').convert('RGB')
+answer2 = "2007.04.24"
+image_test = Image.open('test.jpg').convert('RGB')
+
+msgs = [
+ {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]},
+ {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]},
+ {'role': 'user', 'content': [image_test, question]}
+]
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer
+)
+print(answer)
+```
+
+
+#### 视频理解
+
+ Click to view a Python example of video understanding with MiniCPM-V 2.6
+
+```python
+import torch
+from PIL import Image
+from transformers import AutoModel, AutoTokenizer
+from decord import VideoReader, cpu # pip install decord
+
+model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager
+model = model.eval().cuda()
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True)
+
+MAX_NUM_FRAMES=64
+
+def encode_video(video_path):
+ def uniform_sample(l, n):
+ gap = len(l) / n
+ idxs = [int(i * gap + gap / 2) for i in range(n)]
+ return [l[i] for i in idxs]
+
+ vr = VideoReader(video_path, ctx=cpu(0))
+ sample_fps = round(vr.get_avg_fps() / 1) # FPS
+ frame_idx = [i for i in range(0, len(vr), sample_fps)]
+ if len(frame_idx) > MAX_NUM_FRAMES:
+ frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
+ frames = vr.get_batch(frame_idx).asnumpy()
+ frames = [Image.fromarray(v.astype('uint8')) for v in frames]
+ print('num frames:', len(frames))
+ return frames
+
+video_path="video_test.mp4"
+frames = encode_video(video_path)
+question = "Describe the video"
+msgs = [
+ {'role': 'user', 'content': frames + [question]},
+]
+
+# Set decode params for video
+params = {}
+params["use_image_id"] = False
+params["max_slice_nums"] = 2 # use 1 if CUDA OOM and the video resolution is larger than 448*448
+
+answer = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer,
+ **params
+)
+print(answer)
+```
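
The frame-sampling logic in `encode_video` above can be checked in isolation: frames are first taken at a stride of roughly one per second of video, then uniformly subsampled down to `MAX_NUM_FRAMES` for long clips. The standalone sketch below reproduces that logic without decord; the 30 fps / frame-count figures are just example inputs.

```python
# Standalone sketch of the frame sampling used in encode_video:
# sample ~1 frame per second, then cap the count at MAX_NUM_FRAMES
# by uniform subsampling.

MAX_NUM_FRAMES = 64

def uniform_sample(l, n):
    # pick n evenly spaced elements, centered within each interval
    gap = len(l) / n
    idxs = [int(i * gap + gap / 2) for i in range(n)]
    return [l[i] for i in idxs]

def sample_frame_indices(num_frames: int, avg_fps: float) -> list:
    stride = round(avg_fps)  # ~1 sampled frame per second of video
    idxs = list(range(0, num_frames, stride))
    if len(idxs) > MAX_NUM_FRAMES:
        idxs = uniform_sample(idxs, MAX_NUM_FRAMES)
    return idxs

# A 3-second clip at 30 fps keeps every 30th frame; a 10-minute clip
# (18000 frames) is capped to 64 evenly spaced frame indices.
print(sample_frame_indices(90, 30.0))      # [0, 30, 60]
print(len(sample_frame_indices(18000, 30.0)))  # 64
```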
+
+
### 多卡推理
You can run MiniCPM-Llama3-V 2.5 by distributing the model's layers across multiple low-VRAM GPUs (12 GB or 16 GB). See this [tutorial](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md) for details on loading the model and running inference with multiple low-VRAM GPUs.
@@ -643,11 +1540,14 @@ PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
### llama.cpp 部署
-MiniCPM-Llama3-V 2.5 现在支持llama.cpp啦! 用法请参考我们的fork [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv), 在手机上可以支持 6~8 token/s 的流畅推理(测试环境:Xiaomi 14 pro + Snapdragon 8 Gen 3)。
+MiniCPM-V 2.6 now supports llama.cpp! See [our fork of llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpmv-main/examples/llava/README-minicpmv2.6.md) for usage. It runs smoothly at 16~18 tokens/s on iPad (test environment: iPad Pro + M4).
+
+### ollama 部署
+MiniCPM-V 2.6 now supports ollama! See [our fork of ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) for usage. It runs smoothly at 16~18 tokens/s on iPad (test environment: iPad Pro + M4).
### vLLM 部署
-点击查看, vLLM 现已官方支持MiniCPM-V 2.0 和 MiniCPM-Llama3-V 2.5
+Click to view: vLLM now officially supports MiniCPM-V 2.0, MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.6
1. Clone the official vLLM repository:
```shell
@@ -658,11 +1558,11 @@ git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```
-3. 安装 timm 库:
+3. Install the timm library (optional; required for MiniCPM-V 2.0):
```shell
pip install timm==0.9.10
```
-4. 运行示例代码:(如果使用本地路径的模型,请确保模型代码已更新到Hugging Face上的最新版)
+4. Run the example code (note: if loading the model from a local path, make sure the model code is updated to the latest version on Hugging Face):
```shell
python examples/minicpmv_example.py
```
@@ -685,11 +1585,8 @@ python examples/minicpmv_example.py
Reference documents: [MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md)
-## 未来计划
-
-- [x] 支持 MiniCPM-V 系列模型微调
-- [ ] 实时多模态交互代码开源
-
+## FAQs
+Click to view the [FAQs](./docs/faqs.md)
## 模型协议
@@ -719,7 +1616,7 @@ python examples/minicpmv_example.py
[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
-## 🌟 Star History
+## 🌟 Star History
@@ -748,7 +1645,7 @@ python examples/minicpmv_example.py
-->
-## 引用
+## 引用
If you find our model/code/paper helpful, please give us a ⭐ and a citation 📝. Thank you!
diff --git a/assets/gif_cases/ai.gif b/assets/gif_cases/ai.gif
new file mode 100644
index 0000000..d511496
Binary files /dev/null and b/assets/gif_cases/ai.gif differ
diff --git a/assets/gif_cases/beer.gif b/assets/gif_cases/beer.gif
new file mode 100644
index 0000000..a295399
Binary files /dev/null and b/assets/gif_cases/beer.gif differ
diff --git a/assets/gif_cases/mb.gif b/assets/gif_cases/mb.gif
new file mode 100644
index 0000000..dce6274
Binary files /dev/null and b/assets/gif_cases/mb.gif differ
diff --git a/assets/gif_cases/rabbit.gif b/assets/gif_cases/rabbit.gif
new file mode 100644
index 0000000..d27d6a4
Binary files /dev/null and b/assets/gif_cases/rabbit.gif differ
diff --git a/assets/gif_cases/ticket.gif b/assets/gif_cases/ticket.gif
index b09d831..28ef137 100644
Binary files a/assets/gif_cases/ticket.gif and b/assets/gif_cases/ticket.gif differ
diff --git a/assets/gif_cases/wfh.gif b/assets/gif_cases/wfh.gif
new file mode 100644
index 0000000..7ce8440
Binary files /dev/null and b/assets/gif_cases/wfh.gif differ
diff --git a/assets/gif_cases/zoo.gif b/assets/gif_cases/zoo.gif
new file mode 100644
index 0000000..707b86e
Binary files /dev/null and b/assets/gif_cases/zoo.gif differ
diff --git a/assets/minicpmv2_6/ICL-Mem.png b/assets/minicpmv2_6/ICL-Mem.png
new file mode 100644
index 0000000..48453d5
Binary files /dev/null and b/assets/minicpmv2_6/ICL-Mem.png differ
diff --git a/assets/minicpmv2_6/ICL-elec.png b/assets/minicpmv2_6/ICL-elec.png
new file mode 100644
index 0000000..39c7dcd
Binary files /dev/null and b/assets/minicpmv2_6/ICL-elec.png differ
diff --git a/assets/minicpmv2_6/multi_img-bike.png b/assets/minicpmv2_6/multi_img-bike.png
new file mode 100644
index 0000000..0f89782
Binary files /dev/null and b/assets/minicpmv2_6/multi_img-bike.png differ
diff --git a/assets/minicpmv2_6/multi_img-code.png b/assets/minicpmv2_6/multi_img-code.png
new file mode 100644
index 0000000..e7790a6
Binary files /dev/null and b/assets/minicpmv2_6/multi_img-code.png differ
diff --git a/assets/minicpmv2_6/multi_img-menu.png b/assets/minicpmv2_6/multi_img-menu.png
new file mode 100644
index 0000000..90e78bf
Binary files /dev/null and b/assets/minicpmv2_6/multi_img-menu.png differ
diff --git a/assets/minicpmv2_6/multiling-medal.png b/assets/minicpmv2_6/multiling-medal.png
new file mode 100644
index 0000000..0aab601
Binary files /dev/null and b/assets/minicpmv2_6/multiling-medal.png differ
diff --git a/assets/minicpmv2_6/multiling-olympic.png b/assets/minicpmv2_6/multiling-olympic.png
new file mode 100644
index 0000000..0f4c594
Binary files /dev/null and b/assets/minicpmv2_6/multiling-olympic.png differ
diff --git a/assets/radar_final.png b/assets/radar_final.png
new file mode 100644
index 0000000..ac60085
Binary files /dev/null and b/assets/radar_final.png differ
diff --git a/chat.py b/chat.py
index 8dbf8ef..8d36e56 100644
--- a/chat.py
+++ b/chat.py
@@ -183,13 +183,87 @@ class MiniCPMV2_5:
)
return answer
+class MiniCPMV2_6:
+ def __init__(self, model_path, multi_gpus=False) -> None:
+
+ print('torch_version:', torch.__version__)
+ if multi_gpus: # inference on multi-gpus
+ from accelerate import load_checkpoint_and_dispatch, init_empty_weights, infer_auto_device_map
+ with init_empty_weights():
+ model = AutoModel.from_pretrained(model_path, trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16)
+
+ device_map = infer_auto_device_map(model, max_memory={0: "10GB", 1: "10GB"},
+ no_split_module_classes=['SiglipVisionTransformer', 'Qwen2DecoderLayer'])
+ device_id = device_map["llm.model.embed_tokens"]
+ device_map["llm.lm_head"] = device_id # first and last layer of llm should be in the same device
+ device_map["vpm"] = device_id
+ device_map["resampler"] = device_id
+ device_id2 = device_map["llm.model.layers.26"]
+ device_map["llm.model.layers.8"] = device_id2
+ device_map["llm.model.layers.9"] = device_id2
+ device_map["llm.model.layers.10"] = device_id2
+ device_map["llm.model.layers.11"] = device_id2
+ device_map["llm.model.layers.12"] = device_id2
+ device_map["llm.model.layers.13"] = device_id2
+ device_map["llm.model.layers.14"] = device_id2
+ device_map["llm.model.layers.15"] = device_id2
+ device_map["llm.model.layers.16"] = device_id2
+ print(device_map)
+
+ self.model = load_checkpoint_and_dispatch(model, model_path, dtype=torch.bfloat16, device_map=device_map)
+ self.model.eval()
+ else:
+ self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True,
+ attn_implementation='sdpa', torch_dtype=torch.bfloat16)
+ self.model.eval().cuda()
+
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+ def chat(self, input):
+ image = None
+ if "image" in input and len(input["image"]) > 10: # legacy API
+ try:
+ image = Image.open(io.BytesIO(base64.b64decode(input['image']))).convert('RGB')
+ except Exception as e:
+ return "Image decode error"
+
+ msgs = json.loads(input["question"])
+
+ for msg in msgs:
+ contents = msg.pop('content') # support str or List[Dict]
+ if isinstance(contents, str):
+ contents = [contents]
+
+ new_cnts = []
+ for c in contents:
+ if isinstance(c, dict):
+ if c['type'] == 'text':
+ c = c['pairs']
+ elif c['type'] == 'image':
+ c = Image.open(io.BytesIO(base64.b64decode(c["pairs"]))).convert('RGB')
+ else:
+ raise ValueError("content type only support text and image.")
+ new_cnts.append(c)
+ msg['content'] = new_cnts
+ print(f'msgs: {str(msgs)}')
+
+ answer = self.model.chat(
+ image=image,
+ msgs=msgs,
+ tokenizer=self.tokenizer,
+ )
+ return answer
+
class MiniCPMVChat:
- def __init__(self, model_path) -> None:
+ def __init__(self, model_path, multi_gpus=False) -> None:
if '12B' in model_path:
self.model = OmniLMM12B(model_path)
elif 'MiniCPM-Llama3-V' in model_path:
self.model = MiniCPMV2_5(model_path)
+ elif 'MiniCPM-V-2_6' in model_path:
+ self.model = MiniCPMV2_6(model_path, multi_gpus)
else:
self.model = MiniCPMV(model_path)
diff --git a/docs/faqs.md b/docs/faqs.md
new file mode 100644
index 0000000..4cae555
--- /dev/null
+++ b/docs/faqs.md
@@ -0,0 +1,30 @@
+### FAQs
+
+
+Q: How do I choose between sampling and beam search for inference?
+
+The quality of results from beam search and sampling can vary across scenarios. You can choose a decoding strategy based on the following:
+
+If you have the following needs, consider using sampling decoding:
+
+1. You require faster inference speed.
+2. You wish for a streaming generation approach.
+3. Your task necessitates some open-ended responses.
+
+If your task is about providing deterministic answers, you might want to experiment with beam search to see if it can achieve better outcomes.
+
+
+
+
+Q: How do I ensure that the model generates sufficiently long results?
+
+We've observed that during multi-language inference on MiniCPM-V 2.6, the generation sometimes ends prematurely. You can improve the results by passing a `min_new_tokens` parameter.
+```python
+res = model.chat(
+ image=None,
+ msgs=msgs,
+ tokenizer=tokenizer,
+ min_new_tokens=100
+)
+```
+
diff --git a/finetune/dataset.py b/finetune/dataset.py
index 92807c3..1904b3f 100644
--- a/finetune/dataset.py
+++ b/finetune/dataset.py
@@ -105,7 +105,7 @@ def data_collator(examples, padding_value=0, max_length=2048):
}
-def conversation_to_ids(conversation, tokenizer, llm_type=None):
+def conversation_to_ids(conversation, tokenizer, llm_type=None, new_schema=False):
"""
for single image multi-turn conversation
conversation: [{'role': 'user', 'content': 'Describe this image'},
@@ -115,6 +115,10 @@ def conversation_to_ids(conversation, tokenizer, llm_type=None):
input_ids, context, raw_msg = conversation_to_ids_llama3(
conversation, tokenizer
)
+ elif llm_type == "qwen2":
+ input_ids, context, raw_msg = conversation_to_ids_qwen2(
+ conversation, tokenizer
+ )
else:
input_ids, context, raw_msg = conversation_to_ids_minicpm(
conversation, tokenizer
@@ -125,6 +129,7 @@ def conversation_to_ids(conversation, tokenizer, llm_type=None):
# build target
target = torch.full_like(ids, -100, dtype=torch.int32)
+
for i in range(1, len(ids)):
if context[i] == 0:
target[i - 1] = ids[i]
@@ -133,14 +138,21 @@ def conversation_to_ids(conversation, tokenizer, llm_type=None):
target[i - 1] = tokenizer.eot_id
else:
target[i - 1] = tokenizer.eos_id
-
+
# build image bound
- image_start_tokens = torch.where(ids == tokenizer.im_start_id)[0]
- image_start_tokens += 1
- image_end_tokens = torch.where(ids == tokenizer.im_end_id)[0]
+ if new_schema:
+ start_cond = (ids == tokenizer.im_start_id) | (ids == tokenizer.slice_start_id)
+ end_cond = (ids == tokenizer.im_end_id) | (ids == tokenizer.slice_end_id)
+ image_start_tokens = torch.where(start_cond)[0]
+ image_start_tokens += 1
+ image_end_tokens = torch.where(end_cond)[0]
+ else:
+ image_start_tokens = torch.where(ids == tokenizer.im_start_id)[0]
+ image_start_tokens += 1
+ image_end_tokens = torch.where(ids == tokenizer.im_end_id)[0]
if len(image_start_tokens) != len(image_end_tokens):
print("image start token != image end tokens")
-
+
if len(image_start_tokens) > 0:
image_bound = torch.hstack(
[image_start_tokens.unsqueeze(-1), image_end_tokens.unsqueeze(-1)]
@@ -230,6 +242,46 @@ def conversation_to_ids_llama3(conversation, tokenizer):
return input_ids, context, raw_msg
+def conversation_to_ids_qwen2(conversation, tokenizer):
+ raw_msg = ""
+ chat = []
+ context = []
+ for idx, msg in enumerate(conversation):
+ role = msg["role"]
+ message = msg["content"]
+ assert role in ["user", "assistant"]
+ if role == "user":
+ prefix = "user"
+ else:
+ prefix = "assistant"
+ chat.append({"role":prefix, "content":message})
+ raw_msg += prefix + message
+ assert set([i['role'] for i in chat]) & set(['assistant'])
+
+ ret = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=False)
+ input_ids = tokenizer.apply_chat_template(chat, tokenize=True, add_generation_prompt=False)
+ input_ids = np.array(input_ids)
+
+ start_idxs = np.where(input_ids == tokenizer.convert_tokens_to_ids('<|im_start|>'))[0]
+ assistant_idxs = np.where(input_ids == tokenizer.convert_tokens_to_ids('assistant'))[0]
+ end_idxs = np.where(input_ids == tokenizer.convert_tokens_to_ids('<|im_end|>'))[0]
+
+ context = np.ones_like(input_ids, dtype=np.int8)
+
+ for assistant_idx in assistant_idxs:
+ if assistant_idx-1 in set(start_idxs):
+ st = assistant_idx + 1
+ for end_idx in end_idxs:
+ if end_idx > st:
+ context[st: end_idx + 1] = 0
+ break
+
+ input_ids = np.hstack(input_ids)
+ context = np.hstack(context)
+ return input_ids, context, raw_msg
+
+
+
def preprocess(
image,
conversation,
@@ -256,8 +308,14 @@ def preprocess(
default_image_placeholder = (
tokenizer.im_start + tokenizer.unk_token * query_nums + tokenizer.im_end
)
+ new_schema = False
+ use_image_id = False
+ if llm_type=='qwen2':
+ new_schema = True
+ use_image_id = True
if slice_config:
images = []
+ image_id_cnt = 0
source_image, patches, best_grid = slice_image(
image,
slice_config["max_slice_nums"],
@@ -270,9 +328,11 @@ def preprocess(
for i in range(len(patches)):
for j in range(len(patches[0])):
images.append(patches[i][j])
-
+ if use_image_id:
+ image_placeholder = f'{tokenizer.im_id_start}{idx}{tokenizer.im_id_end}' + image_placeholder
+ image_id_cnt += 1
image_placeholder += get_grid_placeholder(
- tokenizer, best_grid, query_nums)
+ tokenizer, best_grid, query_nums, new_schema = new_schema)
images = [transform(i) for i in images]
else:
images = [transform(image)]
@@ -286,7 +346,7 @@ def preprocess(
image_placeholder + "\n" + conversation[0]["content"]
)
- input_dict = conversation_to_ids(conversation, tokenizer, llm_type)
+ input_dict = conversation_to_ids(conversation, tokenizer, llm_type, new_schema)
if batch_vision:
tgt_sizes = []
@@ -424,7 +484,7 @@ def split_to_patches(image, grid):
return patches
-def get_grid_placeholder(tokenizer, grid, query_num):
+def get_grid_placeholder(tokenizer, grid, query_num, new_schema=False):
image_placeholder = (
tokenizer.im_start + tokenizer.unk_token * query_num + tokenizer.im_end
)
@@ -437,7 +497,10 @@ def get_grid_placeholder(tokenizer, grid, query_num):
for j in range(cols):
lines.append(image_placeholder)
slices.append("".join(lines))
- slice_placeholder = tokenizer.slice_start + \
+ if new_schema:
+ slice_placeholder = '\n'.join(slices)
+ else:
+ slice_placeholder = tokenizer.slice_start + \
"\n".join(slices) + tokenizer.slice_end
return slice_placeholder
@@ -455,4 +518,4 @@ def reshape_by_patch(image_tensor, patch_size):
patches = patches.reshape(image_tensor.size(0), patch_size, patch_size, -1)
patches = patches.permute(0, 1, 3, 2).reshape(
image_tensor.size(0), patch_size, -1)
- return patches
+ return patches
\ No newline at end of file
diff --git a/finetune/finetune.py b/finetune/finetune.py
index 1c42a17..04cf2eb 100644
--- a/finetune/finetune.py
+++ b/finetune/finetune.py
@@ -6,6 +6,8 @@ from dataclasses import dataclass, field
from functools import partial
from typing import Dict, List, Optional, Union, Literal, Tuple
from types import MethodType
+from torchvision import transforms
+
import torch
import transformers
from accelerate.utils import DistributedType
@@ -130,6 +132,18 @@ def make_supervised_data_module(
)
+def build_transform():
+ IMAGENET_INCEPTION_MEAN = (0.5, 0.5, 0.5) # timm.data.IMAGENET_INCEPTION_MEAN
+ IMAGENET_INCEPTION_STD = (0.5, 0.5, 0.5) # timm.data.IMAGENET_INCEPTION_STD
+ return transforms.Compose(
+ [
+ transforms.ToTensor(),
+ transforms.Normalize(
+ mean=IMAGENET_INCEPTION_MEAN, std=IMAGENET_INCEPTION_STD
+ ),
+ ]
+ )
+
def get_parameter_number(model):
trainable_params, all_param = 0, 0
for param in model.parameters():
@@ -248,10 +262,11 @@ def train():
else:
batch_vision = False
+ transform_func = build_transform()
data_module = make_supervised_data_module(
tokenizer=tokenizer,
data_args=data_args,
- transform=model.transform,
+ transform=transform_func,
data_collator=data_collator,
slice_config=slice_config,
llm_type=llm_type,
diff --git a/finetune/finetune_ds.sh b/finetune/finetune_ds.sh
index 5dc3a3e..92fd577 100644
--- a/finetune/finetune_ds.sh
+++ b/finetune/finetune_ds.sh
@@ -6,12 +6,15 @@ NODE_RANK=0
MASTER_ADDR=localhost
MASTER_PORT=6001
-MODEL="openbmb/MiniCPM-Llama3-V-2_5" # or openbmb/MiniCPM-V-2
+MODEL="openbmb/MiniCPM-V-2_6"
+# or openbmb/MiniCPM-V-2, openbmb/MiniCPM-Llama3-V-2_5
# ATTENTION: specify the path to your training data, which should be a json file consisting of a list of conversations.
# See the section for finetuning in README for more information.
DATA="path/to/training_data"
EVAL_DATA="path/to/test_data"
-LLM_TYPE="llama3" # if use openbmb/MiniCPM-V-2, please set LLM_TYPE=minicpm
+LLM_TYPE="qwen2" # if using openbmb/MiniCPM-V-2, set LLM_TYPE=minicpm; if using openbmb/MiniCPM-Llama3-V-2_5, set LLM_TYPE=llama3
+
+
DISTRIBUTED_ARGS="
--nproc_per_node $GPUS_PER_NODE \
@@ -28,10 +31,10 @@ torchrun $DISTRIBUTED_ARGS finetune.py \
--remove_unused_columns false \
--label_names "labels" \
--prediction_loss_only false \
- --bf16 false \
- --bf16_full_eval false \
- --fp16 true \
- --fp16_full_eval true \
+ --bf16 true \
+ --bf16_full_eval true \
+ --fp16 false \
+ --fp16_full_eval false \
--do_train \
--do_eval \
--tune_vision true \
@@ -40,8 +43,8 @@ torchrun $DISTRIBUTED_ARGS finetune.py \
--max_slice_nums 9 \
--max_steps 10000 \
--eval_steps 1000 \
- --output_dir output/output_minicpmv2 \
- --logging_dir output/output_minicpmv2 \
+ --output_dir output/output_minicpmv26 \
+ --logging_dir output/output_minicpmv26 \
--logging_strategy "steps" \
--per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
diff --git a/finetune/finetune_lora.sh b/finetune/finetune_lora.sh
index 96f1c09..2c12525 100644
--- a/finetune/finetune_lora.sh
+++ b/finetune/finetune_lora.sh
@@ -6,13 +6,14 @@ NODE_RANK=0
MASTER_ADDR=localhost
MASTER_PORT=6001
-MODEL="openbmb/MiniCPM-Llama3-V-2_5" # or openbmb/MiniCPM-V-2
+MODEL="openbmb/MiniCPM-V-2_6" # or openbmb/MiniCPM-V-2, openbmb/MiniCPM-Llama3-V-2_5
# ATTENTION: specify the path to your training data, which should be a json file consisting of a list of conversations.
# See the section for finetuning in README for more information.
DATA="path/to/training_data"
EVAL_DATA="path/to/test_data"
-LLM_TYPE="llama3" # if use openbmb/MiniCPM-V-2, please set LLM_TYPE=minicpm
-
+LLM_TYPE="qwen2"
+# if using openbmb/MiniCPM-V-2, set LLM_TYPE=minicpm
+# if using openbmb/MiniCPM-Llama3-V-2_5, set LLM_TYPE=llama3
DISTRIBUTED_ARGS="
--nproc_per_node $GPUS_PER_NODE \
--nnodes $NNODES \
@@ -42,12 +43,12 @@ torchrun $DISTRIBUTED_ARGS finetune.py \
--max_slice_nums 9 \
--max_steps 10000 \
--eval_steps 1000 \
- --output_dir output/output_minicpmv2_lora \
- --logging_dir output/output_minicpmv2_lora \
+ --output_dir output/output_lora \
+ --logging_dir output/output_lora \
--logging_strategy "steps" \
- --per_device_train_batch_size 2 \
+ --per_device_train_batch_size 1 \
--per_device_eval_batch_size 1 \
- --gradient_accumulation_steps 8 \
+ --gradient_accumulation_steps 1 \
--evaluation_strategy "steps" \
--save_strategy "steps" \
--save_steps 1000 \
diff --git a/finetune/readme.md b/finetune/readme.md
index 4fe6026..855b583 100644
--- a/finetune/readme.md
+++ b/finetune/readme.md
@@ -1,6 +1,76 @@
# MiniCPM-V Finetuning
+We offer the official scripts for easy finetuning of the pretrained **MiniCPM-V-2_6**, **MiniCPM-Llama3-V 2.5** and **MiniCPM-V 2.0** on downstream tasks. Our finetune scripts use transformers Trainer and DeepSpeed by default.
+
+### Data preparation
+
+To prepare your finetuning data, you should format each sample as a dictionary consisting of an id, an image path (or a list of image paths), and a list of conversations, then save the samples in JSON files.
+
+For a vision-language example with an image, you are required to provide **\<image\>** to define the position at which the image embeddings are inserted. If you don't provide \<image\>, the image will be placed at the front of the conversation.
+
+
+
+ vision-language example (vl_finetune_data.json) with 1 sample.
+
+
+```json
+[
+    {
+        "id": "0",
+        "image": "path/to/image_0.jpg",
+        "conversations": [
+            {
+                "role": "user",
+                "content": "<image>\nHow many desserts are on the white plate?"
+            },
+            {
+                "role": "assistant",
+                "content": "There are three desserts on the white plate."
+            },
+            {
+                "role": "user",
+                "content": "What type of desserts are they?"
+            },
+            {
+                "role": "assistant",
+                "content": "The desserts are cakes with bananas and pecans on top. They share similarities with donuts, but the presence of bananas and pecans differentiates them."
+            },
+            {
+                "role": "user",
+                "content": "What is the setting of the image?"
+            },
+            {
+                "role": "assistant",
+                "content": "The image is set on a table top with a plate containing the three desserts."
+            }
+        ]
+    }
+]
+```
+
+
+
+### Full-parameter finetuning
+
+Full-parameter finetuning updates all parameters of the LLM during training. Please specify the correct MODEL path and DATA path in the shell scripts.
+
+```shell
+MODEL="openbmb/MiniCPM-V-2_6" # or openbmb/MiniCPM-Llama3-V-2_5, openbmb/MiniCPM-V-2
+DATA="path/to/training_data" # json file
+EVAL_DATA="path/to/test_data" # json file
+```
+
+To launch your training, run the following script:
+
+```shell
+sh finetune_ds.sh
+```
+
+#### Customizing Hyperparameters
+To tailor the training process according to your specific requirements, you can adjust various hyperparameters. For comprehensive documentation on available hyperparameters and their functionalities, you can refer to the [official Transformers documentation](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments). Experimentation and fine-tuning of these parameters are essential for achieving optimal model performance tailored to your specific task and dataset.
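Before launching training, it can save time to sanity-check that a data file matches the structure described above. A minimal sketch (field names follow the example; the inline sample stands in for your JSON file):

```python
import json

# a one-sample file in the format described above (paths are placeholders)
sample = json.loads("""
[
    {
        "id": "0",
        "image": "path/to/image_0.jpg",
        "conversations": [
            {"role": "user", "content": "<image>\\nHow many desserts are on the white plate?"},
            {"role": "assistant", "content": "There are three desserts on the white plate."}
        ]
    }
]
""")

def validate(data):
    for item in data:
        # every sample needs an id, an image path and a conversation list
        assert {"id", "image", "conversations"} <= set(item)
        roles = [turn["role"] for turn in item["conversations"]]
        # turns must alternate user/assistant, starting with the user
        assert roles == ["user", "assistant"] * (len(roles) // 2)
    return True

ok = validate(sample)
```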
We offer the official scripts for easy finetuning of the pretrained **MiniCPM-Llama3-V 2.5** and **MiniCPM-V 2.0** on downstream tasks. Our finetune scripts use transformers Trainer and DeepSpeed by default.
### Data preparation
@@ -55,10 +125,10 @@ For the vision-language example with image, you are required to provide **\<image\>**
+ res = re.sub(r'(<box>.*</box>)', '', answer)
+ res = res.replace('[', '')
+ res = res.replace(']', '')
+ res = res.replace('<ref>', '')
+ answer = res.replace('</ref>', '')
+ print('answer:', answer)
+ return 0, answer, None, None
+ except Exception as e:
+ print(e)
+ traceback.print_exc()
+ return -1, ERROR_MSG, None, None
+
+
+def encode_image(image):
+ if not isinstance(image, Image.Image):
+ if hasattr(image, 'path'):
+ image = Image.open(image.path).convert("RGB")
+ else:
+ image = Image.open(image.file.path).convert("RGB")
+ # resize to max_size
+ max_size = 448*16
+ if max(image.size) > max_size:
+ w,h = image.size
+ if w > h:
+ new_w = max_size
+ new_h = int(h * max_size / w)
+ else:
+ new_h = max_size
+ new_w = int(w * max_size / h)
+ image = image.resize((new_w, new_h), resample=Image.BICUBIC)
+ return image
+ ## save by BytesIO and convert to base64
+ #buffered = io.BytesIO()
+ #image.save(buffered, format="png")
+ #im_b64 = base64.b64encode(buffered.getvalue()).decode()
+ #return {"type": "image", "pairs": im_b64}
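`encode_image` above preserves the aspect ratio while capping the longer side at 448 * 16 = 7168 pixels. The size arithmetic can be checked without PIL (the input dimensions below are made up for illustration):

```python
MAX_SIZE = 448 * 16  # same cap as encode_image above

def target_size(w, h, max_size=MAX_SIZE):
    # only shrink when the longer side exceeds the cap
    if max(w, h) <= max_size:
        return w, h
    if w > h:
        return max_size, int(h * max_size / w)
    return int(w * max_size / h), max_size

small = target_size(1024, 768)    # below the cap, untouched
wide = target_size(10000, 5000)   # width capped, height scaled
tall = target_size(4000, 20000)   # height capped, width scaled
```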
+
+
+def encode_video(video):
+ def uniform_sample(l, n):
+ gap = len(l) / n
+ idxs = [int(i * gap + gap / 2) for i in range(n)]
+ return [l[i] for i in idxs]
+
+ if hasattr(video, 'path'):
+ vr = VideoReader(video.path, ctx=cpu(0))
+ else:
+ vr = VideoReader(video.file.path, ctx=cpu(0))
+ sample_fps = round(vr.get_avg_fps() / 1) # FPS
+ frame_idx = [i for i in range(0, len(vr), sample_fps)]
+ if len(frame_idx)>MAX_NUM_FRAMES:
+ frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)
+ video = vr.get_batch(frame_idx).asnumpy()
+ video = [Image.fromarray(v.astype('uint8')) for v in video]
+ video = [encode_image(v) for v in video]
+ print('video frames:', len(video))
+ return video
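`encode_video` above samples roughly one frame per second, then falls back to `uniform_sample` when that still exceeds `MAX_NUM_FRAMES` (a constant defined elsewhere in the demo). The sampler picks the midpoint of each of `n` equal buckets; standalone, with illustrative numbers:

```python
def uniform_sample(l, n):
    # split l into n equal buckets and pick the middle element of each
    gap = len(l) / n
    idxs = [int(i * gap + gap / 2) for i in range(n)]
    return [l[i] for i in idxs]

frame_idx = list(range(100))  # e.g. 100 one-per-second frame indices
sampled = uniform_sample(frame_idx, 10)
```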
+
+
+def check_mm_type(mm_file):
+ if hasattr(mm_file, 'path'):
+ path = mm_file.path
+ else:
+ path = mm_file.file.path
+ if is_image(path):
+ return "image"
+ if is_video(path):
+ return "video"
+ return None
+
+
+def encode_mm_file(mm_file):
+ if check_mm_type(mm_file) == 'image':
+ return [encode_image(mm_file)]
+ if check_mm_type(mm_file) == 'video':
+ return encode_video(mm_file)
+ return None
+
+def make_text(text):
+ #return {"type": "text", "pairs": text} # # For remote call
+ return text
+
+def encode_message(_question):
+ files = _question.files
+ question = _question.text
+ pattern = r"\[mm_media\]\d+\[/mm_media\]"
+ matches = re.split(pattern, question)
+ message = []
+ if len(matches) != len(files) + 1:
+ gr.Warning("Number of images does not match the number of placeholders in the text; please refresh the page to restart!")
+ assert len(matches) == len(files) + 1
+
+ text = matches[0].strip()
+ if text:
+ message.append(make_text(text))
+ for i in range(len(files)):
+ message += encode_mm_file(files[i])
+ text = matches[i + 1].strip()
+ if text:
+ message.append(make_text(text))
+ return message
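`encode_message` above splits the typed text on `[mm_media]N[/mm_media]` placeholders, so k placeholders always yield k + 1 text segments (possibly empty) with the media items interleaved between them. A minimal demonstration of the split:

```python
import re

pattern = r"\[mm_media\]\d+\[/mm_media\]"  # same pattern as encode_message
question = "Compare [mm_media]1[/mm_media] with [mm_media]2[/mm_media] please."
segments = re.split(pattern, question)
# two placeholders -> three text segments surrounding them
```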
+
+
+def check_has_videos(_question):
+ images_cnt = 0
+ videos_cnt = 0
+ for file in _question.files:
+ if check_mm_type(file) == "image":
+ images_cnt += 1
+ else:
+ videos_cnt += 1
+ return images_cnt, videos_cnt
+
+
+def count_video_frames(_context):
+ num_frames = 0
+ for message in _context:
+ for item in message["content"]:
+ #if item["type"] == "image": # For remote call
+ if isinstance(item, Image.Image):
+ num_frames += 1
+ return num_frames
+
+
+def respond(_question, _chat_bot, _app_cfg, params_form):
+ _context = _app_cfg['ctx'].copy()
+ _context.append({'role': 'user', 'content': encode_message(_question)})
+
+ images_cnt = _app_cfg['images_cnt']
+ videos_cnt = _app_cfg['videos_cnt']
+ files_cnts = check_has_videos(_question)
+ if files_cnts[1] + videos_cnt > 1 or (files_cnts[1] + videos_cnt == 1 and files_cnts[0] + images_cnt > 0):
+ gr.Warning("Only single video file input is supported right now!")
+ return _question, _chat_bot, _app_cfg
+
+ if params_form == 'Beam Search':
+ params = {
+ 'sampling': False,
+ 'num_beams': 3,
+ 'repetition_penalty': 1.2,
+ "max_new_tokens": 2048
+ }
+ else:
+ params = {
+ 'sampling': True,
+ 'top_p': 0.8,
+ 'top_k': 100,
+ 'temperature': 0.7,
+ 'repetition_penalty': 1.05,
+ "max_new_tokens": 2048
+ }
+
+ if files_cnts[1] + videos_cnt > 0:
+ params["max_inp_length"] = 4352 # 4096+256
+ params["use_image_id"] = False
+ params["max_slice_nums"] = 1 if count_video_frames(_context) > 16 else 2
+
+ code, _answer, _, sts = chat("", _context, None, params)
+
+ images_cnt += files_cnts[0]
+ videos_cnt += files_cnts[1]
+ _context.append({"role": "assistant", "content": [make_text(_answer)]})
+ _chat_bot.append((_question, _answer))
+ if code == 0:
+ _app_cfg['ctx']=_context
+ _app_cfg['sts']=sts
+ _app_cfg['images_cnt'] = images_cnt
+ _app_cfg['videos_cnt'] = videos_cnt
+
+ upload_image_disabled = videos_cnt > 0
+ upload_video_disabled = videos_cnt > 0 or images_cnt > 0
+ return create_multimodal_input(upload_image_disabled, upload_video_disabled), _chat_bot, _app_cfg
+
+
+def fewshot_add_demonstration(_image, _user_message, _assistant_message, _chat_bot, _app_cfg):
+ ctx = _app_cfg["ctx"]
+ message_item = []
+ if _image is not None:
+ image = Image.open(_image).convert("RGB")
+ ctx.append({"role": "user", "content": [encode_image(image), make_text(_user_message)]})
+ message_item.append({"text": "[mm_media]1[/mm_media]" + _user_message, "files": [_image]})
+ else:
+ if _user_message:
+ ctx.append({"role": "user", "content": [make_text(_user_message)]})
+ message_item.append({"text": _user_message, "files": []})
+ else:
+ message_item.append(None)
+ if _assistant_message:
+ ctx.append({"role": "assistant", "content": [make_text(_assistant_message)]})
+ message_item.append({"text": _assistant_message, "files": []})
+ else:
+ message_item.append(None)
+
+ _chat_bot.append(message_item)
+ return None, "", "", _chat_bot, _app_cfg
+
+
+def fewshot_respond(_image, _user_message, _chat_bot, _app_cfg, params_form):
+ user_message_contents = []
+ _context = _app_cfg["ctx"].copy()
+ if _image:
+ image = Image.open(_image).convert("RGB")
+ user_message_contents += [encode_image(image)]
+ if _user_message:
+ user_message_contents += [make_text(_user_message)]
+ if user_message_contents:
+ _context.append({"role": "user", "content": user_message_contents})
+
+ if params_form == 'Beam Search':
+ params = {
+ 'sampling': False,
+ 'num_beams': 3,
+ 'repetition_penalty': 1.2,
+ "max_new_tokens": 2048
+ }
+ else:
+ params = {
+ 'sampling': True,
+ 'top_p': 0.8,
+ 'top_k': 100,
+ 'temperature': 0.7,
+ 'repetition_penalty': 1.05,
+ "max_new_tokens": 2048
+ }
+
+ code, _answer, _, sts = chat("", _context, None, params)
+
+ _context.append({"role": "assistant", "content": [make_text(_answer)]})
+
+ if _image:
+ _chat_bot.append([
+ {"text": "[mm_media]1[/mm_media]" + _user_message, "files": [_image]},
+ {"text": _answer, "files": []}
+ ])
+ else:
+ _chat_bot.append([
+ {"text": _user_message, "files": []},
+ {"text": _answer, "files": []}
+ ])
+ if code == 0:
+ _app_cfg['ctx']=_context
+ _app_cfg['sts']=sts
+ return None, '', '', _chat_bot, _app_cfg
+
+
+def regenerate_button_clicked(_question, _image, _user_message, _assistant_message, _chat_bot, _app_cfg, params_form):
+ if len(_chat_bot) <= 1 or not _chat_bot[-1][1]:
+ gr.Warning('No question for regeneration.')
+ return '', _image, _user_message, _assistant_message, _chat_bot, _app_cfg
+ if _app_cfg["chat_type"] == "Chat":
+ images_cnt = _app_cfg['images_cnt']
+ videos_cnt = _app_cfg['videos_cnt']
+ _question = _chat_bot[-1][0]
+ _chat_bot = _chat_bot[:-1]
+ _app_cfg['ctx'] = _app_cfg['ctx'][:-2]
+ files_cnts = check_has_videos(_question)
+ images_cnt -= files_cnts[0]
+ videos_cnt -= files_cnts[1]
+ _app_cfg['images_cnt'] = images_cnt
+ _app_cfg['videos_cnt'] = videos_cnt
+ upload_image_disabled = videos_cnt > 0
+ upload_video_disabled = videos_cnt > 0 or images_cnt > 0
+ _question, _chat_bot, _app_cfg = respond(_question, _chat_bot, _app_cfg, params_form)
+ return _question, _image, _user_message, _assistant_message, _chat_bot, _app_cfg
+ else:
+ last_message = _chat_bot[-1][0]
+ last_image = None
+ last_user_message = ''
+ if last_message.text:
+ last_user_message = last_message.text
+ if last_message.files:
+ last_image = last_message.files[0].file.path
+ _chat_bot = _chat_bot[:-1]
+ _app_cfg['ctx'] = _app_cfg['ctx'][:-2]
+ _image, _user_message, _assistant_message, _chat_bot, _app_cfg = fewshot_respond(last_image, last_user_message, _chat_bot, _app_cfg, params_form)
+ return _question, _image, _user_message, _assistant_message, _chat_bot, _app_cfg
+
+
+def flushed():
+ return gr.update(interactive=True)
+
+
+def clear(txt_message, chat_bot, app_session):
+ txt_message.files.clear()
+ txt_message.text = ''
+ chat_bot = copy.deepcopy(init_conversation)
+ app_session['sts'] = None
+ app_session['ctx'] = []
+ app_session['images_cnt'] = 0
+ app_session['videos_cnt'] = 0
+ return create_multimodal_input(), chat_bot, app_session, None, '', ''
+
+
+def select_chat_type(_tab, _app_cfg):
+ _app_cfg["chat_type"] = _tab
+ return _app_cfg
+
+
+init_conversation = [
+ [
+ None,
+ {
+ # The first message of bot closes the typewriter.
+ "text": "You can talk to me now",
+ "flushing": False
+ }
+ ],
+]
+
+
+css = """
+video { height: auto !important; }
+.example label { font-size: 16px;}
+"""
+
+introduction = """
+
+## Features:
+1. Chat with single image
+2. Chat with multiple images
+3. Chat with video
+4. In-context few-shot learning
+
+Click `How to use` tab to see examples.
+"""
+
+
+with gr.Blocks(css=css) as demo:
+ with gr.Tab(model_name):
+ with gr.Row():
+ with gr.Column(scale=1, min_width=300):
+ gr.Markdown(value=introduction)
+ params_form = create_component(form_radio, comp='Radio')
+ regenerate = create_component({'value': 'Regenerate'}, comp='Button')
+ clear_button = create_component({'value': 'Clear History'}, comp='Button')
+
+ with gr.Column(scale=3, min_width=500):
+ app_session = gr.State({'sts':None,'ctx':[], 'images_cnt': 0, 'videos_cnt': 0, 'chat_type': 'Chat'})
+ chat_bot = mgr.Chatbot(label=f"Chat with {model_name}", value=copy.deepcopy(init_conversation), height=600, flushing=False, bubble_full_width=False)
+
+ with gr.Tab("Chat") as chat_tab:
+ txt_message = create_multimodal_input()
+ chat_tab_label = gr.Textbox(value="Chat", interactive=False, visible=False)
+
+ txt_message.submit(
+ respond,
+ [txt_message, chat_bot, app_session, params_form],
+ [txt_message, chat_bot, app_session]
+ )
+
+ with gr.Tab("Few Shot") as fewshot_tab:
+ fewshot_tab_label = gr.Textbox(value="Few Shot", interactive=False, visible=False)
+ with gr.Row():
+ with gr.Column(scale=1):
+ image_input = gr.Image(type="filepath", sources=["upload"])
+ with gr.Column(scale=3):
+ user_message = gr.Textbox(label="User")
+ assistant_message = gr.Textbox(label="Assistant")
+ with gr.Row():
+ add_demonstration_button = gr.Button("Add Example")
+ generate_button = gr.Button(value="Generate", variant="primary")
+ add_demonstration_button.click(
+ fewshot_add_demonstration,
+ [image_input, user_message, assistant_message, chat_bot, app_session],
+ [image_input, user_message, assistant_message, chat_bot, app_session]
+ )
+ generate_button.click(
+ fewshot_respond,
+ [image_input, user_message, chat_bot, app_session, params_form],
+ [image_input, user_message, assistant_message, chat_bot, app_session]
+ )
+
+ chat_tab.select(
+ select_chat_type,
+ [chat_tab_label, app_session],
+ [app_session]
+ )
+ chat_tab.select( # do clear
+ clear,
+ [txt_message, chat_bot, app_session],
+ [txt_message, chat_bot, app_session, image_input, user_message, assistant_message]
+ )
+ fewshot_tab.select(
+ select_chat_type,
+ [fewshot_tab_label, app_session],
+ [app_session]
+ )
+ fewshot_tab.select( # do clear
+ clear,
+ [txt_message, chat_bot, app_session],
+ [txt_message, chat_bot, app_session, image_input, user_message, assistant_message]
+ )
+ chat_bot.flushed(
+ flushed,
+ outputs=[txt_message]
+ )
+ regenerate.click(
+ regenerate_button_clicked,
+ [txt_message, image_input, user_message, assistant_message, chat_bot, app_session, params_form],
+ [txt_message, image_input, user_message, assistant_message, chat_bot, app_session]
+ )
+ clear_button.click(
+ clear,
+ [txt_message, chat_bot, app_session],
+ [txt_message, chat_bot, app_session, image_input, user_message, assistant_message]
+ )
+
+ with gr.Tab("How to use"):
+ with gr.Column():
+ with gr.Row():
+ image_example = gr.Image(value="http://thunlp.oss-cn-qingdao.aliyuncs.com/multi_modal/never_delete/m_bear2.gif", label='1. Chat with single or multiple images', interactive=False, width=400, elem_classes="example")
+ example2 = gr.Image(value="http://thunlp.oss-cn-qingdao.aliyuncs.com/multi_modal/never_delete/video2.gif", label='2. Chat with video', interactive=False, width=400, elem_classes="example")
+ example3 = gr.Image(value="http://thunlp.oss-cn-qingdao.aliyuncs.com/multi_modal/never_delete/fshot.gif", label='3. Few shot', interactive=False, width=400, elem_classes="example")
+
+
+# launch
+demo.launch(share=False, debug=True, show_api=False, server_port=8885, server_name="0.0.0.0")
+