diff --git a/README_zh.md b/README_zh.md index 342f370..b90f2b1 100644 --- a/README_zh.md +++ b/README_zh.md @@ -1,110 +1,885 @@
- - -**端侧可用的 GPT-4V 级多模态大模型** +**A GPT-4V Level MLLM for Single Image, Multi Image and Video on Your Phone** + + [中文](./README_zh.md) | + English + +Join our 💬 WeChat - 中文 | - [English](./README_en.md)

- MiniCPM-Llama3-V 2.5 🤗 🤖 | - MiniCPM-V 2.0 🤗 🤖 | - MiniCPM-V 2.0 技术博客 + MiniCPM-V 2.6 🤗 🤖 | MiniCPM-Llama3-V 2.5 🤗 🤖 | + MiniCPM-Llama3-V 2.5 Technical Report

-**MiniCPM-V**是面向图文理解的端侧多模态大模型系列。该系列模型接受图像和文本输入,并提供高质量的文本输出。自2024年2月以来,我们共发布了4个版本模型,旨在实现**领先的性能和高效的部署**,目前该系列最值得关注的模型包括:
+**MiniCPM-V** is a series of end-side multimodal LLMs (MLLMs) designed for vision-language understanding. The models take image, video and text as inputs and provide high-quality text outputs. Since February 2024, we have released 5 versions of the model, aiming to achieve **strong performance and efficient deployment**. The most notable models in this series currently include:
-- **MiniCPM-Llama3-V 2.5**:🔥🔥🔥 MiniCPM-V系列的最新、性能最佳模型。总参数量8B,多模态综合性能**超越 GPT-4V-1106、Gemini Pro、Claude 3、Qwen-VL-Max 等商用闭源模型**,OCR 能力及指令跟随能力进一步提升,并**支持超过30种语言**的多模态交互。通过系统使用模型量化、CPU、NPU、编译优化等高效推理技术,MiniCPM-Llama3-V 2.5 可以实现**高效的终端设备部署**。
+- **MiniCPM-V 2.6**: 🔥🔥🔥 The latest and most capable model in the MiniCPM-V series. With a total of 8B parameters, the model **surpasses GPT-4V in single image, multi-image and video understanding**. It outperforms **GPT-4o mini, Gemini 1.5 Pro and Claude 3.5 Sonnet** in single image understanding, and advances MiniCPM-Llama3-V 2.5's features such as strong OCR capability, trustworthy behavior, multilingual support, and end-side deployment. Due to its superior token density, MiniCPM-V 2.6 can for the first time support real-time video understanding on end-side devices such as iPad.
-- **MiniCPM-V 2.0**:MiniCPM-V系列的最轻量级模型。总参数量2B,多模态综合性能超越 Yi-VL 34B、CogVLM-Chat 17B、Qwen-VL-Chat 10B 等更大参数规模的模型,可接受 180 万像素的任意长宽比图像输入,实现了和 Gemini Pro 相近的场景文字识别能力以及和 GPT-4V 相匹的低幻觉率。
+- **MiniCPM-V 2.0**: The lightest model in the MiniCPM-V series. With 2B parameters, it surpasses larger models such as Yi-VL 34B, CogVLM-Chat 17B, and Qwen-VL-Chat 10B in overall performance. It can accept image inputs of any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), achieving performance comparable to Gemini Pro in scene-text understanding and matching GPT-4V in low hallucination rates.
+## News -## 更新日志 - -#### 📌 置顶 - - +#### 📌 Pinned * [2024.08.15] MiniCPM-V 2.6 现在支持多图像 SFT。有关更多详细信息,请参阅[finetune/README.md](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune) -* [2024.08.14] MiniCPM-V 2.6 现在可以通过 SWIFT 框架 [微调](https://github.com/modelscope/ms-swift/issues/1613) 了! -* [2024.08.10] 🚀🚀🚀 llama.cpp [官方仓库](https://github.com/ggerganov/llama.cpp)正式支持 MiniCPM-Llama3-V 2.5 啦!点击[这里](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/tree/main)查看各种大小的 GGUF 版本。 -* [2024.08.06] 🔥🔥🔥 我们开源了 MiniCPM-V 2.6,该模型在单图、多图和视频理解方面取得了优于 GPT-4V 的表现。我们还进一步提升了 MiniCPM-Llama3-V 2.5 的多项亮点能力,并首次支持了 iPad 上的实时视频理解。欢迎试用! -* [2024.08.03] MiniCPM-Llama3-V 2.5 技术报告已发布!欢迎点击[这里](https://arxiv.org/abs/2408.01800)查看。 -* [2024.07.19] MiniCPM-Llama3-V 2.5 现已支持[vLLM](#vllm-部署-) ! -* [2024.05.28] 💥 MiniCPM-Llama3-V 2.5 现在在 llama.cpp 和 ollama 中完全支持其功能!**请拉取我们最新的 fork 来使用**:[llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md) & [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5)。我们还发布了各种大小的 GGUF 版本,请点击[这里](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/tree/main)查看。请注意,**目前官方仓库尚未支持 MiniCPM-Llama3-V 2.5**,我们也正积极推进将这些功能合并到 llama.cpp & ollama 官方仓库,敬请关注! -* [2024.05.28] 💫 我们现在支持 MiniCPM-Llama3-V 2.5 的 LoRA 微调,更多内存使用统计信息可以在[这里](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#model-fine-tuning-memory-usage-statistics)找到。 -* [2024.05.23] 🔍 我们添加了Phi-3-vision-128k-instruct 与 MiniCPM-Llama3-V 2.5的全面对比,包括基准测试评估、多语言能力和推理效率 🌟📊🌍🚀。点击[这里](./docs/compare_with_phi-3_vision.md)查看详细信息。 -* [2024.05.23] 🔥🔥🔥 MiniCPM-V 在 GitHub Trending 和 Hugging Face Trending 上登顶!MiniCPM-Llama3-V 2.5 Demo 被 Hugging Face 的 Gradio 官方账户推荐,欢迎点击[这里](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5)体验! +* [2024.08.14] MiniCPM-V 2.6 now also supports [fine-tuning](https://github.com/modelscope/ms-swift/issues/1613) with the SWIFT framework! 
+* [2024.08.10] 🚀🚀🚀 MiniCPM-Llama3-V 2.5 is now fully supported by [official](https://github.com/ggerganov/llama.cpp) llama.cpp! GGUF models of various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf).
+* [2024.08.06] 🔥🔥🔥 We open-source MiniCPM-V 2.6, which outperforms GPT-4V on single image, multi-image and video understanding. It advances popular features of MiniCPM-Llama3-V 2.5, and can support real-time video understanding on iPad. Try it now!
+* [2024.08.03] MiniCPM-Llama3-V 2.5 technical report is released! See [here](https://arxiv.org/abs/2408.01800).
+* [2024.07.19] MiniCPM-Llama3-V 2.5 supports vLLM now! See [here](#inference-with-vllm).
+* [2024.05.28] 🚀🚀🚀 MiniCPM-Llama3-V 2.5 is now fully supported in llama.cpp and ollama! Please pull the latest code **of our provided forks** ([llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md), [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5)). GGUF models in various sizes are available [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf/tree/main). The MiniCPM-Llama3-V 2.5 series is **not supported by the official repositories yet**, and we are working hard to merge PRs. Please stay tuned!
+* [2024.05.28] 💫 We now support LoRA fine-tuning for MiniCPM-Llama3-V 2.5, using only 2 V100 GPUs! See more statistics [here](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#model-fine-tuning-memory-usage-statistics).
+* [2024.05.23] 🔍 We've released a comprehensive comparison between Phi-3-vision-128k-instruct and MiniCPM-Llama3-V 2.5, including benchmark evaluations, multilingual capabilities, and inference efficiency 🌟📊🌍🚀. Click [here](./docs/compare_with_phi-3_vision.md) to view more details.
+* [2024.05.23] 🔥🔥🔥 MiniCPM-V tops GitHub Trending and Hugging Face Trending!
Our demo, recommended by Hugging Face Gradio’s official account, is available [here](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5). Come and try it out!
-* [2024.05.25] MiniCPM-Llama3-V 2.5 [支持流式输出和自定义系统提示词](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage)了,欢迎试用! -* [2024.05.24] 我们开源了 MiniCPM-Llama3-V 2.5 [gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf),支持 [llama.cpp](#llamacpp-部署) 推理!实现端侧 6-8 tokens/s 的流畅解码,欢迎试用! -* [2024.05.20] 我们开源了 MiniCPM-Llama3-V 2.5,增强了 OCR 能力,支持 30 多种语言,并首次在端侧实现了 GPT-4V 级的多模态能力!我们提供了[高效推理](#手机端部署)和[简易微调](./finetune/readme.md)的支持,欢迎试用! -* [2024.04.23] 我们增加了对 [vLLM](#vllm) 的支持,欢迎体验! -* [2024.04.18] 我们在 HuggingFace Space 新增了 MiniCPM-V 2.0 的 [demo](https://huggingface.co/spaces/openbmb/MiniCPM-V-2),欢迎体验! -* [2024.04.17] MiniCPM-V 2.0 现在支持用户部署本地 [WebUI Demo](#本地webui-demo部署) 了,欢迎试用! -* [2024.04.15] MiniCPM-V 2.0 现在可以通过 SWIFT 框架 [微调](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md) 了,支持流式输出! -* [2024.04.12] 我们开源了 MiniCPM-V 2.0,该模型刷新了 OCRBench 开源模型最佳成绩,在场景文字识别能力上比肩 Gemini Pro,同时还在综合了 11 个主流多模态大模型评测基准的 OpenCompass 榜单上超过了 Qwen-VL-Chat 10B、CogVLM-Chat 17B 和 Yi-VL 34B 等更大参数规模的模型!点击这里查看 MiniCPM-V 2.0 技术博客。 -* [2024.03.14] MiniCPM-V 现在支持 SWIFT 框架下的[微调](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md)了,感谢 [Jintao](https://github.com/Jintao-Huang) 的贡献! -* [2024.03.01] MiniCPM-V 现在支持在 Mac 电脑上进行部署! -* [2024.02.01] 我们开源了 MiniCPM-V 和 OmniLMM-12B,分别可以支持高效的端侧部署和同规模领先的多模态能力! +
+Click to view more news.
+
+* [2024.06.03] Now, you can run MiniCPM-Llama3-V 2.5 on multiple low VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across multiple GPUs. For more details, check this [link](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md).
+* [2024.05.25] MiniCPM-Llama3-V 2.5 now supports streaming outputs and customized system prompts. Try it [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage)!
+* [2024.05.24] We release the MiniCPM-Llama3-V 2.5 [gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](#inference-with-llamacpp) inference and provides 6~8 tokens/s smooth decoding on mobile phones. Try it now!
+* [2024.05.20] We open-source MiniCPM-Llama3-V 2.5, which has improved OCR capability and supports 30+ languages, representing the first end-side MLLM achieving GPT-4V level performance! We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](./finetune/readme.md). Try it now!
+* [2024.04.23] MiniCPM-V-2.0 supports vLLM now! Click [here](#inference-with-vllm) to view more details.
+* [2024.04.18] We create a HuggingFace Space to host the demo of MiniCPM-V 2.0 [here](https://huggingface.co/spaces/openbmb/MiniCPM-V-2)!
+* [2024.04.17] MiniCPM-V-2.0 supports deploying [WebUI Demo](#webui-demo) now!
+* [2024.04.15] MiniCPM-V-2.0 now also supports [fine-tuning](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md) with the SWIFT framework!
+* [2024.04.12] We open-source MiniCPM-V 2.0, which achieves comparable performance with Gemini Pro in understanding scene text and outperforms strong Qwen-VL-Chat 9.6B and Yi-VL 34B on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. Click here to view the MiniCPM-V 2.0 technical blog.
+* [2024.03.14] MiniCPM-V now supports [fine-tuning](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md) with the SWIFT framework. Thanks to [Jintao](https://github.com/Jintao-Huang) for the contribution! +* [2024.03.01] MiniCPM-V now can be deployed on Mac! +* [2024.02.01] We open-source MiniCPM-V and OmniLMM-12B, which support efficient end-side deployment and powerful multimodal capabilities correspondingly. +
-## 目录 +## Contents + +- [MiniCPM-V 2.6](#minicpm-v-26) - [MiniCPM-Llama3-V 2.5](#minicpm-llama3-v-25) - [MiniCPM-V 2.0](#minicpm-v-20) -- [Online Demo](#online-demo) -- [安装](#安装) -- [推理](#推理) - - [模型库](#模型库) - - [多轮对话](#多轮对话) - - [Mac 推理](#mac-推理) - - [手机端部署](#手机端部署) - - [本地WebUI Demo部署](#本地webui-demo部署) - - [llama.cpp 部署](#llamacpp-部署) - - [vLLM 部署 ](#vllm-部署-) -- [微调](#微调) -- [未来计划](#未来计划) -- [🌟 Star History](#-star-history) -- [引用](#引用) +- [Chat with Our Demo on Gradio 🤗](#chat-with-our-demo-on-gradio-) +- [Install](#install) +- [Inference](#inference) + - [Model Zoo](#model-zoo) + - [Multi-turn Conversation](#multi-turn-conversation) + - [Chat with multiple images](#chat-with-multiple-images) + - [In-context few-shot learning](#in-context-few-shot-learning) + - [Chat with video](#chat-with-video) + - [Inference on Multiple GPUs](#inference-on-multiple-gpus) + - [Inference on Mac](#inference-on-mac) + - [Deployment on Mobile Phone](#deployment-on-mobile-phone) + - [Inference with llama.cpp](#inference-with-llamacpp) + - [Inference with ollama](#inference-with-ollama) + - [Inference with vLLM](#inference-with-vllm) +- [Fine-tuning](#fine-tuning) +- [FAQs](#faqs) -## MiniCPM-Llama3-V 2.5 -**MiniCPM-Llama3-V 2.5** 是 MiniCPM-V 系列的最新版本模型,基于 SigLip-400M 和 Llama3-8B-Instruct 构建,共 8B 参数量,相较于 MiniCPM-V 2.0 性能取得较大幅度提升。MiniCPM-Llama3-V 2.5 值得关注的特点包括: +## MiniCPM-V 2.6 -- 🔥 **领先的性能。** - MiniCPM-Llama3-V 2.5 在综合了 11 个主流多模态大模型评测基准的 OpenCompass 榜单上平均得分 65.1,**以 8B 量级的大小超过了 GPT-4V-1106、Gemini Pro、Claude 3、Qwen-VL-Max 等主流商用闭源多模态大模型**,大幅超越基于Llama 3构建的其他多模态大模型。 +**MiniCPM-V 2.6** is the latest and most capable model in the MiniCPM-V series. The model is built on SigLip-400M and Qwen2-7B with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-Llama3-V 2.5, and introduces new features for multi-image and video understanding. 
Notable features of MiniCPM-V 2.6 include: -- 💪 **优秀的 OCR 能力。** - MiniCPM-Llama3-V 2.5 可接受 180 万像素的任意宽高比图像输入,**OCRBench 得分达到 725,超越 GPT-4o、GPT-4V、Gemini Pro、Qwen-VL-Max 等商用闭源模型**,达到最佳水平。基于近期用户反馈建议,MiniCPM-Llama3-V 2.5 增强了全文 OCR 信息提取、表格图像转 markdown 等高频实用能力,并且进一步加强了指令跟随、复杂推理能力,带来更好的多模态交互体感。 +- 🔥 **Leading Performance.** + MiniCPM-V 2.6 achieves an average score of 65.2 on the latest version of OpenCompass, a comprehensive evaluation over 8 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4o mini, GPT-4V, Gemini 1.5 Pro, and Claude 3.5 Sonnet** for single image understanding. - -- 🏆 **可信行为。** - 借助最新的 [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) 对齐技术([RLHF-V](https://github.com/RLHF-V/) [CVPR'24]系列的最新技术),MiniCPM-Llama3-V 2.5 具有更加可信的多模态行为,在 Object HalBench 的幻觉率降低到了 **10.3%**,显著低于 GPT-4V-1106 (13.6%),达到开源社区最佳水平。[数据集已发布](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset)。 +- 🖼️ **Multi Image Understanding and In-context Learning.** MiniCPM-V 2.6 can also perform **conversation and reasoning over multiple images**. It achieves **state-of-the-art performance** on popular multi-image benchmarks such as Mantis-Eval, BLINK, Mathverse mv and Sciverse mv, and also shows promising in-context learning capability. -- 🌏 **多语言支持。** - 得益于 Llama 3 强大的多语言能力和 VisCPM 的跨语言泛化技术,MiniCPM-Llama3-V 2.5 在中英双语多模态能力的基础上,仅通过少量翻译的多模态数据的指令微调,高效泛化支持了**德语、法语、西班牙语、意大利语、韩语等 30+ 种语言**的多模态能力,并表现出了良好的多语言多模态对话性能。[查看所有支持语言](./assets/minicpm-llama-v-2-5_languages.md) +- 🎬 **Video Understanding.** MiniCPM-V 2.6 can also **accept video inputs**, performing conversation and providing dense captions for spatial-temporal information. It outperforms **GPT-4V, Claude 3.5 Sonnet and LLaVA-NeXT-Video-34B** on Video-MME with/without subtitles. 
-- 🚀 **高效部署。**
- MiniCPM-Llama3-V 2.5 较为系统地通过**模型量化、CPU、NPU、编译优化**等高效加速技术,实现高效的终端设备部署。对于高通芯片的移动手机,我们首次将 NPU 加速框架 QNN 整合进了 llama.cpp。经过系统优化后,MiniCPM-Llama3-V 2.5 实现了多模态大模型端侧**语言解码速度 3 倍加速**、**图像编码 150 倍加速**的巨大提升。
+- 💪 **Strong OCR Capability and Others.**
+ MiniCPM-V 2.6 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344). It achieves **state-of-the-art performance on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V, and Gemini 1.5 Pro**.
+ Based on the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) and [VisCPM](https://github.com/OpenBMB/VisCPM) techniques, it features **trustworthy behaviors**, with significantly lower hallucination rates than GPT-4o and GPT-4V on Object HalBench, and supports **multilingual capabilities** in English, Chinese, German, French, Italian, Korean, etc.
+- 🚀 **Superior Efficiency.**
+ In addition to its friendly size, MiniCPM-V 2.6 also shows **state-of-the-art token density** (i.e., number of pixels encoded into each visual token). **It produces only 640 tokens when processing a 1.8M pixel image, which is 75% fewer than most models**. This directly improves the inference speed, first-token latency, memory usage, and power consumption. As a result, MiniCPM-V 2.6 can efficiently support **real-time video understanding** on end-side devices such as iPad.
-### 性能评估 +- 💫 **Easy Usage.** +MiniCPM-V 2.6 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpmv-main/examples/llava/README-minicpmv2.6.md) and [ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) support for efficient CPU inference on local devices, (2) [int4](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4) and [GGUF](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf) format quantized models in 16 sizes, (3) [vLLM](#inference-with-vllm) support for high-throughput and memory-efficient inference, (4) fine-tuning on new domains and tasks, (5) quick local WebUI demo setup with [Gradio](#chat-with-our-demo-on-gradio), and (6) online web [demo](https://huggingface.co/spaces/openbmb/MiniCPM-V-2_6). +### Evaluation
- + +
+ +
+Click to view single image results on OpenCompass, MME, MMVet, OCRBench, MMMU, MathVista, MMB, AI2D, TextVQA, DocVQA, HallusionBench, Object HalBench. +

| Model | Size | Token Density+ | OpenCompass | MME | MMVet | OCRBench | MMMU val | MathVista mini | MMB1.1 test | AI2D | TextVQA val | DocVQA test | HallusionBench | Object HalBench |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| **Proprietary** | | | | | | | | | | | | | | |
| GPT-4o | - | 1088 | 69.9 | 2328.7 | 69.1 | 736 | 69.2 | 61.3 | 82.2 | 84.6 | - | 92.8 | 55.0 | 17.6 |
| Claude 3.5 Sonnet | - | 750 | 67.9 | 1920.0 | 66.0 | 788 | 65.9 | 61.6 | 78.5 | 80.2 | - | 95.2 | 49.9 | 13.8 |
| Gemini 1.5 Pro | - | - | 64.4 | 2110.6 | 64.0 | 754 | 60.6 | 57.7 | 73.9 | 79.1 | 73.5 | 86.5 | 45.6 | - |
| GPT-4o mini | - | 1088 | 64.1 | 2003.4 | 66.9 | 785 | 60.0 | 52.4 | 76.0 | 77.8 | - | - | 46.1 | 12.4 |
| GPT-4V | - | 1088 | 63.5 | 2070.2 | 67.5 | 656 | 61.7 | 54.7 | 79.8 | 78.6 | 78.0 | 87.2 | 43.9 | 14.2 |
| Step-1V | - | - | 59.5 | 2206.4 | 63.3 | 625 | 49.9 | 44.8 | 78.0 | 79.2 | 71.6 | - | 48.4 | - |
| Qwen-VL-Max | - | 784 | 58.3 | 2281.7 | 61.8 | 684 | 52.0 | 43.4 | 74.6 | 75.7 | 79.5 | 93.1 | 41.2 | 13.4 |
| **Open-source** | | | | | | | | | | | | | | |
| LLaVA-NeXT-Yi-34B | 34B | 157 | 55.0 | 2006.5 | 50.7 | 574 | 48.8 | 40.4 | 77.8 | 78.9 | 69.3 | - | 34.8 | 12.6 |
| Mini-Gemini-HD-34B | 34B | 157 | - | 2141.0 | 59.3 | 518 | 48.0 | 43.3 | - | 80.5 | 74.1 | 78.9 | - | - |
| Cambrian-34B | 34B | 1820 | 58.3 | 2049.9 | 53.2 | 591 | 50.4 | 50.3 | 77.8 | 79.5 | 76.7 | 75.5 | 41.6 | 14.7 |
| GLM-4V-9B | 13B | 784 | 59.1 | 2018.8 | 58.0 | 776 | 46.9 | 51.1 | 67.9 | 71.2 | - | - | 45.0 | - |
| InternVL2-8B | 8B | 706 | 64.1 | 2215.1 | 54.3 | 794 | 51.2 | 58.3 | 79.4 | 83.6 | 77.4 | 91.6 | 45.0 | 21.3 |
| MiniCPM-Llama3-V 2.5 | 8B | 1882 | 58.8 | 2024.6 | 52.8 | 725 | 45.8 | 54.3 | 72.0 | 78.4 | 76.6 | 84.8 | 42.4 | 10.3 |
| MiniCPM-V 2.6 | 8B | 2822 | 65.2 | 2348.4* | 60.0 | 852* | 49.8* | 60.6 | 78.0 | 82.1 | 80.1 | 90.8 | 48.1* | 8.2 |

+* We evaluate this benchmark using chain-of-thought prompting. Specifically, for MME, we used this technique only for the Cognition set. + ++ Token Density: number of pixels encoded into each visual token at maximum resolution, i.e., # pixels at maximum resolution / # visual tokens. + +Note: For proprietary models, we calculate token density based on the image encoding charging strategy defined in the official API documentation, which provides an upper-bound estimation. + +
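The Token Density definition above reduces to simple arithmetic. Below is an illustrative sketch (the 1344x1344 resolution and the 640-token figure are taken from the model description earlier in this README; this is a back-of-the-envelope check, not official measurement code):

```python
def token_density(width: int, height: int, num_visual_tokens: int) -> float:
    """Pixels encoded into each visual token at maximum resolution:
    (# pixels at maximum resolution) / (# visual tokens)."""
    return (width * height) / num_visual_tokens

# MiniCPM-V 2.6 encodes a ~1.8M-pixel image (e.g., 1344x1344) into 640 visual tokens.
print(round(token_density(1344, 1344, 640)))  # -> 2822, the Token Density value in the table
```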
+ + +
+Click to view multi-image results on Mantis Eval, BLINK, Mathverse mv, Sciverse mv, MIRB. +

| Model | Size | Mantis Eval | BLINK val | Mathverse mv | Sciverse mv | MIRB |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|
| **Proprietary** | | | | | | |
| GPT-4V | - | 62.7 | 54.6 | 60.3 | 66.9 | 53.1 |
| LLaVA-NeXT-Interleave-14B | 14B | 66.4 | 52.6 | 32.7 | 30.2 | - |
| **Open-source** | | | | | | |
| Emu2-Chat | 37B | 37.8 | 36.2 | - | 27.2 | - |
| CogVLM | 17B | 45.2 | 41.1 | - | - | - |
| VPG-C | 7B | 52.4 | 43.1 | 24.3 | 23.1 | - |
| VILA 8B | 8B | 51.2 | 39.3 | - | 36.5 | - |
| InternLM-XComposer-2.5 | 8B | 53.1* | 48.9 | 32.1* | - | 42.5 |
| InternVL2-8B | 8B | 59.0* | 50.9 | 30.5* | 34.4* | 56.9* |
| MiniCPM-V 2.6 | 8B | 69.1 | 53.0 | 84.9 | 74.9 | 53.8 |

+* We evaluate the officially released checkpoint by ourselves. +
+ +
+Click to view video results on Video-MME and Video-ChatGPT. +

| Model | Size | Video-MME (w/o subs) | Video-MME (w subs) | Video-ChatGPT Correctness | Detail | Context | Temporal | Consistency |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| **Proprietary** | | | | | | | | |
| Claude 3.5 Sonnet | - | 60.0 | - | - | - | - | - | - |
| GPT-4V | - | 59.9 | - | - | - | - | - | - |
| **Open-source** | | | | | | | | |
| LLaVA-NeXT-7B | 7B | - | - | 3.39 | 3.29 | 3.92 | 2.60 | 3.12 |
| LLaVA-NeXT-34B | 34B | - | - | 3.29 | 3.23 | 3.83 | 2.51 | 3.47 |
| CogVLM2-Video | 12B | - | - | 3.49 | 3.46 | 3.23 | 2.98 | 3.64 |
| LongVA | 7B | 52.4 | 54.3 | 3.05 | 3.09 | 3.77 | 2.44 | 3.64 |
| InternVL2-8B | 8B | 54.0 | 56.9 | - | - | - | - | - |
| InternLM-XComposer-2.5 | 8B | 55.8 | - | - | - | - | - | - |
| LLaVA-NeXT-Video | 32B | 60.2 | 63.0 | 3.48 | 3.37 | 3.95 | 2.64 | 3.28 |
| MiniCPM-V 2.6 | 8B | 60.9 | 63.6 | 3.59 | 3.28 | 3.93 | 2.73 | 3.62 |

+
+ + +
+Click to view few-shot results on TextVQA, VizWiz, VQAv2, OK-VQA. +

| Model | Size | Shot | TextVQA val | VizWiz test-dev | VQAv2 test-dev | OK-VQA val |
|:--|:--:|:--:|:--:|:--:|:--:|:--:|
| Flamingo | 80B | 0* | 35.0 | 31.6 | 56.3 | 40.6 |
| | | 4 | 36.5 | 39.6 | 63.1 | 57.4 |
| | | 8 | 37.3 | 44.8 | 65.6 | 57.5 |
| IDEFICS | 80B | 0* | 30.9 | 36.0 | 60.0 | 45.2 |
| | | 4 | 34.3 | 40.4 | 63.6 | 52.4 |
| | | 8 | 35.7 | 46.1 | 64.8 | 55.1 |
| OmniCorpus | 7B | 0* | 43.0 | 49.8 | 63.2 | 45.5 |
| | | 4 | 45.4 | 51.3 | 64.5 | 46.5 |
| | | 8 | 45.6 | 52.2 | 64.7 | 46.6 |
| Emu2 | 37B | 0 | 26.4 | 40.4 | 33.5 | 26.7 |
| | | 4 | 48.2 | 54.6 | 67.0 | 53.2 |
| | | 8 | 49.3 | 54.7 | 67.8 | 54.1 |
| MM1 | 30B | 0 | 26.2 | 40.4 | 48.9 | 26.7 |
| | | 8 | 49.3 | 54.7 | 70.9 | 54.1 |
| MiniCPM-V 2.6+ | 8B | 0 | 43.9 | 33.8 | 45.4 | 23.9 |
| | | 4 | 63.6 | 60.5 | 65.5 | 50.1 |
| | | 8 | 64.6 | 63.4 | 68.2 | 51.4 |

+* denotes zero image shot and two additional text shots following Flamingo. + ++ We evaluate the pretraining ckpt without SFT. +
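As a rough illustration of how in-context demonstrations can be packed into the interleaved message format used by `model.chat()` (shown in the Multi-turn Conversation section below), here is a minimal sketch. The helper name and the exact "Question/Answer" prompt wording are our assumptions, not the protocol behind the table above; the snippet only builds the message list and does not run the model.

```python
from PIL import Image

def build_few_shot_msgs(examples, query_image, question):
    """Interleave (image, question, answer) demonstrations with the final
    query, in the content-list format accepted by model.chat()."""
    content = []
    for img, q, a in examples:
        content.extend([img, f"Question: {q} Answer: {a}"])
    content.extend([query_image, f"Question: {question} Answer:"])
    return [{"role": "user", "content": content}]

# Two dummy shots; blank images stand in for real demonstration images.
blank = lambda: Image.new("RGB", (64, 64))
shots = [(blank(), "What is in the image?", "a cat"),
         (blank(), "What is in the image?", "a dog")]
msgs = build_few_shot_msgs(shots, blank(), "What is in the image?")
```

The resulting `msgs` could then be passed as `model.chat(image=None, msgs=msgs, tokenizer=tokenizer)`.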
+ +### Examples + +
+ Bike + Menu + Code + Mem + medal
-TextVQA, DocVQA, OCRBench, OpenCompass MultiModal Avg Score, MME, MMBench, MMMU, MathVista, LLaVA Bench, RealWorld QA, Object HalBench上的详细评测结果。 + Click to view more cases. +
+ elec + Menu +
+
+
+We deploy MiniCPM-V 2.6 on end devices. The demo video is the raw screen recording on an iPad Pro without editing.
+
+
+

+ +      + +

+
+ + +

+ +      + +

+
+ + +

+ + +

+
+ +## MiniCPM-Llama3-V 2.5 + +
+Click to view more details of MiniCPM-Llama3-V 2.5 + +**MiniCPM-Llama3-V 2.5** is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.0. Notable features of MiniCPM-Llama3-V 2.5 include: + +- 🔥 **Leading Performance.** + MiniCPM-Llama3-V 2.5 has achieved an average score of 65.1 on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. **With only 8B parameters, it surpasses widely used proprietary models like GPT-4V-1106, Gemini Pro, Claude 3 and Qwen-VL-Max** and greatly outperforms other Llama 3-based MLLMs. + +- 💪 **Strong OCR Capabilities.** + MiniCPM-Llama3-V 2.5 can process images with any aspect ratio and up to 1.8 million pixels (e.g., 1344x1344), achieving a **700+ score on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V-0409, Qwen-VL-Max and Gemini Pro**. Based on recent user feedback, MiniCPM-Llama3-V 2.5 has now enhanced full-text OCR extraction, table-to-markdown conversion, and other high-utility capabilities, and has further strengthened its instruction-following and complex reasoning abilities, enhancing multimodal interaction experiences. + +- 🏆 **Trustworthy Behavior.** + Leveraging the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) method (the newest technique in the [RLHF-V](https://github.com/RLHF-V) [CVPR'24] series), MiniCPM-Llama3-V 2.5 exhibits more trustworthy behavior. It achieves a **10.3%** hallucination rate on Object HalBench, lower than GPT-4V-1106 (13.6%), achieving the best-level performance within the open-source community. [Data released](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset). 
+ +- 🌏 **Multilingual Support.** + Thanks to the strong multilingual capabilities of Llama 3 and the cross-lingual generalization technique from [VisCPM](https://github.com/OpenBMB/VisCPM), MiniCPM-Llama3-V 2.5 extends its bilingual (Chinese-English) multimodal capabilities to **over 30 languages including German, French, Spanish, Italian, Korean etc.** [All Supported Languages](./assets/minicpm-llama-v-2-5_languages.md). + +- 🚀 **Efficient Deployment.** + MiniCPM-Llama3-V 2.5 systematically employs **model quantization, CPU optimizations, NPU optimizations and compilation optimizations**, achieving high-efficiency deployment on end-side devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time. After systematic optimization, MiniCPM-Llama3-V 2.5 has realized a **150x acceleration in end-side MLLM image encoding** and a **3x speedup in language decoding**. + +- 💫 **Easy Usage.** +MiniCPM-Llama3-V 2.5 can be easily used in various ways: (1) [llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md) and [ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5) support for efficient CPU inference on local devices, (2) [GGUF](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) format quantized models in 16 sizes, (3) efficient [LoRA](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#lora-finetuning) fine-tuning with only 2 V100 GPUs, (4) [streaming output](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage), (5) quick local WebUI demo setup with [Gradio](https://github.com/OpenBMB/MiniCPM-V/blob/main/web_demo_2.5.py) and [Streamlit](https://github.com/OpenBMB/MiniCPM-V/blob/main/web_demo_streamlit-2_5.py), and (6) interactive demos on [HuggingFace Spaces](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5). + +### Evaluation + +
+ +
+
+Click to view results on TextVQA, DocVQA, OCRBench, OpenCompass, MME, MMBench, MMMU, MathVista, LLaVA Bench, RealWorld QA, Object HalBench.
@@ -126,7 +901,7 @@ - + @@ -298,13 +1073,13 @@ - + - + @@ -376,72 +1151,58 @@
Object HalBench
Proprietary
8.4B - --78.2 - 1971.5 - - 41.7-37.5 80.1 60.0 -
+
-* 正式开源模型权重的评测结果。 +* We evaluate the officially released checkpoint by ourselves. +
- +
- 多语言LLaVA Bench评测结果 + Evaluation results of multilingual LLaVA Bench
+### Examples -### 典型示例 - -

- -

+
+

+ +

-我们将 MiniCPM-Llama3-V 2.5 部署在小米 14 Pro 上,并录制了以下演示视频。 +
- -

- - -

-
- - -

- -

-
## MiniCPM-V 2.0
-查看 MiniCPM-V 2.0 的详细信息 - -**MiniCPM-V 2.0**可以高效部署到终端设备。该模型基于 SigLip-400M 和 [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/)构建,通过perceiver resampler连接。其特点包括: - -- 🔥 **优秀的性能。** - - MiniCPM-V 2.0 在多个测试基准(如 OCRBench, TextVQA, MME, MMB, MathVista 等)中实现了 7B 以下模型的**最佳性能**。**在综合了 11 个主流多模态大模型评测基准的 OpenCompass 榜单上超过了 Qwen-VL-Chat 9.6B、CogVLM-Chat 17.4B 和 Yi-VL 34B 等更大参数规模的模型**。MiniCPM-V 2.0 还展现出**领先的 OCR 能力**,在场景文字识别能力上**接近 Gemini Pro**,OCRBench 得分达到**开源模型第一**。 - - -- 🏆 **可信行为。** - - 多模态大模型深受幻觉问题困扰,模型经常生成和图像中的事实不符的文本。MiniCPM-V 2.0 是 **第一个通过多模态 RLHF 对齐的端侧多模态大模型**(借助 [RLHF-V](https://rlhf-v.github.io/) [CVPR'24] 系列技术)。该模型在 [Object HalBench](https://arxiv.org/abs/2312.00849) 达到**和 GPT-4V 相仿**的性能。 +Click to view more details of MiniCPM-V 2.0 -- 🌟 **高清图像高效编码。** +**MiniCPM-V 2.0** is an efficient version with promising performance for deployment. The model is built based on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Our latest version, MiniCPM-V 2.0 has several notable features. - MiniCPM-V 2.0 可以接受 **180 万像素的任意长宽比图像输入**(基于最新的[LLaVA-UHD](https://arxiv.org/pdf/2403.11703.pdf) 技术),这使得模型可以感知到小物体、密集文字等更加细粒度的视觉信息。 +- 🔥 **State-of-the-art Performance.** + MiniCPM-V 2.0 achieves **state-of-the-art performance** on multiple benchmarks (including OCRBench, TextVQA, MME, MMB, MathVista, etc) among models under 7B parameters. It even **outperforms strong Qwen-VL-Chat 9.6B, CogVLM-Chat 17.4B, and Yi-VL 34B on OpenCompass, a comprehensive evaluation over 11 popular benchmarks**. Notably, MiniCPM-V 2.0 shows **strong OCR capability**, achieving **comparable performance to Gemini Pro in scene-text understanding**, and **state-of-the-art performance on OCRBench** among open-source models. 
-- ⚡️ **高效部署。**
- MiniCPM-V 2.0 可以**高效部署在大多数消费级显卡和个人电脑上**,包括**移动手机等终端设备**。在视觉编码方面,我们通过perceiver resampler将图像表示压缩为更少的 token。这使得 MiniCPM-V 2.0 即便是**面对高分辨率图像,也能占用较低的存储并展现优秀的推理速度**。
+- 🏆 **Trustworthy Behavior.**
+ LMMs are known for suffering from hallucination, often generating text not factually grounded in images. MiniCPM-V 2.0 is **the first end-side LMM aligned via multimodal RLHF for trustworthy behavior** (using the recent [RLHF-V](https://rlhf-v.github.io/) [CVPR'24] series technique). This allows the model to **match GPT-4V in preventing hallucinations** on Object HalBench.
-- 🙌 **双语支持。**
- MiniCPM-V 2.0 **提供领先的中英双语多模态能力支持**。
- 该能力通过 [VisCPM](https://arxiv.org/abs/2308.12038) [ICLR'24] 论文中提出的多模态能力的跨语言泛化技术实现。
+- 🌟 **High-Resolution Images at Any Aspect Ratio.**
+ MiniCPM-V 2.0 can accept **1.8 million pixels (e.g., 1344x1344) images at any aspect ratio**. This enables better perception of fine-grained visual information such as small objects and optical characters, which is achieved via a recent technique from [LLaVA-UHD](https://arxiv.org/pdf/2403.11703.pdf).
-### 典型示例
+- ⚡️ **High Efficiency.**
+ MiniCPM-V 2.0 can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into much fewer tokens via a perceiver resampler. This allows MiniCPM-V 2.0 to operate with **favorable memory cost and speed during inference even when dealing with high-resolution images**.
+
+- 🙌 **Bilingual Support.**
+
+ MiniCPM-V 2.0 **supports strong bilingual multimodal capabilities in both English and Chinese**. This is enabled by generalizing multimodal capabilities across languages, a technique from [VisCPM](https://arxiv.org/abs/2308.12038) [ICLR'24].
+
+### Examples

@@ -449,7 +1210,7 @@

-我们将 MiniCPM-V 2.0 部署在小米 14 Pro 上,并录制了以下演示视频,未经任何视频剪辑。
+We deploy MiniCPM-V 2.0 on end devices. The demo video is the raw screen recording on a Xiaomi 14 Pro without editing.

@@ -460,61 +1221,79 @@ +## Legacy Models - - -## 历史版本模型 - - -| 模型 | 介绍信息和使用教程 | +| Model | Introduction and Guidance | |:----------------------|:-------------------:| -| MiniCPM-V 1.0 | [文档](./minicpm_v1.md) | -| OmniLMM-12B | [文档](./omnilmm.md) | +| MiniCPM-V 1.0 | [Document](./minicpm_v1.md) | +| OmniLMM-12B | [Document](./omnilmm_en.md) | -## Online Demo +## Chat with Our Demo on Gradio 🤗 -欢迎通过以下链接使用我们的网页端推理服务: [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2). +We provide online and local demos powered by Hugging Face Gradio , the most popular model deployment framework nowadays. It supports streaming outputs, progress bars, queuing, alerts, and other useful features. -## 安装 -1. 克隆我们的仓库并跳转到相应目录 +### Online Demo + +Click here to try out the online demo of [MiniCPM-V 2.6](https://huggingface.co/spaces/openbmb/MiniCPM-V-2_6) | [MiniCPM-Llama3-V 2.5](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5) | [MiniCPM-V 2.0](https://huggingface.co/spaces/openbmb/MiniCPM-V-2). + +### Local WebUI Demo + +You can easily build your own local WebUI demo with Gradio using the following commands. + +```shell +pip install -r requirements.txt +``` + +```shell +# For NVIDIA GPUs, run: +python web_demo_2.6.py --device cuda + +``` + + +## Install + +1. Clone this repository and navigate to the source folder ```bash git clone https://github.com/OpenBMB/MiniCPM-V.git cd MiniCPM-V ``` -1. 创建 conda 环境 +2. Create conda environment ```Shell -conda create -n MiniCPMV python=3.10 -y -conda activate MiniCPMV +conda create -n MiniCPM-V python=3.10 -y +conda activate MiniCPM-V ``` -3. 安装依赖 +3. 
Install dependencies ```shell pip install -r requirements.txt ``` -## 推理 +## Inference -### 模型库 -| 模型 | 设备 | 资源 |          简介 | 下载链接 | -|:--------------|:-:|:----------:|:-------------------|:---------------:| -| MiniCPM-Llama3-V 2.5| GPU | 19 GB | 最新版本,提供最佳的端侧多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) | -| MiniCPM-Llama3-V 2.5 gguf| CPU | 5 GB | gguf 版本,更低的内存占用和更高的推理效率。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) | -| MiniCPM-Llama3-V 2.5 int4 | GPU | 8 GB | int4量化版,更低显存占用。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) | -| MiniCPM-V 2.0 | GPU | 8 GB | 轻量级版本,平衡计算开销和多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) | -| MiniCPM-V 1.0 | GPU | 7 GB | 最轻量版本, 提供最快的推理速度。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V) | +### Model Zoo -更多[历史版本模型](#legacy-models) +| Model | Device | Memory |          Description | Download | +|:-----------|:--:|:-----------:|:-------------------|:---------------:| +| MiniCPM-V 2.6| GPU | 17 GB | The latest version, achieving state-of-the-art end-side performance for single image, multi-image and video understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6) | +| MiniCPM-V 2.6 gguf | CPU | 6 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6-gguf)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6-gguf) | +| MiniCPM-V 2.6 int4 | GPU | 7 GB | The int4 quantized version, lower GPU memory usage. 
| [🤗](https://huggingface.co/openbmb/MiniCPM-V-2_6-int4)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2_6-int4) |
+| MiniCPM-Llama3-V 2.5 | GPU | 19 GB | Strong end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 gguf | CPU | 6 GB | The gguf version, lower memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf)   [](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-gguf) |
+| MiniCPM-Llama3-V 2.5 int4 | GPU | 8 GB | The int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
+| MiniCPM-V 2.0 | GPU | 8 GB | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
+| MiniCPM-V 1.0 | GPU | 7 GB | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V)    [](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |

-### 多轮对话
+### Multi-turn Conversation

-请参考以下代码进行推理。
+Please refer to the following code to run.

@@ -522,34 +1301,44 @@ pip install -r requirements.txt ```python -from chat import MiniCPMVChat, img2base64 import torch -import json +from PIL import Image +from transformers import AutoModel, AutoTokenizer torch.manual_seed(0) -chat_model = MiniCPMVChat('openbmb/MiniCPM-Llama3-V-2_5') +model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, + attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager +model = model.eval().cuda() +tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) -im_64 = img2base64('./assets/airplane.jpeg') +image = Image.open('./assets/airplane.jpeg').convert('RGB') # First round chat -msgs = [{"role": "user", "content": "Tell me the model of this aircraft."}] +question = "Tell me the model of this aircraft." +msgs = [{'role': 'user', 'content': [image, question]}] -inputs = {"image": im_64, "question": json.dumps(msgs)} -answer = chat_model.chat(inputs) +answer = model.chat( + image=None, + msgs=msgs, + tokenizer=tokenizer +) print(answer) # Second round chat # pass history context of multi-turn conversation -msgs.append({"role": "assistant", "content": answer}) -msgs.append({"role": "user", "content": "Introduce something about Airbus A380."}) +msgs.append({"role": "assistant", "content": [answer]}) +msgs.append({"role": "user", "content": ["Introduce something about Airbus A380."]}) -inputs = {"image": im_64, "question": json.dumps(msgs)} -answer = chat_model.chat(inputs) +answer = model.chat( + image=None, + msgs=msgs, + tokenizer=tokenizer +) print(answer) ``` -可以得到以下输出: +You will get the following output: ``` "The aircraft in the image is an Airbus A380, which can be identified by its large size, double-deck structure, and the distinctive shape of its wings and engines. The A380 is a wide-body aircraft known for being the world's largest passenger airliner, designed for long-haul flights. 
It has four engines, which are characteristic of large commercial aircraft. The registration number on the aircraft can also provide specific information about the model if looked up in an aviation database." @@ -557,15 +1346,137 @@ print(answer) "The Airbus A380 is a double-deck, wide-body, four-engine jet airliner made by Airbus. It is the world's largest passenger airliner and is known for its long-haul capabilities. The aircraft was developed to improve efficiency and comfort for passengers traveling over long distances. It has two full-length passenger decks, which can accommodate more passengers than a typical single-aisle airplane. The A380 has been operated by airlines such as Lufthansa, Singapore Airlines, and Emirates, among others. It is widely recognized for its unique design and significant impact on the aviation industry." ``` - - - -### Mac 推理 +#### Chat with multiple images
-点击查看 MiniCPM-Llama3-V 2.5 / MiniCPM-V 2.0 基于Mac MPS运行 (Apple silicon 或 AMD GPUs)的示例。 + Click to view Python code running MiniCPM-V 2.6 with multiple images input. + +```python +import torch +from PIL import Image +from transformers import AutoModel, AutoTokenizer + +model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, + attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager +model = model.eval().cuda() +tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) + +image1 = Image.open('image1.jpg').convert('RGB') +image2 = Image.open('image2.jpg').convert('RGB') +question = 'Compare image 1 and image 2, tell me about the differences between image 1 and image 2.' + +msgs = [{'role': 'user', 'content': [image1, image2, question]}] + +answer = model.chat( + image=None, + msgs=msgs, + tokenizer=tokenizer +) +print(answer) +``` +
+ +#### In-context few-shot learning +
+ Click to view Python code running MiniCPM-V 2.6 with few-shot input. ```python -# test.py Need more than 16GB memory to run. +import torch +from PIL import Image +from transformers import AutoModel, AutoTokenizer + +model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, + attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager +model = model.eval().cuda() +tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) + +question = "production date" +image1 = Image.open('example1.jpg').convert('RGB') +answer1 = "2023.08.04" +image2 = Image.open('example2.jpg').convert('RGB') +answer2 = "2007.04.24" +image_test = Image.open('test.jpg').convert('RGB') + +msgs = [ + {'role': 'user', 'content': [image1, question]}, {'role': 'assistant', 'content': [answer1]}, + {'role': 'user', 'content': [image2, question]}, {'role': 'assistant', 'content': [answer2]}, + {'role': 'user', 'content': [image_test, question]} +] + +answer = model.chat( + image=None, + msgs=msgs, + tokenizer=tokenizer +) +print(answer) +``` +
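The few-shot pattern above is simply alternating user/assistant turns. As a sketch, a small helper (hypothetical, not part of the MiniCPM-V API) can assemble the `msgs` list from example pairs; plain strings stand in for the PIL images here so the structure is easy to inspect.

```python
def build_few_shot_msgs(examples, test_image, question):
    """Assemble a MiniCPM-V style `msgs` list from (image, answer) pairs.

    In real use the image entries are PIL.Image objects; strings are
    used here only to illustrate the message structure.
    """
    msgs = []
    for image, answer in examples:
        msgs.append({'role': 'user', 'content': [image, question]})
        msgs.append({'role': 'assistant', 'content': [answer]})
    # The final user turn carries the query image with the same question.
    msgs.append({'role': 'user', 'content': [test_image, question]})
    return msgs

msgs = build_few_shot_msgs(
    [('example1.jpg', '2023.08.04'), ('example2.jpg', '2007.04.24')],
    'test.jpg',
    'production date',
)
print(len(msgs))  # 5: two example pairs plus the final query
```

The resulting list has the same shape as the hand-written `msgs` in the example above.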
+ +#### Chat with video +
+ Click to view Python code running MiniCPM-V 2.6 with video input. + +```python +import torch +from PIL import Image +from transformers import AutoModel, AutoTokenizer +from decord import VideoReader, cpu # pip install decord + +model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True, + attn_implementation='sdpa', torch_dtype=torch.bfloat16) # sdpa or flash_attention_2, no eager +model = model.eval().cuda() +tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2_6', trust_remote_code=True) + +MAX_NUM_FRAMES=64 # if cuda OOM set a smaller number + +def encode_video(video_path): + def uniform_sample(l, n): + gap = len(l) / n + idxs = [int(i * gap + gap / 2) for i in range(n)] + return [l[i] for i in idxs] + + vr = VideoReader(video_path, ctx=cpu(0)) + sample_fps = round(vr.get_avg_fps() / 1) # FPS + frame_idx = [i for i in range(0, len(vr), sample_fps)] + if len(frame_idx) > MAX_NUM_FRAMES: + frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES) + frames = vr.get_batch(frame_idx).asnumpy() + frames = [Image.fromarray(v.astype('uint8')) for v in frames] + print('num frames:', len(frames)) + return frames + +video_path="video_test.mp4" +frames = encode_video(video_path) +question = "Describe the video" +msgs = [ + {'role': 'user', 'content': frames + [question]}, +] + +# Set decode params for video +params = {} +params["use_image_id"] = False +params["max_slice_nums"] = 2 # use 1 if cuda OOM and video resolution > 448*448 + +answer = model.chat( + image=None, + msgs=msgs, + tokenizer=tokenizer, + **params +) +print(answer) +``` +
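To make the sampling arithmetic above concrete, here is the same `uniform_sample` logic run standalone on a hypothetical 600-frame clip at an average of 5 fps: sampling roughly one frame per second yields 120 candidates, which are then thinned down to the `MAX_NUM_FRAMES` cap of 64.

```python
MAX_NUM_FRAMES = 64

def uniform_sample(l, n):
    # Same helper as in encode_video above: pick n items spread evenly over l.
    gap = len(l) / n
    idxs = [int(i * gap + gap / 2) for i in range(n)]
    return [l[i] for i in idxs]

# Hypothetical clip: 600 frames at 5 fps (about 2 minutes of video).
num_frames, avg_fps = 600, 5
sample_fps = round(avg_fps / 1)                      # ~1 frame per second
frame_idx = list(range(0, num_frames, sample_fps))   # 120 candidate frames
if len(frame_idx) > MAX_NUM_FRAMES:
    frame_idx = uniform_sample(frame_idx, MAX_NUM_FRAMES)

print(len(frame_idx))   # 64
print(frame_idx[:3])    # [0, 10, 20]
```

The cap keeps the number of image tiles passed to the model bounded regardless of clip length, which is what makes long videos fit in memory.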
+ + +### Inference on Multiple GPUs +You can run MiniCPM-Llama3-V 2.5 on multiple low VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across multiple GPUs. Please refer to this [tutorial](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md) for detailed instructions on how to load the model and inference using multiple low VRAM GPUs. + + +### Inference on Mac +
+Click to view an example of running MiniCPM-Llama3-V 2.5 on 💻 Mac with MPS (Apple silicon or AMD GPUs).
+
+```python
+# test.py  Requires more than 16 GB of memory to run.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
@@ -589,121 +1500,160 @@ answer, context, _ = model.chat(
)
print(answer)
```
-运行:
+Run with the command:
```shell
PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
```
+### Deployment on Mobile Phone
+MiniCPM-V 2.0 can be deployed on mobile phones with Android operating systems. 🚀 Click [MiniCPM-V 2.0](https://github.com/OpenBMB/mlc-MiniCPM) to install the apk.

-### 手机端部署
-MiniCPM-V 2.0 可运行在Android手机上, 点击[2.0](https://github.com/OpenBMB/mlc-MiniCPM)安装apk使用; MiniCPM-Llama3-V 2.5 将很快推出,敬请期待。

+### Inference with llama.cpp
+MiniCPM-V 2.6 can run with llama.cpp now! See [our fork of llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpmv-main/examples/llava/README-minicpmv2.6.md) for more details. This implementation supports smooth inference of 16~18 tokens/s on iPad (test environment: iPad Pro + M4).
+
+### Inference with ollama
+MiniCPM-V 2.6 can run with ollama now! See [our fork of ollama](https://github.com/OpenBMB/ollama/blob/minicpm-v2.6/examples/minicpm-v2.6/README.md) for more details. This implementation supports smooth inference of 16~18 tokens/s on iPad (test environment: iPad Pro + M4).
+
+### Inference with vLLM

-### 本地WebUI Demo部署
-点击查看本地WebUI demo 在 NVIDIA GPU, Mac等不同设备部署方法
-
-```shell
-pip install -r requirements.txt
-```
-
-```shell
-# For NVIDIA GPUs, run:
-python web_demo_2.5.py --device cuda
+ vLLM now officially supports MiniCPM-V 2.6, MiniCPM-Llama3-V 2.5 and MiniCPM-V 2.0. Click to see.
-# For Mac with MPS (Apple silicon or AMD GPUs), run:
-PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
+1. Install vLLM (>= 0.5.4):
```shell
pip install vllm
```
+2. Install timm (optional; only MiniCPM-V 2.0 needs timm):
```shell
pip install timm==0.9.10
```
+3. Run the example (for image):
```python
from transformers import AutoTokenizer
from PIL import Image
from vllm import LLM, SamplingParams
+
+MODEL_NAME = "openbmb/MiniCPM-V-2_6"
+# Also available for previous models
+# MODEL_NAME = "openbmb/MiniCPM-Llama3-V-2_5"
+# MODEL_NAME = "HwwwH/MiniCPM-V-2"
+
+image = Image.open("xxx.png").convert("RGB")
+tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code=True)
+llm = LLM(
+    model=MODEL_NAME,
+    trust_remote_code=True,
+    gpu_memory_utilization=1,
+    max_model_len=2048
+)
+
+messages = [{
+    "role":
+    "user",
+    "content":
+    # Number of images
+    "(<image>./</image>)" + \
+    "\nWhat is the content of this image?"
+}]
+prompt = tokenizer.apply_chat_template(
+    messages,
+    tokenize=False,
+    add_generation_prompt=True
+)
+
+# Single Inference
+inputs = {
+    "prompt": prompt,
+    "multi_modal_data": {
+        "image": image
+        # Multi images, the number of images should be equal to that of `(<image>./</image>)`
+        # "image": [image, image]
+    },
+}
+# Batch Inference
+# inputs = [{
+#     "prompt": prompt,
+#     "multi_modal_data": {
+#         "image": image
+#     },
+# } for _ in range(2)]
+
+
+# 2.6
+stop_tokens = ['<|im_end|>', '<|endoftext|>']
+stop_token_ids = [tokenizer.convert_tokens_to_ids(i) for i in stop_tokens]
+# 2.0
+# stop_token_ids = [tokenizer.eos_id]
+# 2.5
+# stop_token_ids = [tokenizer.eos_id, tokenizer.eot_id]
+
+sampling_params = SamplingParams(
+    stop_token_ids=stop_token_ids,
+    use_beam_search=True,
+    temperature=0,
+    best_of=3,
+    max_tokens=1024
+)
+
+outputs = llm.generate(inputs, sampling_params=sampling_params)
+
+print(outputs[0].outputs[0].text)
+```
+4. Click [here](https://modelbest.feishu.cn/wiki/C2BWw4ZP0iCDy7kkCPCcX2BHnOf?from=from_copylink) if you want to use it with *video*, or to get more details about `vLLM`.
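The prompt must contain one image placeholder per image passed in `multi_modal_data`. A tiny helper (hypothetical, not part of vLLM or the MiniCPM repo) makes that invariant explicit, assuming `(<image>./</image>)` is the placeholder string expected by the model's chat template:

```python
def build_image_prompt(question, num_images=1):
    # One "(<image>./</image>)" placeholder per input image,
    # followed by the text question on a new line.
    placeholders = "(<image>./</image>)" * num_images
    return placeholders + "\n" + question

single = build_image_prompt("What is the content of this image?")
multi = build_image_prompt("Compare the two images.", num_images=2)
print(single.count("(<image>./</image>)"))  # 1
print(multi.count("(<image>./</image>)"))   # 2
```

Pass the returned string as the user message content before applying the chat template, and supply the matching list of images under `multi_modal_data`.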
-### llama.cpp 部署 -MiniCPM-Llama3-V 2.5 现在支持llama.cpp啦! 用法请参考我们的fork [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv), 在手机上可以支持 6~8 token/s 的流畅推理(测试环境:Xiaomi 14 pro + Snapdragon 8 Gen 3)。 +## Fine-tuning -### vLLM 部署 -
-点击查看 vLLM 部署运行的方法
-由于我们对 vLLM 提交的 PR 还在 review 中,因此目前我们 fork 了一个 vLLM 仓库以供测试使用。

+### Simple Fine-tuning

-1. 首先克隆我们 fork 的 vLLM 库:
-```shell
-git clone https://github.com/OpenBMB/vllm.git
-```
-2. 安装 vLLM 库:
-```shell
-cd vllm
-pip install -e .
-```
-3. 安装 timm 库:
-```shell
-pip install timm=0.9.10
-```
-4. 测试运行示例程序:
-```shell
-python examples/minicpmv_example.py
-```
+We support simple fine-tuning with Hugging Face for MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5.
+
+[Reference Document](./finetune/readme.md)
+
+### With the SWIFT Framework
+
+We now support MiniCPM-V series fine-tuning with the SWIFT framework. SWIFT supports training, inference, evaluation and deployment of nearly 200 LLMs and MLLMs. It supports the lightweight training solutions provided by PEFT as well as a complete adapters library, including the latest training techniques such as NEFTune, LoRA+ and LLaMA-PRO.
+
+Best practices: [MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md), [MiniCPM-V 2.6](https://github.com/modelscope/ms-swift/issues/1613).
+
+## FAQs
+Click here to view the [FAQs](./docs/faqs.md).
+
+## Model License
+
+* This repository is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
+
+* The usage of MiniCPM-V model weights must strictly follow the [MiniCPM Model License.md](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
+
+* The models and weights of MiniCPM are completely free for academic research. After filling out a ["questionnaire"](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, they are also available for free commercial use.
+
+
+## Statement
+
+As LMMs, MiniCPM-V models (including OmniLMM) generate content by learning from large amounts of multimodal corpora, but they cannot comprehend or express personal opinions, nor make value judgements.
Anything generated by MiniCPM-V models does not represent the views and positions of the model developers.
+
+We will not be liable for any problems arising from the use of MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or other improper use of the model.
-
+## Institutions
+
+This project is developed by the following institutions:
+
+- [THUNLP](https://nlp.csai.tsinghua.edu.cn/)
+- [ModelBest](https://modelbest.cn/)
+- [Zhihu](https://www.zhihu.com/)
+
+## 🌟 Star History
-
+

+ +

+
-### 简易微调
-
-我们支持使用 Huggingface Transformers 库简易地微调 MiniCPM-V 2.0 和 MiniCPM-Llama3-V 2.5 模型。
-
-[参考文档](./finetune/readme.md)
-
-### 使用 SWIFT 框架
-
-我们支持使用 SWIFT 框架微调 MiniCPM-V 系列模型。SWIFT 支持近 200 种大语言模型和多模态大模型的训练、推理、评测和部署。支持 PEFT 提供的轻量训练方案和完整的 Adapters 库支持的最新训练技术如 NEFTune、LoRA+、LLaMA-PRO 等。
-
-
-参考文档:[MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md)
-
-## 未来计划
-
-- [x] 支持 MiniCPM-V 系列模型微调
-- [ ] 实时多模态交互代码开源
-
-
-
-
-## 模型协议
-
-本仓库中代码依照 Apache-2.0 协议开源
-
-本项目中模型权重的使用遵循 “[通用模型许可协议-来源说明-宣传限制-商业授权](https://github.com/OpenBMB/General-Model-License/blob/main/通用模型许可协议-来源说明-宣传限制-商业授权.md)”。
-
-本项目中模型权重对学术研究完全开放。
-
-如需将模型用于商业用途,请联系 cpm@modelbest.cn 来获取书面授权,登记后可以免费商业使用。
-
-
-## 声明
-
-作为多模态大模型,MiniCPM-V 系列模型(包括 OmniLMM)通过学习大量的多模态数据来生成内容,但它无法理解、表达个人观点或价值判断,它所输出的任何内容都不代表模型开发者的观点和立场。
-
-因此用户在使用本项目的系列模型生成的内容时,应自行负责对其进行评估和验证。如果由于使用本项目的系列开源模型而导致的任何问题,包括但不限于数据安全问题、公共舆论风险,或模型被误导、滥用、传播或不当利用所带来的任何风险和问题,我们将不承担任何责任。
-
-
-## 机构
-
-本项目由以下机构共同开发:
-
-- [清华大学自然语言处理实验室](https://nlp.csai.tsinghua.edu.cn/)
-- [面壁智能](https://modelbest.cn/)
-- [知乎](https://www.zhihu.com/ )
-
-## 其他多模态项目
-
-👏 欢迎了解我们更多的多模态项目:
-
-[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
-
-## 🌟 Star History
-
-
+
-## 引用
+## Key Techniques and Other Multimodal Projects

-如果您觉得我们模型/代码/论文有帮助,请给我们 ⭐ 和 引用 📝,感谢!
+👏 Welcome to explore key techniques of MiniCPM-V and other multimodal projects of our team:
+
+[VisCPM](https://github.com/OpenBMB/VisCPM/tree/main) | [RLHF-V](https://github.com/RLHF-V/RLHF-V) | [LLaVA-UHD](https://github.com/thunlp/LLaVA-UHD) | [RLAIF-V](https://github.com/RLHF-V/RLAIF-V)
+
+
+## Citation
+
+If you find our model/code/paper helpful, please consider citing our papers 📝 and starring us ⭐️!
```bib -@article{yu2023rlhf, - title={Rlhf-v: Towards trustworthy mllms via behavior alignment from fine-grained correctional human feedback}, - author={Yu, Tianyu and Yao, Yuan and Zhang, Haoye and He, Taiwen and Han, Yifeng and Cui, Ganqu and Hu, Jinyi and Liu, Zhiyuan and Zheng, Hai-Tao and Sun, Maosong and others}, - journal={arXiv preprint arXiv:2312.00849}, - year={2023} -} -@article{viscpm, - title={Large Multilingual Models Pivot Zero-Shot Multimodal Learning across Languages}, - author={Jinyi Hu and Yuan Yao and Chongyi Wang and Shan Wang and Yinxu Pan and Qianyu Chen and Tianyu Yu and Hanghao Wu and Yue Zhao and Haoye Zhang and Xu Han and Yankai Lin and Jiao Xue and Dahai Li and Zhiyuan Liu and Maosong Sun}, - journal={arXiv preprint arXiv:2308.12038}, - year={2023} -} -@article{xu2024llava-uhd, - title={{LLaVA-UHD}: an LMM Perceiving Any Aspect Ratio and High-Resolution Images}, - author={Xu, Ruyi and Yao, Yuan and Guo, Zonghao and Cui, Junbo and Ni, Zanlin and Ge, Chunjiang and Chua, Tat-Seng and Liu, Zhiyuan and Huang, Gao}, - journal={arXiv preprint arXiv:2403.11703}, - year={2024} -} -@article{yu2024rlaifv, - title={RLAIF-V: Aligning MLLMs through Open-Source AI Feedback for Super GPT-4V Trustworthiness}, - author={Yu, Tianyu and Zhang, Haoye and Yao, Yuan and Dang, Yunkai and Chen, Da and Lu, Xiaoman and Cui, Ganqu and He, Taiwen and Liu, Zhiyuan and Chua, Tat-Seng and Sun, Maosong}, - journal={arXiv preprint arXiv:2405.17220}, +@article{yao2024minicpm, + title={MiniCPM-V: A GPT-4V Level MLLM on Your Phone}, + author={Yao, Yuan and Yu, Tianyu and Zhang, Ao and Wang, Chongyi and Cui, Junbo and Zhu, Hongji and Cai, Tianchi and Li, Haoyu and Zhao, Weilin and He, Zhihui and others}, + journal={arXiv preprint arXiv:2408.01800}, year={2024} } ``` diff --git a/finetune/finetune_ds.sh b/finetune/finetune_ds.sh index 4197447..c049471 100644 --- a/finetune/finetune_ds.sh +++ b/finetune/finetune_ds.sh @@ -13,7 +13,7 @@ MODEL="openbmb/MiniCPM-V-2_6" 
DATA="path/to/trainging_data"
EVAL_DATA="path/to/test_data"
LLM_TYPE="qwen2" # if use openbmb/MiniCPM-V-2, please set LLM_TYPE=minicpm, if use openbmb/MiniCPM-Llama3-V-2_5, please set LLM_TYPE="llama3"
-MODEL_MAX_Length=4096 # if use openbmb/MiniCPM-V-2 or openbmb/MiniCPM-Llama3-V-2_5, please set MODEL_MAX_Length=2048
+MODEL_MAX_Length=2048 # if conducting multi-image SFT, please set MODEL_MAX_Length=4096

DISTRIBUTED_ARGS="