
@@ -538,8 +488,10 @@ pip install -r requirements.txt
```python
from chat import OmniLMMChat, img2base64
+import torch
+torch.manual_seed(20)
-chat_model = OmniLMMChat('openbmb/MiniCPM-V-2') # or 'openbmb/OmniLMM-12B'
+chat_model = OmniLMMChat('openbmb/MiniCPM-Llama3-V-2_5')
im_64 = img2base64('./assets/hk_OCR.jpg')
@@ -553,7 +505,7 @@ print(answer)
# Second round chat
# pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": answer})
-msgs.append({"role": "user", "content": "Where is this store in the image?"})
+msgs.append({"role": "user", "content": "请用中文回答"})
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
@@ -563,27 +515,27 @@ print(answer)
We can obtain the following results:
```
-"You should go to the Canon store for a camera."
+"You should go to the Nikon store, as indicated by the neon sign on the right side of the image."
-"The Canon store is located on the right side of the image."
+"你应该去到尼康店,正如指示在图片的右侧。"
```
### Inference on Mac
-Click to view an example of running MiniCPM-V 2.0 on a Mac with MPS (Apple silicon or AMD GPUs).
+Click to view an example of running MiniCPM-Llama3-V 2.5 / MiniCPM-V 2.0 on a Mac with MPS (Apple silicon or AMD GPUs).
```python
-# test.py
+# test.py  (requires more than 16 GB of memory to run)
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
-model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True, torch_dtype=torch.bfloat16)
-model = model.to(device='mps', dtype=torch.float16)
+model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True, low_cpu_mem_usage=True)
+model = model.to(device='mps')
-tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)
model.eval()
image = Image.open('./assets/hk_OCR.jpg').convert('RGB')
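
# Illustrative continuation: generate an answer with the chat interface used in
# chat.py (exact arguments may differ slightly between model versions).
question = 'Where is this photo taken?'
msgs = [{'role': 'user', 'content': question}]

answer = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7
)
print(answer)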
@@ -607,25 +559,22 @@ PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
### Deployment on Mobile Phone
-MiniCPM-V 2.0 can currently be deployed on phones running the Android and Harmony operating systems. 🚀 Click [here](https://github.com/OpenBMB/mlc-MiniCPM) to get started with mobile deployment.
+MiniCPM-V 2.0 can run on Android phones; click [2.0](https://github.com/OpenBMB/mlc-MiniCPM) to install the apk. MiniCPM-Llama3-V 2.5 is coming soon, stay tuned.
### Local WebUI Demo
-Click to view how to deploy the local WebUI demo on Nvidia GPU, Mac and other devices
+Click to view how to deploy the local WebUI demo on NVIDIA GPU, Mac and other devices
```shell
pip install -r requirements.txt
```
```shell
-# For Nvidia GPUs support BF16 (like A100, H100, RTX3090), run:
-python web_demo.py --device cuda --dtype bf16
-
-# For Nvidia GPUs do NOT support BF16 (like V100, T4, RTX2080), run:
-python web_demo.py --device cuda --dtype fp16
+# For NVIDIA GPUs, run:
+python web_demo_2.5.py --device cuda
# For Mac with MPS (Apple silicon or AMD GPUs), run:
-PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo.py --device mps --dtype fp16
+PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
```
@@ -658,16 +607,21 @@ python examples/minicpmv_example.py
## Fine-tuning
-### MiniCPM-V
+### Simple Fine-tuning
-We support fine-tuning the MiniCPM-V series with the SWIFT framework. SWIFT supports training, inference, evaluation and deployment of nearly 200 LLMs and MLLMs (multimodal large models). It supports the lightweight training solutions provided by PEFT and the latest training techniques from the complete Adapters library, such as NEFTune, LoRA+ and LLaMA-PRO.
+We support simple fine-tuning of MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5 with the Huggingface Transformers library.
-Reference documents: [MiniCPM-V](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V-2](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md)
+[Reference document](./finetune/readme.md)
+
+### With the SWIFT Framework
+
+We support fine-tuning the MiniCPM-V series with the SWIFT framework. SWIFT supports training, inference, evaluation and deployment of nearly 200 large language models and multimodal large models. It supports the lightweight training solutions provided by PEFT and the latest training techniques from the complete Adapters library, such as NEFTune, LoRA+ and LLaMA-PRO.
+
+Reference documents: [MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md)
## TODO
- [x] MiniCPM-V series fine-tuning support
-- [ ] OmniLMM series fine-tuning support
- [ ] Code release for the real-time multimodal interactive assistant
@@ -676,18 +630,18 @@ python examples/minicpmv_example.py
The code in this repository is released under the Apache-2.0 license.
-The use of the OmniLMM model weights follows the "[通用模型许可协议-来源说明-宣传限制-商业授权](https://github.com/OpenBMB/General-Model-License/blob/main/通用模型许可协议-来源说明-宣传限制-商业授权.md)".
+The use of the model weights in this project follows the "[通用模型许可协议-来源说明-宣传限制-商业授权](https://github.com/OpenBMB/General-Model-License/blob/main/通用模型许可协议-来源说明-宣传限制-商业授权.md)".
-The OmniLMM model weights are fully open for academic research.
+The model weights in this project are fully open for academic research.
For commercial use of the models, please contact cpm@modelbest.cn to obtain written authorization; after registration, commercial use is free of charge.
## Statement
-As multimodal large models, MiniCPM-V and OmniLMM generate content by learning from large amounts of multimodal data. They cannot understand or express personal opinions or value judgments, and nothing they output represents the views or positions of the model developers.
+As multimodal large models, the MiniCPM-V series models (including OmniLMM) generate content by learning from large amounts of multimodal data. They cannot understand or express personal opinions or value judgments, and nothing they output represents the views or positions of the model developers.
-Users are therefore responsible for evaluating and verifying any content generated by MiniCPM-V and OmniLMM. We accept no liability for any problems arising from the use of the OmniLMM open-source models, including but not limited to data security issues, public-opinion risks, or any risks and problems caused by the models being misled, misused, disseminated or otherwise improperly exploited.
+Users are therefore responsible for evaluating and verifying any content generated by the models in this project. We accept no liability for any problems arising from the use of the open-source models in this project, including but not limited to data security issues, public-opinion risks, or any risks and problems caused by the models being misled, misused, disseminated or otherwise improperly exploited.
## Institutions
diff --git a/README_en.md b/README_en.md
index 4e71603..164c0bc 100644
--- a/README_en.md
+++ b/README_en.md
@@ -1,30 +1,31 @@
-

+

-**Large multi-modal models for strong performance and efficient deployment**
+**A GPT-4V Level Multimodal LLM on Your Phone**
[中文](./README.md) |
English
+ MiniCPM-Llama3-V 2.5 🤗 🤖 |
MiniCPM-V 2.0 🤗 🤖 |
- OmniLMM-12B 🤗 🤖 | Technical Blog
+ Technical Blog
-**MiniCPM-V** and **OmniLMM** are a family of open-source large multimodal models (LMMs) adept at vision & language modeling. The models process images and text inputs and deliver high-quality text outputs. We release two featured versions that are targeted at **strong performance and efficient deployment**:
+**MiniCPM-V** is a series of end-side multimodal LLMs designed for image-text understanding. These models accept image and text inputs and provide high-quality text outputs. Since February 2024, we have released four versions of the model, aiming to achieve **strong performance and efficient deployment**. The most noteworthy models in this series currently include:
-- **MiniCPM-V 2.8B**: State-of-the-art end-side large multimodal models. Our latest MiniCPM-V 2.0 can accept 1.8 million pixels (e.g., 1344x1344) images at any aspect ratio, and is adept at OCR capability. It achieves comparable performance with Gemini Pro in understanding scene-text and matches GPT-4V in preventing hallucinations.
-
-- **OmniLMM 12B**: The most capable version with leading performance among comparable-sized models on multiple benchmarks. The model also achieves state-of-the-art performance in trustworthy behaviors, with even less hallucination than GPT-4V.
+- **MiniCPM-Llama3-V 2.5**: 🔥🔥🔥 The latest and most capable model in the MiniCPM-V series. With a total of 8B parameters, the model surpasses proprietary models such as GPT-4V-1106, Gemini Pro, Qwen-VL-Max and Claude 3 in overall performance. Its OCR capability and instruction-following capability have been further enhanced. The model supports multimodal interaction in over 30 languages, including English, Chinese, French, Spanish, German, etc. Equipped with model quantization, efficient CPU and NPU inference, and compilation optimizations, MiniCPM-Llama3-V 2.5 can be efficiently deployed on edge devices.
+- **MiniCPM-V 2.0**: The lightest model in the MiniCPM-V series. With 2B parameters, it surpasses larger-scale models such as Yi-VL 34B, CogVLM-Chat 17B, and Qwen-VL-Chat 10B in overall performance. It accepts image inputs of any aspect ratio up to 1.8 million pixels (e.g., 1344x1344), achieves performance comparable to Gemini Pro in scene-text understanding, and matches GPT-4V in preventing hallucinations.
## News
+* [2024.05.20] We open-source MiniCPM-Llama3-V 2.5! It has improved OCR capability and supports 30+ languages, making it the first edge-side multimodal LLM to achieve GPT-4V-level performance. We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](./finetune/readme.md); try it now!
* [2024.04.23] MiniCPM-V-2.0 supports vLLM now! Click [here](#vllm) to view more details.
* [2024.04.18] We create a HuggingFace Space to host the demo of MiniCPM-V 2.0 at [here](https://huggingface.co/spaces/openbmb/MiniCPM-V-2)!
* [2024.04.17] MiniCPM-V-2.0 supports deploying [WebUI Demo](#webui-demo) now!
@@ -38,8 +39,8 @@
## Contents
-- [MiniCPM-V 2.8B](#minicpm-v-28b)
-- [OmniLMM-12B](#omnilmm-12b)
+- [MiniCPM-Llama3-V 2.5](#minicpm-llama3-v-25)
+- [MiniCPM-V 2.0](#minicpm-v-20)
- [Online Demo](#online-demo)
- [Install](#install)
- [Inference](#inference)
@@ -48,13 +49,334 @@
- [Inference on Mac](#inference-on-mac)
- [Deployment on Mobile Phone](#deployment-on-mobile-phone)
- [WebUI Demo](#webui-demo)
-- [Finetune](#finetune)
+ - [Inference with vLLM](#inference-with-vllm)
+- [Fine-tuning](#fine-tuning)
- [TODO](#todo)
- [Citation](#citation)
+## MiniCPM-Llama3-V 2.5
-## MiniCPM-V 2.8B
-**MiniCPM-V 2.8B** is an efficient version with promising performance for deployment. The model is built based on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Our latest version, MiniCPM-V 2.0 has several notable features.
+**MiniCPM-Llama3-V 2.5** is the latest model in the MiniCPM-V series. The model is built on SigLip-400M and Llama3-8B-Instruct with a total of 8B parameters. It exhibits a significant performance improvement over MiniCPM-V 2.0. Notable features of MiniCPM-Llama3-V 2.5 include:
+
+- 🔥 **Leading Performance.**
+ MiniCPM-Llama3-V 2.5 has achieved an average score of 65.1 on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. **It surpasses widely used proprietary models like GPT-4V-1106, Gemini Pro, Claude 3 and Qwen-VL-Max with 8B parameters**, greatly outperforming other multimodal LLMs built on Llama 3.
+
+- 💪 **Strong OCR Capabilities.**
+ MiniCPM-Llama3-V 2.5 can process images with any aspect ratio up to 1.8 million pixels, achieving a **700+ score on OCRBench, surpassing proprietary models such as GPT-4o, GPT-4V-0409, Qwen-VL-Max and Gemini Pro**. Based on recent user feedback, MiniCPM-Llama3-V 2.5 has now enhanced full-text OCR extraction, table-to-markdown conversion, and other high-utility capabilities, and has further strengthened its instruction-following and complex reasoning abilities, enhancing multimodal interaction experiences.
+
+- 🏆 **Trustworthy Behavior.**
+ Leveraging the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) method (the newest technology in the [RLHF-V](https://github.com/RLHF-V) [CVPR'24] series), MiniCPM-Llama3-V 2.5 exhibits trustworthy multimodal behavior. It achieves **10.3%** hallucination rate on Object HalBench, lower than GPT-4V-1106 (13.6%), achieving the best level within the open-source community.
+
+- 🌏 **Multilingual Support.**
+ Thanks to the strong multilingual capabilities of Llama 3 and the cross-lingual generalization technique from [VisCPM](https://github.com/OpenBMB/VisCPM), MiniCPM-Llama3-V 2.5 extends its foundational bilingual (Chinese-English) multimodal capabilities to support **30+ languages including German, French, Spanish, Italian, Russian etc.** We achieve this extension through only minimal instruction-tuning with translated multimodal data. [All Supported Languages](./assets/minicpm-llama-v-2-5_languages.md).
+
+- 🚀 **Efficient Deployment.**
+ MiniCPM-Llama3-V 2.5 systematically employs **model quantization, CPU optimizations, NPU optimizations and compilation optimizations** as acceleration techniques, achieving high-efficiency deployment on edge devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time. After systematic optimization, MiniCPM-Llama3-V 2.5 has realized a **150-fold acceleration in multimodal large model edge-side image encoding** and a **3-fold increase in language decoding speed**.
+
+### Evaluation
+
+
+

+
+
+Click to view results on TextVQA, DocVQA, OCRBench, OpenCompass, MME, MMBench, MMMU, MathVista, LLaVA Bench, RealWorld QA, Object HalBench.
+
+
+
+| Model | Size | OCRBench | TextVQA val | DocVQA test | OpenCompass | MME | MMB test (en) | MMB test (cn) | MMMU val | MathVista | LLaVA Bench | RealWorld QA | Object HalBench |
+|:--|:--|--:|--:|--:|--:|--:|--:|--:|--:|--:|--:|--:|--:|
+| *Proprietary* | | | | | | | | | | | | | |
+| Gemini Pro | - | 680 | 74.6 | 88.1 | 62.9 | 2148.9 | 73.6 | 74.3 | 48.9 | 45.8 | 79.9 | 60.4 | - |
+| GPT-4V (2023.11.06) | - | 645 | 78.0 | 88.4 | 63.5 | 1771.5 | 77.0 | 74.4 | 53.8 | 47.8 | 93.1 | 63.0 | 86.4 |
+| *Open-source* | | | | | | | | | | | | | |
+| Mini-Gemini | 2.2B | - | 56.2 | 34.2* | - | 1653.0 | - | - | 31.7 | - | - | - | - |
+| Qwen-VL-Chat | 9.6B | 488 | 61.5 | 62.6 | 51.6 | 1860.0 | 61.8 | 56.3 | 37.0 | 33.8 | 67.7 | 49.3 | 56.2 |
+| DeepSeek-VL-7B | 7.3B | 435 | 64.7* | 47.0* | 54.6 | 1765.4 | 73.8 | 71.4 | 38.3 | 36.8 | 77.8 | 54.2 | |
+| Yi-VL-34B | 34B | 290 | 43.4* | 16.9* | 52.2 | 2050.2 | 72.4 | 70.7 | 45.1 | 30.7 | 62.3 | 54.8 | 79.3 |
+| CogVLM-Chat | 17.4B | 590 | 70.4 | 33.3* | 54.2 | 1736.6 | 65.8 | 55.9 | 37.3 | 34.7 | 73.9 | 60.3 | 73.6 |
+| TextMonkey | 9.7B | 558 | 64.3 | 66.7 | - | - | - | - | - | - | - | - | - |
+| IDEFICS2-8B | 8.0B | - | 73.0 | 74.0 | 57.2 | 1847.6 | 75.7 | 68.6 | 45.2 | 52.2 | 49.1 | 60.7 | - |
+| Bunny-LLama-3-8B | 8.4B | - | - | - | 54.3 | 1920.3 | 77.0 | 73.9 | 41.3 | 31.5 | 61.2 | 58.8 | - |
+| LLaVA-NeXT Llama-3-8B | 8.4B | - | - | 78.2 | - | 1971.5 | - | - | 41.7 | 37.5 | 80.1 | 60.0 | - |
+| MiniCPM-V 1.0 | 2.8B | 366 | 60.6 | 38.2 | 47.5 | 1650.2 | 64.1 | 62.6 | 38.3 | 28.9 | 51.3 | 51.2 | 78.4 |
+| MiniCPM-V 2.0 | 2.8B | 605 | 74.1 | 71.9 | 54.5 | 1808.6 | 69.1 | 66.5 | 38.2 | 38.7 | 69.2 | 55.8 | 85.5 |
+| MiniCPM-Llama3-V 2.5 | 8.5B | 725 | 76.6 | 84.8 | 65.1 | 2024.6 | 77.2 | 74.2 | 45.8 | 54.3 | 86.7 | 63.5 | 89.7 |
+
+* We evaluate the officially released checkpoint by ourselves.
+
+
+
+
+

+
+ Evaluation results of LLaVABench in multiple languages
+
+
+### Examples
+
+
+
+
+
+
+
+We deploy MiniCPM-Llama3-V 2.5 on end devices. The demo video is the raw screen recording on a Xiaomi 14 Pro at double speed.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+## MiniCPM-V 2.0
+
+
+Click to view more details of MiniCPM-V 2.0
+
+
+**MiniCPM-V 2.0** is an efficient version with promising performance for deployment. The model is built based on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Our latest version, MiniCPM-V 2.0 has several notable features.
- 🔥 **State-of-the-art Performance.**
@@ -72,252 +394,10 @@
MiniCPM-V 2.0 can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into much fewer tokens via a perceiver resampler. This allows MiniCPM-V 2.0 to operate with **favorable memory cost and speed during inference even when dealing with high-resolution images**.
-
-
- 🙌 **Bilingual Support.**
MiniCPM-V 2.0 **supports strong bilingual multimodal capabilities in both English and Chinese**. This is enabled by generalizing multimodal capabilities across languages, a technique from [VisCPM](https://arxiv.org/abs/2308.12038) [ICLR'24].
-### Evaluation
-
-
-

-
-
-Click to view results on TextVQA, DocVQA, OCRBench, OpenCompass, MME, MMBench, MMMU, MathVista, LLaVA Bench, Object HalBench.
-
-
-
-
-
-| Model | Size | TextVQA val | DocVQA test | OCRBench | OpenCompass | MME | MMB dev(en) | MMB dev(zh) | MMMU val | MathVista | LLaVA Bench | Object HalBench |
-|:--|:--|--:|--:|--:|--:|--:|--:|--:|--:|--:|--:|--:|
-| *Proprietary models* | | | | | | | | | | | | |
-| Gemini Pro Vision | - | 74.6 | 88.1 | 680 | 63.8 | 2148.9 | 75.2 | 74.0 | 48.9 | 45.8 | 79.9 | - |
-| GPT-4V | - | 78.0 | 88.4 | 645 | 63.2 | 1771.5 | 75.1 | 75.0 | 53.8 | 47.8 | 93.1 | 86.4 / 92.7 |
-| *Open-source models 6B~34B* | | | | | | | | | | | | |
-| Yi-VL-6B | 6.7B | 45.5* | 17.1* | 290 | 49.3 | 1915.1 | 68.6 | 68.3 | 40.3 | 28.8 | 51.9 | - |
-| Qwen-VL-Chat | 9.6B | 61.5 | 62.6 | 488 | 52.1 | 1860.0 | 60.6 | 56.7 | 37.0 | 33.8 | 67.7 | 56.2 / 80.0 |
-| Yi-VL-34B | 34B | 43.4* | 16.9* | 290 | 52.6 | 2050.2 | 71.1 | 71.4 | 45.1 | 30.7 | 62.3 | - |
-| DeepSeek-VL-7B | 7.3B | 64.7* | 47.0* | 435 | 55.6 | 1765.4 | 74.1 | 72.8 | 38.3 | 36.8 | 77.8 | - |
-| TextMonkey | 9.7B | 64.3 | 66.7 | 558 | - | - | - | - | - | - | - | - |
-| CogVLM-Chat | 17.4B | 70.4 | 33.3* | 590 | 52.5 | 1736.6 | 63.7 | 53.8 | 37.3 | 34.7 | 73.9 | 73.6 / 87.4 |
-| *Open-source models 1B~3B* | | | | | | | | | | | | |
-| DeepSeek-VL-1.3B | 1.7B | 58.4* | 37.9* | 413 | 46.0 | 1531.6 | 64.0 | 61.2 | 33.8 | 29.4 | 51.1 | - |
-| MobileVLM V2 | 3.1B | 57.5 | 19.4* | - | - | 1440.5(P) | 63.2 | - | - | - | - | - |
-| Mini-Gemini | 2.2B | 56.2 | 34.2* | - | - | 1653.0 | 59.8 | - | 31.7 | - | - | - |
-| MiniCPM-V | 2.8B | 60.6 | 38.2 | 366 | 47.6 | 1650.2 | 67.9 | 65.3 | 38.3 | 28.9 | 51.3 | 78.4 / 88.5 |
-| MiniCPM-V 2.0 | 2.8B | 74.1 | 71.9 | 605 | 55.0 | 1808.6 | 69.6 | 68.1 | 38.2 | 38.7 | 69.2 | 85.5 / 92.2 |
-
-
-
-
-
-* We evaluate the officially released checkpoint by ourselves.
-
-
-
### Examples
@@ -335,157 +415,19 @@ We deploy MiniCPM-V 2.0 on end devices. The demo video is the raw screen recordi
-### MiniCPM-V 1.0
-Please see the info about MiniCPM-V 1.0 [here](./minicpm_v1.md).
-
-
-## OmniLMM-12B
-**OmniLMM-12B** is the most capable version. The model is built based on EVA02-5B and Zephyr-7B-β, connected with a perceiver resampler layer, and trained on multimodal data in a curriculum fashion. The model has three notable features:
-
-- 🔥 **Strong Performance.**
-
- OmniLMM-12B achieves **leading performance** among models with comparable sizes, surpassing established LMMs on multiple benchmarks (including MME, MMBench, SEED-Bench, etc). The model also endows rich multi-modal world knowledge.
-
-- 🏆 **Trustworthy Behavior.**
-
- LMMs are known for suffering from hallucination, often generating text that is not factually grounded in images (e.g., faithfully describing non-existing objects in images). OmniLMM-12B is **the first state-of-the-art open-source LMM aligned via multimodal RLHF for trustworthy behavior** (using the recent [RLHF-V](https://rlhf-v.github.io/) technique). It **ranks #1** among open-source models on [MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), and **outperforms GPT-4V** on [Object HalBench](https://arxiv.org/abs/2312.00849).
-
-- 🕹 **Real-time Multimodal Interaction.**
-
- We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and speech streams from the microphone and emits speech output. While still primary, we find the model can **replicate some of the fun cases shown in the Gemini Demo video, without any video edition**.
-
-
-### Evaluation
-
-

-
-
-Click to view results on MME, MMBench, MMMU, MMBench, MMHal-Bench, Object HalBench, SeedBench, LLaVA Bench, MathVista.
-
-
-
-
-| Model | Size | MME | MMB dev (en) | MMMU val | MMHal-Bench | Object HalBench | SeedBench-I | MathVista | LLaVA Bench |
-|:--|:--|--:|--:|--:|--:|--:|--:|--:|--:|
-| GPT-4V† | - | 1771.5 | 75.1 | 56.8 | 3.53 / 70.8 | 86.4 / 92.7 | 71.6 | 47.8 | 93.1 |
-| Qwen-VL-Plus† | - | 2183.4 | 66.2 | 45.2 | - | - | 65.7 | 36.0 | 73.7 |
-| Yi-VL 6B | 6.7B | 1915.1 | 68.6 | 40.3 | - | - | 67.5 | 28.8 | 51.9 |
-| Qwen-VL-Chat | 9.6B | 1860.0 | 60.6 | 35.9 | 2.93 / 59.4 | 56.2 / 80.0 | 64.8 | 33.8 | 67.7 |
-| CogVLM-Chat | 17.4B | 1736.6 | 63.7 | 32.1 | 2.68 / 52.1 | 73.6 / 87.4 | 68.8 | 34.7 | 73.9 |
-| LLaVA 1.5 | 13.6B | 1808.4 | 68.2 | 36.4 | 2.71 / 51.0 | 53.7 / 77.4 | 68.1 | 26.4 | 64.6 |
-| OmniLMM-12B | 11.6B | 1935.8 | 71.6 | 40.7 | 3.45 / 68.8 | 90.3 / 95.5 | 71.1 | 34.9 | 72.0 |
-
-
-
-†: Proprietary models
-
-### Examples
+## Legacy Models
-
-
-
-
-
+| Model | Introduction and Guidance |
+|:----------------------|:-------------------:|
+| MiniCPM-V 1.0 | [Document](./minicpm_v1.md) |
+| OmniLMM-12B | [Document](./omnilmm_en.md) |
-We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. Video frames are described in text using OmniLMM-12B, and ChatGPT 3.5 (text-only) is employed to generate response according to the descriptions and user prompts. The demo video is a raw recording without edition.
-
-
-
-
## Online Demo
-Click here to try out the Demo of [MiniCPM-V 2.0](http://120.92.209.146:80/) and [OmniLMM-12B](http://120.92.209.146:8081).
+Click here to try out the Demo of [MiniCPM-Llama3-V 2.5](http://120.92.209.146:8889/) | [MiniCPM-V 2.0](http://120.92.209.146:80).
## Install
@@ -514,9 +456,10 @@ pip install -r requirements.txt
### Model Zoo
| Model | Description | Download Link |
|:----------------------|:-------------------|:---------------:|
-| MiniCPM-V 2.0 | The latest version for state-of-the-art end-side capabilities with high efficiency. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2/files) |
-| MiniCPM-V | The first version of MiniCPM-V. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V/files) |
-| OmniLMM-12B | The most capable version with leading performance. | [🤗](https://huggingface.co/openbmb/OmniLMM-12B) [
](https://modelscope.cn/models/OpenBMB/OmniLMM-12B/files) |
+| MiniCPM-Llama3-V 2.5 | The latest version, achieving state-of-the-art edge-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5/files) |
+| MiniCPM-Llama3-V 2.5 int4 | int4 quantized version with lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4/files) |
+| MiniCPM-V 2.0 | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2/files) |
+| MiniCPM-V 1.0 | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [
](https://modelscope.cn/models/OpenBMB/MiniCPM-V/files) |
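+
+As a quick check that a checkpoint from the table loads, here is a minimal sketch (the repo id and dtype are illustrative; it assumes `transformers` with `trust_remote_code`, as used elsewhere in this README):
+
+```python
+import torch
+from transformers import AutoModel, AutoTokenizer
+
+# Pick any repo id from the Model Zoo table above.
+model_id = 'openbmb/MiniCPM-Llama3-V-2_5'
+
+model = AutoModel.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16)
+tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
+model.eval()
+```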
### Multi-turn Conversation
Please refer to the following code to run `MiniCPM-V` and `OmniLMM`.
@@ -529,9 +472,9 @@ Please refer to the following codes to run `MiniCPM-V` and `OmniLMM`.
```python
import torch
from chat import OmniLMMChat, img2base64
-torch.manual_seed(0)
+torch.manual_seed(20)
-chat_model = OmniLMMChat('openbmb/MiniCPM-V-2') # or 'openbmb/OmniLMM-12B'
+chat_model = OmniLMMChat('openbmb/MiniCPM-Llama3-V-2_5')
im_64 = img2base64('./assets/hk_OCR.jpg')
@@ -545,7 +488,7 @@ print(answer)
# Second round chat
# pass history context of multi-turn conversation
msgs.append({"role": "assistant", "content": answer})
-msgs.append({"role": "user", "content": "Where is this store in the image?"})
+msgs.append({"role": "user", "content": "请用中文回答"})
inputs = {"image": im_64, "question": json.dumps(msgs)}
answer = chat_model.chat(inputs)
@@ -555,27 +498,27 @@ print(answer)
We can obtain the following results:
```
-"You should go to the Canon store for a camera."
+"You should go to the Nikon store, as indicated by the neon sign on the right side of the image."
-"The Canon store is located on the right side of the image."
+"你应该去到尼康店,正如指示在图片的右侧。"
```
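+
+Putting the two rounds together, a minimal end-to-end sketch of the multi-turn loop (the question strings are illustrative; `OmniLMMChat` and `img2base64` come from `chat.py` in this repo):
+
+```python
+import json
+import torch
+from chat import OmniLMMChat, img2base64
+
+torch.manual_seed(20)
+chat_model = OmniLMMChat('openbmb/MiniCPM-Llama3-V-2_5')
+
+im_64 = img2base64('./assets/hk_OCR.jpg')
+msgs = [{"role": "user", "content": "Where should I go to buy a camera?"}]
+
+# First round: the conversation history is passed as a JSON-encoded string.
+answer = chat_model.chat({"image": im_64, "question": json.dumps(msgs)})
+print(answer)
+
+# Second round: append the assistant reply and the follow-up question,
+# then hand the whole history back to the model.
+msgs.append({"role": "assistant", "content": answer})
+msgs.append({"role": "user", "content": "请用中文回答"})
+answer = chat_model.chat({"image": im_64, "question": json.dumps(msgs)})
+print(answer)
+```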
### Inference on Mac
-Click to view an example, to run MiniCPM-V 2.0 on 💻 Mac with MPS (Apple silicon or AMD GPUs).
+Click to view an example, to run MiniCPM-Llama3-V 2.5 on 💻 Mac with MPS (Apple silicon or AMD GPUs).
```python
-# test.py
+# test.py  (requires more than 16 GB of memory to run)
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer
-model = AutoModel.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True, torch_dtype=torch.bfloat16)
-model = model.to(device='mps', dtype=torch.float16)
+model = AutoModel.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True, low_cpu_mem_usage=True)
+model = model.to(device='mps')
-tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-V-2', trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained('openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True)
model.eval()
image = Image.open('./assets/hk_OCR.jpg').convert('RGB')
@@ -598,7 +541,7 @@ PYTORCH_ENABLE_MPS_FALLBACK=1 python test.py
### Deployment on Mobile Phone
-Currently MiniCPM-V 2.0 can be deployed on mobile phones with Android and Harmony operating systems. 🚀 Try it out [here](https://github.com/OpenBMB/mlc-MiniCPM).
+MiniCPM-V 2.0 can be deployed on mobile phones running Android. 🚀 Click [here](https://github.com/OpenBMB/mlc-MiniCPM) to install the apk. Support for MiniCPM-Llama3-V 2.5 is coming soon.
### WebUI Demo
@@ -610,14 +553,11 @@ pip install -r requirements.txt
```
```shell
-# For Nvidia GPUs support BF16 (like A100, H100, RTX3090), run:
-python web_demo.py --device cuda --dtype bf16
-
-# For Nvidia GPUs do NOT support BF16 (like V100, T4, RTX2080), run:
-python web_demo.py --device cuda --dtype fp16
+# For NVIDIA GPUs, run:
+python web_demo_2.5.py --device cuda
# For Mac with MPS (Apple silicon or AMD GPUs), run:
-PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo.py --device mps --dtype fp16
+PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
```
@@ -646,20 +586,25 @@ python examples/minicpmv_example.py
```
-## Finetune
+## Fine-tuning
-### MiniCPM-V
+### Simple Fine-tuning
-We now support finetune MiniCPM-V series with the SWIFT framework. SWIFT supports training, inference, evaluation and deployment of nearly 200 LLMs and MLLMs (multimodal large models). It supports the lightweight training solutions provided by PEFT and a complete Adapters Library including techniques such as NEFTune, LoRA+ and LLaMA-PRO.
+We support simple fine-tuning of MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5 with Hugging Face Transformers.
-Best Practices:[MiniCPM-V](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V-2](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md)
+[Reference Document](./finetune/readme.md)
+
+### With the SWIFT Framework
+
+We now support fine-tuning the MiniCPM-V series with the SWIFT framework. SWIFT supports training, inference, evaluation and deployment of nearly 200 LLMs and MLLMs. It supports the lightweight training solutions provided by PEFT and a complete Adapters library, including techniques such as NEFTune, LoRA+ and LLaMA-PRO.
+
+Best Practices: [MiniCPM-V 1.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md), [MiniCPM-V 2.0](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md)
## TODO
- [x] MiniCPM-V fine-tuning support
-- [ ] OmniLMM fine-tuning support
- [ ] Code release for real-time interactive assistant
## Model License
diff --git a/assets/MiniCPM-Llama3-V-2.5-peformance.png b/assets/MiniCPM-Llama3-V-2.5-peformance.png
new file mode 100644
index 0000000..e8ef3b3
Binary files /dev/null and b/assets/MiniCPM-Llama3-V-2.5-peformance.png differ
diff --git a/assets/gif_cases/1-4.gif b/assets/gif_cases/1-4.gif
new file mode 100644
index 0000000..72aa0fc
Binary files /dev/null and b/assets/gif_cases/1-4.gif differ
diff --git a/assets/gif_cases/meal_plan.gif b/assets/gif_cases/meal_plan.gif
new file mode 100644
index 0000000..64c91e5
Binary files /dev/null and b/assets/gif_cases/meal_plan.gif differ
diff --git a/assets/gif_cases/ticket.gif b/assets/gif_cases/ticket.gif
new file mode 100644
index 0000000..b09d831
Binary files /dev/null and b/assets/gif_cases/ticket.gif differ
diff --git a/assets/llavabench_compare.png b/assets/llavabench_compare.png
new file mode 100644
index 0000000..d6a7151
Binary files /dev/null and b/assets/llavabench_compare.png differ
diff --git a/assets/minicpm-llama-v-2-5_languages.md b/assets/minicpm-llama-v-2-5_languages.md
new file mode 100644
index 0000000..0eae344
--- /dev/null
+++ b/assets/minicpm-llama-v-2-5_languages.md
@@ -0,0 +1,176 @@
+- English
+- 中文
+- 한국어
+- 日本語
+- Deutsch
+- Français
+- Português
+- Español
+- မြန်မာဘာသာ
+- ไทย
+- Tiếng Việt
+- Türkçe
+- ܣܘܪܝܝܐ
+- العربية
+- हिन्दी
+- বাংলা
+- नेपाली
+- Türkmençe
+- Тоҷикӣ
+- Кыргызча
+- Русский
+- Українська
+- Беларуская
+- ქართული
+- Azərbaycanca
+- Հայերեն
+- Polski
+- Lietuvių
+- Eesti
+- Latviešu
+- Čeština
+- Slovenčina
+- Magyar
+- Slovenščina
+- Hrvatski
+- Bosanski
+- Crnogorski
+- Српски
+- Shqip
+- Română
+- Български
+- Македонски
+
+
+## 支持语言
+
+英语
+
+中文
+
+韩语
+
+日语
+
+德语
+
+法语
+
+葡萄牙语
+
+西班牙语
+
+缅甸语
+
+泰语
+
+越南语
+
+土耳其语
+
+叙利亚语
+
+阿拉伯语
+
+印地语
+
+孟加拉语
+
+尼泊尔语
+
+土库曼语
+
+塔吉克语
+
+吉尔吉斯语
+
+俄语
+
+乌克兰语
+
+白俄罗斯语
+
+格鲁吉亚语
+
+阿塞拜疆语
+
+亚美尼亚语
+
+波兰语
+
+立陶宛语
+
+爱沙尼亚语
+
+拉脱维亚语
+
+捷克语
+
+斯洛伐克语
+
+匈牙利语
+
+斯洛文尼亚语
+
+克罗地亚语
+
+波斯尼亚语
+
+黑山语
+
+塞尔维亚语
+
+阿尔巴尼亚语
+
+罗马尼亚语
+
+保加利亚语
+
+马其顿语
+
+
+
+## Supported Languages
+
+English
+Chinese
+Korean
+Japanese
+German
+French
+Portuguese
+Spanish
+Burmese
+Thai
+Vietnamese
+Turkish
+Syriac
+Arabic
+Hindi
+Bengali
+Nepali
+Turkmen
+Tajik
+Kyrgyz
+Russian
+Ukrainian
+Belarusian
+Georgian
+Azerbaijani
+Armenian
+Polish
+Lithuanian
+Estonian
+Latvian
+Czech
+Slovak
+Hungarian
+Slovenian
+Croatian
+Bosnian
+Montenegrin
+Serbian
+Albanian
+Romanian
+Bulgarian
+Macedonian
\ No newline at end of file
diff --git a/assets/minicpmv-2-peformance.png b/assets/minicpmv-2-peformance.png
index 12c85bc..1ff226e 100644
Binary files a/assets/minicpmv-2-peformance.png and b/assets/minicpmv-2-peformance.png differ
diff --git a/assets/minicpmv-llama3-v2.5/case_OCR_en.png b/assets/minicpmv-llama3-v2.5/case_OCR_en.png
new file mode 100644
index 0000000..2f228b5
Binary files /dev/null and b/assets/minicpmv-llama3-v2.5/case_OCR_en.png differ
diff --git a/assets/minicpmv-llama3-v2.5/case_complex_reasoning.png b/assets/minicpmv-llama3-v2.5/case_complex_reasoning.png
new file mode 100644
index 0000000..c3ccc35
Binary files /dev/null and b/assets/minicpmv-llama3-v2.5/case_complex_reasoning.png differ
diff --git a/assets/minicpmv-llama3-v2.5/case_information_extraction.png b/assets/minicpmv-llama3-v2.5/case_information_extraction.png
new file mode 100644
index 0000000..e10e8bf
Binary files /dev/null and b/assets/minicpmv-llama3-v2.5/case_information_extraction.png differ
diff --git a/assets/minicpmv-llama3-v2.5/case_long_img.png b/assets/minicpmv-llama3-v2.5/case_long_img.png
new file mode 100644
index 0000000..14be420
Binary files /dev/null and b/assets/minicpmv-llama3-v2.5/case_long_img.png differ
diff --git a/assets/minicpmv-llama3-v2.5/case_markdown.png b/assets/minicpmv-llama3-v2.5/case_markdown.png
new file mode 100644
index 0000000..95d0d19
Binary files /dev/null and b/assets/minicpmv-llama3-v2.5/case_markdown.png differ
diff --git a/assets/minicpmv-llama3-v2.5/cases_all.png b/assets/minicpmv-llama3-v2.5/cases_all.png
new file mode 100644
index 0000000..c1794c7
Binary files /dev/null and b/assets/minicpmv-llama3-v2.5/cases_all.png differ
diff --git a/assets/minicpmv-llama3-v2.5/temp b/assets/minicpmv-llama3-v2.5/temp
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/assets/minicpmv-llama3-v2.5/temp
@@ -0,0 +1 @@
+
diff --git a/assets/minicpmv.png b/assets/minicpmv.png
new file mode 100644
index 0000000..3f4cedb
Binary files /dev/null and b/assets/minicpmv.png differ
diff --git a/chat.py b/chat.py
index e802887..77ba8f7 100644
--- a/chat.py
+++ b/chat.py
@@ -160,11 +160,36 @@ class OmniLMM3B:
)
return answer
+class MiniCPMV2_5:
+ def __init__(self, model_path) -> None:
+ self.model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(dtype=torch.float16)
+ self.tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+ self.model.eval().cuda()
+
+ def chat(self, input):
+ try:
+ image = Image.open(io.BytesIO(base64.b64decode(input['image']))).convert('RGB')
+ except Exception as e:
+ return "Image decode error"
+
+ msgs = json.loads(input['question'])
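+        # 'question' carries the full multi-turn history as a JSON-encoded list of
+        # {"role": ..., "content": ...} messages (see the README usage example).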
+
+ answer = self.model.chat(
+ image=image,
+ msgs=msgs,
+ tokenizer=self.tokenizer,
+ sampling=True,
+ temperature=0.7
+ )
+ return answer
+
class OmniLMMChat:
def __init__(self, model_path) -> None:
if '12B' in model_path:
self.model = OmniLMM12B(model_path)
+ elif 'MiniCPM-Llama3-V' in model_path:
+ self.model = MiniCPMV2_5(model_path)
else:
self.model = OmniLMM3B(model_path)
diff --git a/finetune/dataset.py b/finetune/dataset.py
index b93035a..ca1a383 100644
--- a/finetune/dataset.py
+++ b/finetune/dataset.py
@@ -1,90 +1,115 @@
-import os
-import math
-import json
import copy
+import json
import logging
+import math
+import os
+from dataclasses import dataclass, field
+from typing import Dict, List, Optional
import numpy as np
import torch
-from torch.nn.utils.rnn import pad_sequence
-from typing import Dict, Optional, List
from PIL import Image
-
-
-from dataclasses import dataclass, field
-from transformers import AutoTokenizer, AutoProcessor
+from torch.nn.utils.rnn import pad_sequence
from torch.utils.data import Dataset
+from transformers import AutoProcessor, AutoTokenizer
class SupervisedDataset(Dataset):
"""Dataset for supervised fine-tuning."""
- def __init__(self, raw_data, transform, tokenizer, slice_config):
+
+ def __init__(
+ self,
+ raw_data,
+ transform,
+ tokenizer,
+ slice_config,
+ llm_type="minicpm",
+ patch_size=14,
+ query_nums=64,
+ batch_vision=False,
+ ):
super(SupervisedDataset, self).__init__()
self.raw_data = raw_data
self.tokenizer = tokenizer
self.transform = transform
self.slice_config = slice_config
+ self.llm_type = llm_type
+ self.patch_size = patch_size
+ self.query_nums=query_nums
+ self.batch_vision = batch_vision
def __len__(self):
return len(self.raw_data)
def __getitem__(self, i) -> Dict[str, torch.Tensor]:
image = Image.open(self.raw_data[i]["image"]).convert("RGB")
- ret = preprocess(image, self.raw_data[i]["conversations"], self.tokenizer, self.transform, slice_config=self.slice_config)
+ ret = preprocess(
+ image,
+ self.raw_data[i]["conversations"],
+ self.tokenizer,
+ self.transform,
+ query_nums=self.query_nums,
+ slice_config=self.slice_config,
+ llm_type=self.llm_type,
+ patch_size=self.patch_size,
+ batch_vision=self.batch_vision,
+ )
ret = dict(
input_ids=ret["input_ids"],
labels=ret["target"],
- attention_mask=ret["input_ids"].ne(self.tokenizer.pad_token_id),
+ attention_mask=torch.ones_like(ret["input_ids"], dtype=torch.bool),
pixel_values=ret["pixel_values"],
+ tgt_sizes=ret["tgt_sizes"],
image_bound=ret["image_bound"],
)
-
+
return ret
def data_collator(examples, padding_value=0):
- input_ids = pad_sequence([example["input_ids"] for example in examples], batch_first=True, padding_value=padding_value)
- targets = pad_sequence([example["labels"] for example in examples], batch_first=True, padding_value=padding_value)
- attention_mask = pad_sequence([example["attention_mask"] for example in examples], batch_first=True, padding_value=padding_value)
+ input_ids = pad_sequence(
+ [example["input_ids"] for example in examples],
+ batch_first=True,
+ padding_value=padding_value,
+ )
+ targets = pad_sequence(
+ [example["labels"] for example in examples],
+ batch_first=True,
+ padding_value=padding_value,
+ )
+ attention_mask = pad_sequence(
+ [example["attention_mask"] for example in examples],
+ batch_first=True,
+ padding_value=padding_value,
+ )
pixel_values = [example["pixel_values"] for example in examples]
image_bound = [example["image_bound"] for example in examples]
- return {"input_ids": input_ids, "labels":targets, "attention_mask": attention_mask, "image_bound": image_bound, "pixel_values": pixel_values}
+ tgt_sizes = [example["tgt_sizes"] for example in examples]
+ return {
+ "input_ids": input_ids,
+ "labels": targets,
+ "attention_mask": attention_mask,
+ "image_bound": image_bound,
+ "tgt_sizes": tgt_sizes,
+ "pixel_values": pixel_values,
+ }
-def conversation_to_ids(conversation, tokenizer):
+def conversation_to_ids(conversation, tokenizer, llm_type=None):
"""
for single image multi-turn conversation
conversation: [{'role': 'user', 'content': 'Describe this image'},
{'role': 'assistant', 'content': 'This is a cat.'}]
"""
- raw_msg = ''
- input_ids = []
- context = []
- for idx, msg in enumerate(conversation):
- role = msg['role']
- message = msg['content']
- assert role in ['user', 'assistant']
- if role == 'user':
- prefix = '<用户>'
- else:
- prefix = '<AI>'
- # append eos
- if idx == len(conversation) - 1:
- message = message + tokenizer.eos_token
- prefix_ids = tokenizer.encode(prefix)[1:] # remove bos
- message_ids = tokenizer.encode(message)[1:]
+ if llm_type == "llama3":
+ input_ids, context, raw_msg = conversation_to_ids_llama3(
+ conversation, tokenizer
+ )
+ else:
+ input_ids, context, raw_msg = conversation_to_ids_minicpm(
+ conversation, tokenizer
+ )
- input_ids.append(prefix_ids)
- input_ids.append(message_ids)
-
- context.append(np.ones((len(prefix_ids),), dtype=np.int8))
- if role == 'assistant':
- context.append(np.zeros((len(message_ids),), dtype=np.int8))
- else:
- context.append(np.ones((len(message_ids),), dtype=np.int8))
-
- raw_msg += (prefix + message)
-
ids = torch.from_numpy(np.hstack(input_ids, dtype=np.int32))
context = torch.from_numpy(np.hstack(context, dtype=np.int8))
@@ -94,45 +119,137 @@ def conversation_to_ids(conversation, tokenizer):
if context[i] == 0:
target[i - 1] = ids[i]
if context[i] == 1 and context[i - 1] == 0:
- target[i - 1] = tokenizer.eos_id
+ if hasattr(tokenizer, "eot_id"):
+ target[i - 1] = tokenizer.eot_id
+ else:
+ target[i - 1] = tokenizer.eos_id
# build image bound
image_start_tokens = torch.where(ids == tokenizer.im_start_id)[0]
image_start_tokens += 1
image_end_tokens = torch.where(ids == tokenizer.im_end_id)[0]
if len(image_start_tokens) != len(image_end_tokens):
- print('image start token != image end tokens')
- if len(image_start_tokens)>0:
- image_bound = torch.hstack([image_start_tokens.unsqueeze(-1), image_end_tokens.unsqueeze(-1)])
+ print("image start token != image end tokens")
+ if len(image_start_tokens) > 0:
+ image_bound = torch.hstack(
+ [image_start_tokens.unsqueeze(-1), image_end_tokens.unsqueeze(-1)]
+ )
else:
image_bound = []
return {
- 'input_ids': ids,
- 'target': target,
- 'image_bound': image_bound,
- 'raw_msg': raw_msg,
+ "input_ids": ids,
+ "target": target,
+ "image_bound": image_bound,
+ "raw_msg": raw_msg,
}
-def preprocess(image, conversation, tokenizer, transform, query_nums=64, slice_config=None):
+def conversation_to_ids_minicpm(conversation, tokenizer):
+ raw_msg = ""
+ input_ids = []
+ context = []
+ for idx, msg in enumerate(conversation):
+ role = msg["role"]
+ message = msg["content"]
+ assert role in ["user", "assistant"]
+ if role == "user":
+ prefix = "<用户>"
+ else:
+ prefix = ""
+ # append eos
+ if idx == len(conversation) - 1:
+ message = message + tokenizer.eos_token
+ prefix_ids = tokenizer.encode(prefix)[1:] # remove bos
+ message_ids = tokenizer.encode(message)[1:]
+
+ input_ids.append(prefix_ids)
+ input_ids.append(message_ids)
+
+ context.append(np.ones((len(prefix_ids),), dtype=np.int8))
+ if role == "assistant":
+ context.append(np.zeros((len(message_ids),), dtype=np.int8))
+ else:
+ context.append(np.ones((len(message_ids),), dtype=np.int8))
+
+ raw_msg += prefix + message
+
+ return input_ids, context, raw_msg
+
+
+def conversation_to_ids_llama3(conversation, tokenizer):
+ raw_msg = ""
+ input_ids = []
+ context = []
+ raw_msg = tokenizer.apply_chat_template(
+ conversation, tokenize=False, add_generation_prompt=False
+ )
+ input_ids = tokenizer.apply_chat_template(
+ conversation, tokenize=True, add_generation_prompt=False
+ )
+ input_ids = np.array(input_ids)
+
+ start_header_idxs = np.where(
+ input_ids == tokenizer.convert_tokens_to_ids("<|start_header_id|>")
+ )[0]
+ assistant_idxs = np.where(
+ input_ids == tokenizer.convert_tokens_to_ids("assistant")
+ )[0]
+ end_header_idxs = np.where(
+ input_ids == tokenizer.convert_tokens_to_ids("<|end_header_id|>")
+ )[0]
+ eot_idxs = np.where(
+ input_ids == tokenizer.convert_tokens_to_ids("<|eot_id|>"))[0]
+
+ context = np.ones_like(input_ids, dtype=np.int8)
+
+ for assistant_idx in assistant_idxs:
+ if assistant_idx in set((start_header_idxs + end_header_idxs) / 2):
+ st = assistant_idx + 3 # assistant<|end_header_id|>\n\n
+ for eot_idx in eot_idxs:
+ if eot_idx > st:
+ context[st: eot_idx + 1] = 0
+ break
+
+ input_ids = np.hstack(input_ids)
+ context = np.hstack(context)
+
+ return input_ids, context, raw_msg
+
+
+def preprocess(
+ image,
+ conversation,
+ tokenizer,
+ transform,
+ query_nums=64,
+ slice_config=None,
+ llm_type=None,
+ patch_size=14,
+ batch_vision=False,
+):
"""
single image preprocess, the image will be placed at the top of the conversation
"""
conversation = copy.deepcopy(conversation)
assert len(conversation) > 1, "conversation length must large than 2"
- assert conversation[0]['role'] == 'user', "the first role must be user"
+ assert conversation[0]["role"] == "user", "the first role must be user"
if slice_config is not None:
assert isinstance(slice_config, Dict)
- assert 'patch_size' in slice_config
- assert 'max_slice_nums' in slice_config
- assert 'scale_resolution' in slice_config
- default_image_placeholder = tokenizer.im_start + tokenizer.unk_token * query_nums + tokenizer.im_end
+ assert "patch_size" in slice_config
+ assert "max_slice_nums" in slice_config
+ assert "scale_resolution" in slice_config
+ default_image_placeholder = (
+ tokenizer.im_start + tokenizer.unk_token * query_nums + tokenizer.im_end
+ )
if slice_config:
images = []
source_image, patches, best_grid = slice_image(
- image, slice_config['max_slice_nums'], slice_config['scale_resolution'], slice_config['patch_size']
+ image,
+ slice_config["max_slice_nums"],
+ slice_config["scale_resolution"],
+ slice_config["patch_size"],
)
images.append(source_image)
image_placeholder = default_image_placeholder
@@ -142,30 +259,51 @@ def preprocess(image, conversation, tokenizer, transform, query_nums=64, slice_c
images.append(patches[i][j])
image_placeholder += get_grid_placeholder(
- tokenizer, best_grid, query_nums
- )
+ tokenizer, best_grid, query_nums)
images = [transform(i) for i in images]
else:
images = [transform(image)]
image_placeholder = default_image_placeholder
- if '<image>' in conversation[0]['content']:
- conversation[0]['content'] = conversation[0]['content'].replace('<image>', image_placeholder)
+ if "<image>" in conversation[0]["content"]:
+ conversation[0]["content"] = conversation[0]["content"].replace(
+ "<image>", image_placeholder
+ )
else:
- conversation[0]['content'] = image_placeholder + '\n' + conversation[0]['content']
+ conversation[0]["content"] = (
+ image_placeholder + "\n" + conversation[0]["content"]
+ )
+
+ input_dict = conversation_to_ids(conversation, tokenizer, llm_type)
+
+ if batch_vision:
+ tgt_sizes = []
+ reshape_images = []
+ for image in images:
+ H, W = image.shape[1:]
+ reshape_image = reshape_by_patch(image, patch_size)
+ reshape_images.append(reshape_image)
+ tgt_sizes.append([H // patch_size, W // patch_size])
+ if tgt_sizes:
+ tgt_sizes = torch.Tensor(tgt_sizes).type(torch.int32)
+
+ input_dict["pixel_values"] = reshape_images
+ input_dict["tgt_sizes"] = tgt_sizes
+
+ else:
+ input_dict["pixel_values"] = images
+ input_dict["tgt_sizes"] = []
- input_dict = conversation_to_ids(conversation, tokenizer)
- input_dict['pixel_values'] = images
return input_dict
-
def slice_image(
image, max_slice_nums=9, scale_resolution=448, patch_size=14, never_split=False
):
original_size = image.size
original_width, original_height = original_size
log_ratio = math.log(original_width / original_height)
- ratio = original_width * original_height / (scale_resolution * scale_resolution)
+ ratio = original_width * original_height / \
+ (scale_resolution * scale_resolution)
multiple = min(math.ceil(ratio), max_slice_nums)
source_image = None
@@ -186,7 +324,8 @@ def slice_image(
candidate_split_grids_nums.append(i)
# source image, down-sampling and ensure divided by patch_size
- best_resize = find_best_resize(original_size, scale_resolution, patch_size)
+ best_resize = find_best_resize(
+ original_size, scale_resolution, patch_size)
source_image = image.copy().resize(best_resize, Image.Resampling.BICUBIC)
candidate_grids = []
@@ -285,6 +424,22 @@ def get_grid_placeholder(tokenizer, grid, query_num):
for j in range(cols):
lines.append(image_placeholder)
slices.append("".join(lines))
- slice_placeholder = tokenizer.slice_start + "\n".join(slices) + tokenizer.slice_end
+ slice_placeholder = tokenizer.slice_start + \
+ "\n".join(slices) + tokenizer.slice_end
return slice_placeholder
+
+def reshape_by_patch(image_tensor, patch_size):
+ """
+ :param image_tensor: shape [3, H, W]
+ :param patch_size:
+ :return: [3, patch_size, HW/patch_size]
+ """
+ patches = torch.nn.functional.unfold(
+ image_tensor, (patch_size, patch_size), stride=(patch_size, patch_size)
+ )
+
+ patches = patches.reshape(image_tensor.size(0), patch_size, patch_size, -1)
+ patches = patches.permute(0, 1, 3, 2).reshape(
+ image_tensor.size(0), patch_size, -1)
+ return patches
diff --git a/finetune/finetune.py b/finetune/finetune.py
index 6a751e3..808700a 100644
--- a/finetune/finetune.py
+++ b/finetune/finetune.py
@@ -1,22 +1,22 @@
-import os
import glob
import json
import logging
+import os
from dataclasses import dataclass, field
-from typing import Dict, Optional, List
+from typing import Dict, List, Optional
+
import torch
-from torch.utils.data import Dataset
import transformers
-from trainer import CPMTrainer
-from deepspeed.runtime.zero.partition_parameters import ZeroParamStatus
-from deepspeed import zero
-
-from dataset import data_collator, SupervisedDataset
-
-
-from PIL import Image
-from transformers import AutoModel, AutoTokenizer
from accelerate.utils import DistributedType
+from deepspeed import zero
+from deepspeed.runtime.zero.partition_parameters import ZeroParamStatus
+from PIL import Image
+from torch.utils.data import Dataset
+from transformers import AutoModel, AutoTokenizer
+
+from dataset import SupervisedDataset, data_collator
+from trainer import CPMTrainer
+
@dataclass
class ModelArguments:
@@ -44,6 +44,8 @@ class TrainingArguments(transformers.TrainingArguments):
"help": "Maximum sequence length. Sequences will be right padded (and possibly truncated)."
},
)
+ tune_vision: Optional[bool] = field(default=True)
+ tune_llm: Optional[bool] = field(default=True)
def rank0_print(*args):
@@ -52,7 +54,15 @@ def rank0_print(*args):
def make_supervised_data_module(
- tokenizer: transformers.PreTrainedTokenizer, data_args, transform, data_collator=None, slice_config=None,
+ tokenizer: transformers.PreTrainedTokenizer,
+ data_args,
+ transform,
+ data_collator=None,
+ llm_type="minicpm",
+ slice_config=None,
+ patch_size=14,
+ query_nums=64,
+ batch_vision=False,
) -> Dict:
"""Make dataset and collator for supervised fine-tuning."""
dataset_cls = SupervisedDataset
@@ -60,19 +70,57 @@ def make_supervised_data_module(
rank0_print("Loading data...")
train_json = json.load(open(data_args.data_path, "r"))
- train_dataset = dataset_cls(train_json, transform, tokenizer, slice_config=slice_config)
+ train_dataset = dataset_cls(
+ train_json,
+ transform,
+ tokenizer,
+ slice_config=slice_config,
+ llm_type=llm_type,
+ patch_size=patch_size,
+ query_nums=query_nums,
+ batch_vision=batch_vision,
+ )
if data_args.eval_data_path:
eval_json = json.load(open(data_args.eval_data_path, "r"))
- eval_dataset = dataset_cls(eval_json, transform, tokenizer, slice_config=slice_config)
+ eval_dataset = dataset_cls(
+ eval_json,
+ transform,
+ tokenizer,
+ slice_config=slice_config,
+ llm_type=llm_type,
+ patch_size=patch_size,
+ query_nums=query_nums,
+ batch_vision=batch_vision,
+ )
else:
eval_dataset = None
- return dict(train_dataset=train_dataset, eval_dataset=eval_dataset, data_collator=data_collator)
+ return dict(
+ train_dataset=train_dataset,
+ eval_dataset=eval_dataset,
+ data_collator=data_collator,
+ )
+
+
+def get_parameter_number(model):
+ trainable_params, all_param = 0, 0
+ for param in model.parameters():
+ num_params = param.numel()
+ # if using DS Zero 3 and the weights are initialized empty
+ if num_params == 0 and hasattr(param, "ds_numel"):
+ num_params = param.ds_numel
+
+ all_param += num_params
+ if param.requires_grad:
+ trainable_params += num_params
+
+ return {'Total': all_param, 'Trainable': trainable_params}
local_rank = 0
+
def train():
global local_rank
@@ -85,8 +133,8 @@ def train():
data_args,
training_args,
) = parser.parse_args_into_dataclasses()
-
- if getattr(training_args, 'deepspeed', None):
+
+ if getattr(training_args, "deepspeed", None):
training_args.distributed_state.distributed_type = DistributedType.DEEPSPEED
compute_dtype = (
@@ -99,14 +147,50 @@ def train():
world_size = int(os.environ.get("WORLD_SIZE", 1))
ddp = world_size != 1
- device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} if ddp else None
-
- model = AutoModel.from_pretrained(model_args.model_name_or_path, trust_remote_code=True, torch_dtype=compute_dtype, device_map=device_map)
- tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, trust_remote_code=True)
- #Load data
+ device_map = {"": int(os.environ.get("LOCAL_RANK") or 0)} if ddp else None
+
+ model = AutoModel.from_pretrained(
+ model_args.model_name_or_path,
+ trust_remote_code=True,
+ torch_dtype=compute_dtype,
+ device_map=device_map,
+ )
+ tokenizer = AutoTokenizer.from_pretrained(
+ model_args.model_name_or_path, trust_remote_code=True
+ )
+
+ if not training_args.tune_vision:
+ model.vpm.requires_grad_(False)
+ if not training_args.tune_llm:
+ model.llm.requires_grad_(False)
+ rank0_print(get_parameter_number(model))
+
+ llm_type = "minicpm"
+ if "llama3" in model.name_or_path.lower():
+ tokenizer.chat_template = "{% set loop_messages = messages %}{% for message in loop_messages %}{% set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n'+ message['content'] | trim + '<|eot_id|>' %}{% if loop.index0 == 0 %}{% set content = bos_token + content %}{% endif %}{{ content }}{% endfor %}"
+ llm_type = "llama3"
+
+ # Load data
+ if hasattr(model.config, "slice_config"):
+ slice_config = model.config.slice_config.to_dict()
+ else:
+ slice_config = model.config.to_dict()
+ if hasattr(model.config, "batch_vision_input"):
+ batch_vision = model.config.batch_vision_input
+ else:
+ batch_vision = False
+
data_module = make_supervised_data_module(
- tokenizer=tokenizer, data_args=data_args, transform=model.transform, data_collator=data_collator, slice_config=model.config.__dict__,
+ tokenizer=tokenizer,
+ data_args=data_args,
+ transform=model.transform,
+ data_collator=data_collator,
+ slice_config=slice_config,
+ llm_type=llm_type,
+ patch_size=model.config.patch_size,
+ query_nums=model.config.query_num,
+ batch_vision=batch_vision,
)
trainer = CPMTrainer(
@@ -115,11 +199,10 @@ def train():
args=training_args,
**data_module,
)
-
+
trainer.train()
trainer.save_state()
if __name__ == "__main__":
train()
-
diff --git a/finetune/readme.md b/finetune/readme.md
index 26c3c47..bc6c69b 100644
--- a/finetune/readme.md
+++ b/finetune/readme.md
@@ -1,18 +1,13 @@
-# Minicpm-V2 Finetuning
+# MiniCPM-V Finetuning
-
-[English](README.md)
-
-
-
-We offer the official scripts for easy finetuning of the pretrained minicpm-v2 model on downstream tasks. Our finetune scripts use DeepSpeed by default.
+We offer the official scripts for easy finetuning of the pretrained **MiniCPM-Llama3-V 2.5** and **MiniCPM-V 2.0** on downstream tasks. Our finetune scripts use transformers Trainer and DeepSpeed by default.
### Data preparation
-To prepare your finetuning data, you should (1) formulate each sample as a dictionary consisting of an id, an image path list with an image (optional, not required for pure-text example), and a list of conversations, and (2) save data samples in JSON files.
+To prepare your finetuning data, you should formulate each sample as a dictionary consisting of an id, an image path list with an image, and a list of conversations. Then save data samples in JSON files.
-For the vision-language example with image, you are required to define placeholder(s) to define the position to insert the image embeddings.
+For the vision-language example with image, you are required to provide **\<image\>** to define the position to insert the image embeddings. If you don't provide \<image\>, the image will be placed at the front of the conversation.
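+
+A minimal sketch of one such record, written out with Python's `json` module (the file path and conversation text are placeholders; the field names follow the description above, and `<image>` marks where the image embeddings are inserted):
+
+```python
+import json
+
+sample = {
+    "id": "0",
+    "image": "path/to/image_0.jpg",
+    "conversations": [
+        {"role": "user", "content": "<image>\nWhat is in this picture?"},
+        {"role": "assistant", "content": "The picture shows ..."},
+    ],
+}
+
+# The training file is a JSON list of such records.
+with open("train.json", "w") as f:
+    json.dump([sample], f, ensure_ascii=False, indent=2)
+```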
@@ -57,10 +52,19 @@ For the vision-language example with image, you are required to define placehold
### Full-parameter finetuning
-Full-parameter parameter finetuning requires updating all parameters of LLM in the whole training process. To launch your training, run the following script:
+Full-parameter finetuning updates all parameters of the LLM throughout training. Please specify the correct MODEL path and DATA path in the shell scripts.
+
+```shell
+MODEL="openbmb/MiniCPM-Llama3-V-2_5" # or openbmb/MiniCPM-V-2
+DATA="path/to/trainging_data" # json file
+EVAL_DATA="path/to/test_data" # json file
+```
+
+To launch your training, run the following script:
```
sh finetune_ds.sh
```
+
#### Customizing Hyperparameters
To tailor the training process according to your specific requirements, you can adjust various hyperparameters. For comprehensive documentation on available hyperparameters and their functionalities, you can refer to the [official Transformers documentation](https://huggingface.co/docs/transformers/main_classes/trainer#transformers.TrainingArguments). Experimentation and fine-tuning of these parameters are essential for achieving optimal model performance tailored to your specific task and dataset.
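+
+Two switches specific to this repo, `--tune_vision` and `--tune_llm`, control which sub-modules stay trainable; a sketch of the freezing logic they drive in `finetune.py`:
+
+```python
+# Mirrors the logic in finetune.py's train(); both switches default to True.
+if not training_args.tune_vision:
+    model.vpm.requires_grad_(False)   # freeze the vision encoder
+if not training_args.tune_llm:
+    model.llm.requires_grad_(False)   # freeze the language model
+```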
diff --git a/finetune/trainer.py b/finetune/trainer.py
index 53ffaa6..ae2f935 100644
--- a/finetune/trainer.py
+++ b/finetune/trainer.py
@@ -1,23 +1,22 @@
+from typing import Any, Dict, List, Optional, Tuple, Union
+
import torch
import torch.nn as nn
-from typing import Tuple, Union, Optional, List, Dict, Any
from transformers import Trainer
from transformers.trainer_pt_utils import nested_detach
from transformers.utils import is_sagemaker_mp_enabled
+
+
class CPMTrainer(Trainer):
- def compute_loss(
- self,
- model,
- inputs,
- return_outputs=False
- ):
+ def compute_loss(self, model, inputs, return_outputs=False):
if "labels" in inputs:
labels = inputs.pop("labels")
else:
labels = None
-
- vllm_embedding, vision_hidden_states = self.model.get_vllm_embedding(inputs)
-
+
+ vllm_embedding, vision_hidden_states = self.model.get_vllm_embedding(
+ inputs)
+
outputs = self.model.llm(
inputs_embeds=vllm_embedding,
use_cache=False,
@@ -26,7 +25,8 @@ class CPMTrainer(Trainer):
if labels is not None:
# Flatten the tokens
loss_fct = nn.CrossEntropyLoss()
- logits = outputs.logits.view(-1, self.model.config.vocab_size).contiguous()
+ logits = outputs.logits.view(-1,
+ self.model.config.vocab_size).contiguous()
labels = labels.view(-1).long().contiguous()
# Enable model parallelism
labels = labels.to(logits.device)
@@ -35,19 +35,20 @@ class CPMTrainer(Trainer):
if isinstance(outputs, dict) and "loss" not in outputs:
raise ValueError(
"The model did not return a loss from the inputs, only the following keys: "
- f"{','.join(outputs.keys())}. For reference, the inputs it received are {','.join(inputs.keys())}."
+ f"{','.join(outputs.keys())}. For reference, the inputs it received are {
+ ','.join(inputs.keys())}."
)
# We don't use .loss here since the model may return tuples instead of ModelOutput.
loss = outputs["loss"] if isinstance(outputs, dict) else outputs[0]
return (loss, outputs) if return_outputs else loss
-
+
def prediction_step(
- self,
- model: nn.Module,
- inputs:Dict[str, Union[torch.Tensor, Any]],
- prediction_loss_only: bool,
- ignore_keys: Optional[List[str]] = None,
+ self,
+ model: nn.Module,
+ inputs: Dict[str, Union[torch.Tensor, Any]],
+ prediction_loss_only: bool,
+ ignore_keys: Optional[List[str]] = None,
) -> Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]:
"""
Perform an evaluation step on `model` using `inputs`.
@@ -72,25 +73,34 @@ class CPMTrainer(Trainer):
Tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]: A tuple with the loss,
logits and labels (each being optional).
"""
- has_labels = False if len(self.label_names) == 0 else all(inputs.get(k) is not None for k in self.label_names)
+ has_labels = (
+ False
+ if len(self.label_names) == 0
+ else all(inputs.get(k) is not None for k in self.label_names)
+ )
# For CLIP-like models capable of returning loss values.
# If `return_loss` is not specified or being `None` in `inputs`, we check if the default value of `return_loss`
# is `True` in `model.forward`.
return_loss = inputs.get("return_loss", None)
if return_loss is None:
return_loss = self.can_return_loss
- loss_without_labels = True if len(self.label_names) == 0 and return_loss else False
+ loss_without_labels = (
+ True if len(self.label_names) == 0 and return_loss else False
+ )
inputs = self._prepare_inputs(inputs)
if ignore_keys is None:
if hasattr(self.model, "config"):
- ignore_keys = getattr(self.model.config, "keys_to_ignore_at_inference", [])
+ ignore_keys = getattr(
+ self.model.config, "keys_to_ignore_at_inference", []
+ )
else:
ignore_keys = []
# labels may be popped when computing the loss (label smoothing for instance) so we grab them first.
if has_labels or loss_without_labels:
- labels = nested_detach(tuple(inputs.get(name) for name in self.label_names))
+ labels = nested_detach(tuple(inputs.get(name)
+ for name in self.label_names))
if len(labels) == 1:
labels = labels[0]
else:
@@ -102,7 +112,11 @@ class CPMTrainer(Trainer):
if has_labels or loss_without_labels:
if isinstance(raw_outputs, dict):
loss_mb = raw_outputs["loss"]
- logits_mb = tuple(v for k, v in raw_outputs.items() if k not in ignore_keys + ["loss"])
+ logits_mb = tuple(
+ v
+ for k, v in raw_outputs.items()
+ if k not in ignore_keys + ["loss"]
+ )
else:
loss_mb = raw_outputs[0]
logits_mb = raw_outputs[1:]
@@ -112,18 +126,26 @@ class CPMTrainer(Trainer):
else:
loss = None
if isinstance(raw_outputs, dict):
- logits_mb = tuple(v for k, v in raw_outputs.items() if k not in ignore_keys)
+ logits_mb = tuple(
+ v for k, v in raw_outputs.items() if k not in ignore_keys
+ )
else:
logits_mb = raw_outputs
logits = smp_nested_concat(logits_mb)
else:
if has_labels or loss_without_labels:
with self.compute_loss_context_manager():
- loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
+ loss, outputs = self.compute_loss(
+ model, inputs, return_outputs=True
+ )
loss = loss.mean().detach()
if isinstance(outputs, dict):
- logits = tuple(v for k, v in outputs.items() if k not in ignore_keys + ["loss"])
+ logits = tuple(
+ v
+ for k, v in outputs.items()
+ if k not in ignore_keys + ["loss"]
+ )
else:
logits = outputs[1:]
else:
@@ -131,7 +153,9 @@ class CPMTrainer(Trainer):
with self.compute_loss_context_manager():
outputs = model(**inputs)
if isinstance(outputs, dict):
- logits = tuple(v for k, v in outputs.items() if k not in ignore_keys)
+ logits = tuple(
+ v for k, v in outputs.items() if k not in ignore_keys
+ )
else:
logits = outputs
# TODO: this needs to be fixed and made cleaner later.
@@ -146,5 +170,3 @@ class CPMTrainer(Trainer):
logits = logits[0]
return (loss, logits, labels)
-
-
diff --git a/minicpm_v1.md b/minicpm_v1.md
index ecf1d98..af345c2 100644
--- a/minicpm_v1.md
+++ b/minicpm_v1.md
@@ -1,4 +1,8 @@
## MiniCPM-V 1.0
+
+
+> Archived at: 2024-05-19
+
MiniCPM-V 1.0 is an efficient version with promising performance for deployment. The model is built based on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of MiniCPM-V 1.0 include:
- ⚡️ **High Efficiency.**
diff --git a/omnilmm.md b/omnilmm.md
new file mode 100644
index 0000000..14510be
--- /dev/null
+++ b/omnilmm.md
@@ -0,0 +1,183 @@
+## OmniLMM-12B
+
+> OmniLMM-12B was released in the early stage of this project. We recommend using our [recently released models](./README.md) for more efficient inference and stronger performance.
+
+> Archived at: 2024-05-19
+
+**OmniLMM-12B** is the most capable version in this series. The model is built upon EVA02-5B and Zephyr-7B-β, connected with a perceiver resampler, and trained on multimodal data in a curriculum fashion. The model has three notable features:
+
+- 🔥 **Leading Performance.**
+
+  OmniLMM-12B achieves **leading performance** among models of comparable size on multiple benchmarks (including MME, MMBench, SEED-Bench, etc.), and has acquired rich multimodal world knowledge.
+
+- 🏆 **Trustworthy Behavior.**
+
+  Hallucination is a widely noted problem of multimodal LLMs: models often generate text that is not factually grounded in the image (e.g., confidently describing objects that do not exist in it). OmniLMM-12B is **the first open-source multimodal LLM with strong overall capability that is aligned via multimodal RLHF** (using the [RLHF-V](https://rlhf-v.github.io/) [CVPR'24] series of techniques). It achieves the **best level among open-source models** on the [MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench) hallucination benchmark and **outperforms GPT-4V** on [Object HalBench](https://arxiv.org/abs/2312.00849).
+
+- 🕹 **Real-time Multimodal Interaction.**
+
+  We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and handles speech input and output with external tools. While still preliminary, we find that it can **replicate some of the fun cases in the Gemini demo video without any video editing**.
+
+### Evaluation
+
+Detailed evaluation results on MME, MMBench, MMMU, MMHal-Bench, Object HalBench, SeedBench, LLaVA Bench W, MathVista.
+
+| Model | Size | MME | MMB dev (en) | MMMU val | MMHal-Bench | Object HalBench | SeedBench-I | MathVista | LLaVA Bench |
+|:--------------|:-----:|:------:|:------------:|:--------:|:-----------:|:---------------:|:-----------:|:---------:|:-----------:|
+| GPT-4V†       | -     | 1771.5 | 75.1 | 56.8 | 3.53 / 70.8 | 86.4 / 92.7 | 71.6 | 47.8 | 93.1 |
+| Qwen-VL-Plus† | -     | 2183.4 | 66.2 | 45.2 | -           | -           | 65.7 | 36.0 | 73.7 |
+| Yi-VL 6B      | 6.7B  | 1915.1 | 68.6 | 40.3 | -           | -           | 67.5 | 28.8 | 51.9 |
+| Qwen-VL-Chat  | 9.6B  | 1860.0 | 60.6 | 35.9 | 2.93 / 59.4 | 56.2 / 80.0 | 64.8 | 33.8 | 67.7 |
+| CogVLM-Chat   | 17.4B | 1736.6 | 63.7 | 32.1 | 2.68 / 52.1 | 73.6 / 87.4 | 68.8 | 34.7 | 73.9 |
+| LLaVA 1.5     | 13.6B | 1808.4 | 68.2 | 36.4 | 2.71 / 51.0 | 53.7 / 77.4 | 68.1 | 26.4 | 64.6 |
+| OmniLMM-12B   | 11.6B | 1935.8 | 71.6 | 40.7 | 3.45 / 68.8 | 90.3 / 95.5 | 71.1 | 34.9 | 72.0 |
+
+†: Proprietary models
+
+
+
+### Examples
+
+We combine OmniLMM-12B and ChatGPT-3.5 (text-only) to build a **real-time multimodal interactive assistant**: OmniLMM-12B converts video frames into image descriptions, which are fed to ChatGPT-3.5 to generate responses to user instructions. The demo video is a raw recording without editing.
+
+## Online Demo
+
+Welcome to try the web-based inference demos of our models: [OmniLMM-12B](http://120.92.209.146:8081) | [MiniCPM-V 2.0](http://120.92.209.146:80).
+
+## Installation
+
+1. Clone this repository and navigate to the source folder
+
+```bash
+git clone https://github.com/OpenBMB/MiniCPM-V.git
+cd MiniCPM-V
+```
+
+2. Create a conda environment
+
+```Shell
+conda create -n MiniCPMV python=3.10 -y
+conda activate MiniCPMV
+```
+
+3. Install dependencies
+
+```shell
+pip install -r requirements.txt
+```
+
+## Inference
+
+### Model Zoo
+
+| Model | Description | Download Link |
+|:----------------------|:-------------------|:---------------:|
+| OmniLMM-12B | The most capable version. | [🤗](https://huggingface.co/openbmb/OmniLMM-12B) [ModelScope](https://modelscope.cn/models/OpenBMB/OmniLMM-12B/files) |
+
diff --git a/omnilmm_en.md b/omnilmm_en.md
new file mode 100644
index 0000000..6782d44
--- /dev/null
+++ b/omnilmm_en.md
@@ -0,0 +1,155 @@
+## OmniLMM-12B
+
+> OmniLMM-12B was released in the early stage of this project. We recommend using our [recently released models](./README.md) for better performance and efficiency.
+
+> Archived at: 2024-05-19
+
+
+**OmniLMM-12B** is the most capable version in this series. The model is built upon EVA02-5B and Zephyr-7B-β, connected with a perceiver resampler layer, and trained on multimodal data in a curriculum fashion. The model has three notable features:
+
+- 🔥 **Strong Performance.**
+
+  OmniLMM-12B achieves **leading performance** among models of comparable size, surpassing established LMMs on multiple benchmarks (including MME, MMBench, SEED-Bench, etc.). The model is also endowed with rich multimodal world knowledge.
+
+- 🏆 **Trustworthy Behavior.**
+
+ LMMs are known for suffering from hallucination, often generating text that is not factually grounded in images (e.g., faithfully describing non-existing objects in images). OmniLMM-12B is **the first state-of-the-art open-source LMM aligned via multimodal RLHF for trustworthy behavior** (using the recent [RLHF-V](https://rlhf-v.github.io/) technique). It **ranks #1** among open-source models on [MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), and **outperforms GPT-4V** on [Object HalBench](https://arxiv.org/abs/2312.00849).
+
+- 🕹 **Real-time Multimodal Interaction.**
+
+  We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and speech streams from the microphone and emits speech output. While still preliminary, we find the model can **replicate some of the fun cases shown in the Gemini demo video, without any video editing**.
+
+
+### Evaluation
+
+Results on MME, MMBench, MMMU, MMHal-Bench, Object HalBench, SeedBench, LLaVA Bench, MathVista.
+
+| Model | Size | MME | MMB dev (en) | MMMU val | MMHal-Bench | Object HalBench | SeedBench-I | MathVista | LLaVA Bench |
+|:--------------|:-----:|:------:|:------------:|:--------:|:-----------:|:---------------:|:-----------:|:---------:|:-----------:|
+| GPT-4V†       | -     | 1771.5 | 75.1 | 56.8 | 3.53 / 70.8 | 86.4 / 92.7 | 71.6 | 47.8 | 93.1 |
+| Qwen-VL-Plus† | -     | 2183.4 | 66.2 | 45.2 | -           | -           | 65.7 | 36.0 | 73.7 |
+| Yi-VL 6B      | 6.7B  | 1915.1 | 68.6 | 40.3 | -           | -           | 67.5 | 28.8 | 51.9 |
+| Qwen-VL-Chat  | 9.6B  | 1860.0 | 60.6 | 35.9 | 2.93 / 59.4 | 56.2 / 80.0 | 64.8 | 33.8 | 67.7 |
+| CogVLM-Chat   | 17.4B | 1736.6 | 63.7 | 32.1 | 2.68 / 52.1 | 73.6 / 87.4 | 68.8 | 34.7 | 73.9 |
+| LLaVA 1.5     | 13.6B | 1808.4 | 68.2 | 36.4 | 2.71 / 51.0 | 53.7 / 77.4 | 68.1 | 26.4 | 64.6 |
+| OmniLMM-12B   | 11.6B | 1935.8 | 71.6 | 40.7 | 3.45 / 68.8 | 90.3 / 95.5 | 71.1 | 34.9 | 72.0 |
+
+†: Proprietary models
+
+
+
+### Examples
+
+We combine OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**: video frames are described in text by OmniLMM-12B, and ChatGPT-3.5 generates responses according to the descriptions and user prompts. The demo video is a raw recording without editing.
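+
+The pipeline can be summarized with a short, purely illustrative sketch. `describe_frame` and `text_llm_reply` below are hypothetical placeholders for an OmniLMM-12B call and a text-only LLM call respectively; they are not APIs from this repository.
+
+```python
+from typing import Dict, List
+
+def describe_frame(frame_path: str) -> str:
+    # Placeholder: a real assistant would ask OmniLMM-12B to describe the frame.
+    return f"[description of {frame_path}]"
+
+def text_llm_reply(history: List[Dict[str, str]]) -> str:
+    # Placeholder: a real assistant would query a text-only model such as GPT-3.5.
+    return "[reply grounded in the frame descriptions above]"
+
+def assistant_turn(frame_path: str, user_prompt: str, history: List[Dict[str, str]]) -> str:
+    # The vision model only contributes descriptions; dialogue is delegated to the text-only model.
+    history.append({"role": "system", "content": describe_frame(frame_path)})
+    history.append({"role": "user", "content": user_prompt})
+    reply = text_llm_reply(history)
+    history.append({"role": "assistant", "content": reply})
+    return reply
+
+print(assistant_turn("frame_000.jpg", "What am I holding?", []))
+```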
+
+
+
+
+### Model Zoo
+
+| Model | Description | Download Link |
+|:----------------------|:-------------------|:---------------:|
+| OmniLMM-12B | The most capable version with leading performance. | [🤗](https://huggingface.co/openbmb/OmniLMM-12B) [ModelScope](https://modelscope.cn/models/OpenBMB/OmniLMM-12B/files) |
diff --git a/requirements.txt b/requirements.txt
index 1ffb228..9867617 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -21,10 +21,10 @@ torch==2.1.2
torchvision==0.16.2
tqdm==4.66.1
protobuf==4.25.0
-transformers==4.36.0
+transformers==4.40.0
typing_extensions==4.8.0
uvicorn==0.24.0.post1
#xformers==0.0.22.post7
#flash_attn==2.3.4
sentencepiece==0.1.99
-accelerate==0.24.1
+accelerate==0.30.1
diff --git a/web_demo_2.5.py b/web_demo_2.5.py
new file mode 100644
index 0000000..4d11356
--- /dev/null
+++ b/web_demo_2.5.py
@@ -0,0 +1,252 @@
+#!/usr/bin/env python
+# encoding: utf-8
+import gradio as gr
+from PIL import Image
+import traceback
+import re
+import torch
+import argparse
+from transformers import AutoModel, AutoTokenizer
+
+# README, How to run demo on different devices
+
+# For Nvidia GPUs.
+# python web_demo_2.5.py --device cuda
+
+# For Mac with MPS (Apple silicon or AMD GPUs).
+# PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
+
+# Argparser
+parser = argparse.ArgumentParser(description='demo')
+parser.add_argument('--device', type=str, default='cuda', help='cuda or mps')
+args = parser.parse_args()
+device = args.device
+assert device in ['cuda', 'mps']
+
+# Load model
+model_path = 'openbmb/MiniCPM-Llama3-V-2_5'
+model = AutoModel.from_pretrained(model_path, trust_remote_code=True).to(dtype=torch.float16)
+tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
+
+model = model.to(device=device)
+model.eval()
+
+
+
+ERROR_MSG = "Error, please retry"
+model_name = 'MiniCPM-V 2.5'
+
+form_radio = {
+ 'choices': ['Beam Search', 'Sampling'],
+ #'value': 'Beam Search',
+ 'value': 'Sampling',
+ 'interactive': True,
+ 'label': 'Decode Type'
+}
+# Beam Form
+num_beams_slider = {
+ 'minimum': 0,
+ 'maximum': 5,
+ 'value': 3,
+ 'step': 1,
+ 'interactive': True,
+ 'label': 'Num Beams'
+}
+repetition_penalty_slider = {
+ 'minimum': 0,
+ 'maximum': 3,
+ 'value': 1.2,
+ 'step': 0.01,
+ 'interactive': True,
+ 'label': 'Repetition Penalty'
+}
+repetition_penalty_slider2 = {
+ 'minimum': 0,
+ 'maximum': 3,
+ 'value': 1.05,
+ 'step': 0.01,
+ 'interactive': True,
+ 'label': 'Repetition Penalty'
+}
+max_new_tokens_slider = {
+ 'minimum': 1,
+ 'maximum': 4096,
+ 'value': 1024,
+ 'step': 1,
+ 'interactive': True,
+ 'label': 'Max New Tokens'
+}
+
+top_p_slider = {
+ 'minimum': 0,
+ 'maximum': 1,
+ 'value': 0.8,
+ 'step': 0.05,
+ 'interactive': True,
+ 'label': 'Top P'
+}
+top_k_slider = {
+ 'minimum': 0,
+ 'maximum': 200,
+ 'value': 100,
+ 'step': 1,
+ 'interactive': True,
+ 'label': 'Top K'
+}
+temperature_slider = {
+ 'minimum': 0,
+ 'maximum': 2,
+ 'value': 0.7,
+ 'step': 0.05,
+ 'interactive': True,
+ 'label': 'Temperature'
+}
+
+
+def create_component(params, comp='Slider'):
+ if comp == 'Slider':
+ return gr.Slider(
+ minimum=params['minimum'],
+ maximum=params['maximum'],
+ value=params['value'],
+ step=params['step'],
+ interactive=params['interactive'],
+ label=params['label']
+ )
+ elif comp == 'Radio':
+ return gr.Radio(
+ choices=params['choices'],
+ value=params['value'],
+ interactive=params['interactive'],
+ label=params['label']
+ )
+ elif comp == 'Button':
+ return gr.Button(
+ value=params['value'],
+ interactive=True
+ )
+
+
+def chat(img, msgs, ctx, params=None, vision_hidden_states=None):
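+    # Run one chat turn: query the model with the uploaded image and message
+    # history, then post-process the answer for display.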
+ default_params = {"num_beams":3, "repetition_penalty": 1.2, "max_new_tokens": 1024}
+ if params is None:
+ params = default_params
+ if img is None:
+ return -1, "Error, invalid image, please upload a new image", None, None
+ try:
+ image = img.convert('RGB')
+ answer = model.chat(
+ image=image,
+ msgs=msgs,
+ tokenizer=tokenizer,
+ **params
+ )
+        # Strip grounding markup and brackets from the answer before display
+        # (the <box>/<ref> tag names here are assumed).
+        res = re.sub(r'(<box>.*</box>)', '', answer)
+        res = res.replace('[', '')
+        res = res.replace(']', '')
+        res = res.replace('<ref>', '')
+        answer = res.replace('</ref>', '')
+ return -1, answer, None, None
+ except Exception as err:
+ print(err)
+ traceback.print_exc()
+ return -1, ERROR_MSG, None, None
+
+
+def upload_img(image, _chatbot, _app_session):
+ image = Image.fromarray(image)
+
+ _app_session['sts']=None
+ _app_session['ctx']=[]
+ _app_session['img']=image
+ _chatbot.append(('', 'Image uploaded successfully, you can talk to me now'))
+ return _chatbot, _app_session
+
+
+def respond(_question, _chat_bot, _app_cfg, params_form, num_beams, repetition_penalty, repetition_penalty_2, top_p, top_k, temperature):
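+    # Build decoding parameters from the UI controls, run one chat turn, and
+    # append the (question, answer) pair to the Gradio chat history.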
+ if _app_cfg.get('ctx', None) is None:
+ _chat_bot.append((_question, 'Please upload an image to start'))
+ return '', _chat_bot, _app_cfg
+
+ _context = _app_cfg['ctx'].copy()
+ if _context:
+ _context.append({"role": "user", "content": _question})
+ else:
+ _context = [{"role": "user", "content": _question}]
+    print('<User>:', _question)
+
+ if params_form == 'Beam Search':
+ params = {
+ 'sampling': False,
+ 'num_beams': num_beams,
+ 'repetition_penalty': repetition_penalty,
+ "max_new_tokens": 896
+ }
+ else:
+ params = {
+ 'sampling': True,
+ 'top_p': top_p,
+ 'top_k': top_k,
+ 'temperature': temperature,
+ 'repetition_penalty': repetition_penalty_2,
+ "max_new_tokens": 896
+ }
+ code, _answer, _, sts = chat(_app_cfg['img'], _context, None, params)
+    print('<Assistant>:', _answer)
+
+ _context.append({"role": "assistant", "content": _answer})
+ _chat_bot.append((_question, _answer))
+ if code == 0:
+ _app_cfg['ctx']=_context
+ _app_cfg['sts']=sts
+ return '', _chat_bot, _app_cfg
+
+
+def regenerate_button_clicked(_question, _chat_bot, _app_cfg, params_form, num_beams, repetition_penalty, repetition_penalty_2, top_p, top_k, temperature):
+ if len(_chat_bot) <= 1:
+ _chat_bot.append(('Regenerate', 'No question for regeneration.'))
+ return '', _chat_bot, _app_cfg
+ elif _chat_bot[-1][0] == 'Regenerate':
+ return '', _chat_bot, _app_cfg
+ else:
+ _question = _chat_bot[-1][0]
+ _chat_bot = _chat_bot[:-1]
+ _app_cfg['ctx'] = _app_cfg['ctx'][:-2]
+ return respond(_question, _chat_bot, _app_cfg, params_form, num_beams, repetition_penalty, repetition_penalty_2, top_p, top_k, temperature)
+
+
+
+with gr.Blocks() as demo:
+ with gr.Row():
+ with gr.Column(scale=1, min_width=300):
+ params_form = create_component(form_radio, comp='Radio')
+ with gr.Accordion("Beam Search") as beams_according:
+ num_beams = create_component(num_beams_slider)
+ repetition_penalty = create_component(repetition_penalty_slider)
+ with gr.Accordion("Sampling") as sampling_according:
+ top_p = create_component(top_p_slider)
+ top_k = create_component(top_k_slider)
+ temperature = create_component(temperature_slider)
+ repetition_penalty_2 = create_component(repetition_penalty_slider2)
+ regenerate = create_component({'value': 'Regenerate'}, comp='Button')
+ with gr.Column(scale=3, min_width=500):
+ app_session = gr.State({'sts':None,'ctx':None,'img':None})
+ bt_pic = gr.Image(label="Upload an image to start")
+ chat_bot = gr.Chatbot(label=f"Chat with {model_name}")
+ txt_message = gr.Textbox(label="Input text")
+
+ regenerate.click(
+ regenerate_button_clicked,
+ [txt_message, chat_bot, app_session, params_form, num_beams, repetition_penalty, repetition_penalty_2, top_p, top_k, temperature],
+ [txt_message, chat_bot, app_session]
+ )
+ txt_message.submit(
+ respond,
+ [txt_message, chat_bot, app_session, params_form, num_beams, repetition_penalty, repetition_penalty_2, top_p, top_k, temperature],
+ [txt_message, chat_bot, app_session]
+ )
+ bt_pic.upload(lambda: None, None, chat_bot, queue=False).then(upload_img, inputs=[bt_pic,chat_bot,app_session], outputs=[chat_bot,app_session])
+
+# launch
+demo.launch(share=False, debug=True, show_api=False, server_port=8080, server_name="0.0.0.0")
+