From ef2b191f25851580f43d50393e53ba0766b12539 Mon Sep 17 00:00:00 2001
From: yiranyyu <2606375857@qq.com>
Date: Thu, 23 May 2024 11:25:30 +0800
Subject: [PATCH] update readme
---
README.md | 7 ++++---
README_zh.md | 1 +
2 files changed, 5 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index ad8336a..d123250 100644
--- a/README.md
+++ b/README.md
@@ -25,6 +25,7 @@
## News
+
* [2024.05.20] We open-source MiniCPM-Llama3-V 2.5. It has improved OCR capability, supports 30+ languages, and is the first edge-side multimodal LLM to achieve GPT-4V-level performance! We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](./finetune/readme.md); try it now!
* [2024.04.23] MiniCPM-V-2.0 supports vLLM now! Click [here](#vllm) to view more details.
* [2024.04.18] We create a HuggingFace Space to host the MiniCPM-V 2.0 demo; try it [here](https://huggingface.co/spaces/openbmb/MiniCPM-V-2)!
@@ -462,7 +463,7 @@ pip install -r requirements.txt
| MiniCPM-V 1.0 | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
### Multi-turn Conversation
-Please refer to the following codes to run `MiniCPM-V` and `OmniLMM`.
+Please refer to the following code to run `MiniCPM-V`.
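For reference, a minimal multi-turn sketch in the style of the repository's example code (it assumes the `openbmb/MiniCPM-Llama3-V-2_5` checkpoint exposes its custom `chat()` method via `trust_remote_code`; the image path and questions below are placeholders):

```python
# Minimal multi-turn sketch; assumes the checkpoint ships a custom chat() method
# loaded with trust_remote_code, as in the repository's README examples.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_id = 'openbmb/MiniCPM-Llama3-V-2_5'
model = AutoModel.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16).to('cuda')
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model.eval()

image = Image.open('example.jpg').convert('RGB')  # placeholder image path

# First turn
msgs = [{'role': 'user', 'content': 'What is in the image?'}]
answer = model.chat(image=image, msgs=msgs, tokenizer=tokenizer, sampling=True, temperature=0.7)
print(answer)

# Second turn: append the previous answer, then ask a follow-up question
msgs.append({'role': 'assistant', 'content': answer})
msgs.append({'role': 'user', 'content': 'Describe it in one sentence.'})
print(model.chat(image=image, msgs=msgs, tokenizer=tokenizer, sampling=True, temperature=0.7))
```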
@@ -618,9 +619,9 @@ Please contact cpm@modelbest.cn to obtain written authorization for commercial u
## Statement
-As LMMs, OmniLMMs generate contents by learning a large amount of multimodal corpora, but they cannot comprehend, express personal opinions or make value judgement. Anything generated by OmniLMMs does not represent the views and positions of the model developers
+As LMMs, MiniCPM-V models (including OmniLMM) generate content by learning from large amounts of multimodal corpora, but they cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-V models does not represent the views and positions of the model developers.
-We will not be liable for any problems arising from the use of OmniLMM open source models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model.
+We will not be liable for any problems arising from the use of MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination, or abuse of the models.
## Institutions
diff --git a/README_zh.md b/README_zh.md
index 29f737c..9fbb527 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -28,6 +28,7 @@
## 更新日志
+
* [2024.05.20] 我们开源了 MiniCPM-Llama3-V 2.5,增强了 OCR 能力,支持 30 多种语言,并首次在端侧实现了 GPT-4V 级的多模态能力!我们提供了[高效推理](#手机端部署)和[简易微调](./finetune/readme.md)的支持,欢迎试用!
* [2024.04.23] 我们增加了对 [vLLM](#vllm) 的支持,欢迎体验!
* [2024.04.18] 我们在 HuggingFace Space 新增了 MiniCPM-V 2.0 的 [demo](https://huggingface.co/spaces/openbmb/MiniCPM-V-2),欢迎体验!