From 24b6aa1620cb156d5f434a96ebd97f5dc3e1825e Mon Sep 17 00:00:00 2001
From: yiranyyu <2606375857@qq.com>
Date: Thu, 23 May 2024 14:09:12 +0800
Subject: [PATCH] update readme
---
README.md | 2 +-
docs/compare_with_phi-3_vision.md | 4 ++--
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/README.md b/README.md
index b73b2f9..75aa4d5 100644
--- a/README.md
+++ b/README.md
@@ -30,7 +30,7 @@
* [2024.04.18] We created a HuggingFace Space to host the demo of MiniCPM-V 2.0 [here](https://huggingface.co/spaces/openbmb/MiniCPM-V-2)!
* [2024.04.17] MiniCPM-V 2.0 now supports deploying a [WebUI Demo](#webui-demo)!
* [2024.04.15] MiniCPM-V 2.0 now also supports [fine-tuning](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v-2最佳实践.md) with the SWIFT framework!
-* [2024.04.12] We open-source MiniCPM-V-2.0, which achieves comparable performance with Gemini Pro in understanding scene text and outperforms strong Qwen-VL-Chat 9.6B and Yi-VL 34B on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. Click here to view the MiniCPM-V 2.0 technical blog.
+* [2024.04.12] We open-source MiniCPM-V 2.0, which achieves performance comparable to Gemini Pro in understanding scene text and outperforms the strong Qwen-VL-Chat 9.6B and Yi-VL 34B on OpenCompass, a comprehensive evaluation over 11 popular benchmarks. Click here to view the MiniCPM-V 2.0 technical blog.
* [2024.03.14] MiniCPM-V now supports [fine-tuning](https://github.com/modelscope/swift/blob/main/docs/source/Multi-Modal/minicpm-v最佳实践.md) with the SWIFT framework. Thanks to [Jintao](https://github.com/Jintao-Huang) for the contribution!
* [2024.03.01] MiniCPM-V can now be deployed on Mac!
* [2024.02.01] We open-source MiniCPM-V and OmniLMM-12B, which support efficient end-side deployment and powerful multimodal capabilities, respectively.
diff --git a/docs/compare_with_phi-3_vision.md b/docs/compare_with_phi-3_vision.md
index 46f0628..b6b9d58 100644
--- a/docs/compare_with_phi-3_vision.md
+++ b/docs/compare_with_phi-3_vision.md
@@ -6,9 +6,9 @@ Comparison results of Phi-3-vision-128K-Instruct and MiniCPM-Llama3-V 2.5, regar
## Hardware Requirements (硬件需求)
-With int4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference of 6-8 tokens/s on edge devices with only 8GB of GPU memory.
+With int4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference with only 8GB of GPU memory.
-With int4 quantization, MiniCPM-Llama3-V 2.5 needs only 8GB of GPU memory for smooth on-device inference at 6-8 tokens/s.
+With int4 quantization, MiniCPM-Llama3-V 2.5 needs only 8GB of GPU memory for inference.
| Model(模型) | GPU Memory(显存) |
|:----------------------|:-------------------:|
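A minimal sketch of what the int4 inference path described above could look like in practice, assuming the pre-quantized checkpoint ID `openbmb/MiniCPM-Llama3-V-2_5-int4` and the `model.chat` calling convention used in the repo's README; the image path and prompt are placeholders:

```python
# Sketch: loading MiniCPM-Llama3-V 2.5 in int4 on a GPU with ~8GB of memory.
# The checkpoint ID and the `chat` signature are assumptions based on the
# repo's README conventions, not a definitive usage reference.
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-Llama3-V-2_5-int4"  # assumed pre-quantized int4 checkpoint
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder input image
msgs = [{"role": "user", "content": "What is in this image?"}]

# Single-turn multimodal chat; the quantized model is loaded directly
# without an explicit .to("cuda") call, as is typical for int4 checkpoints.
answer = model.chat(image=image, msgs=msgs, tokenizer=tokenizer)
print(answer)
```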