From dfbc3211ef8e1a82ea12d46b117c9f3b731b0fb1 Mon Sep 17 00:00:00 2001
From: yiranyyu <2606375857@qq.com>
Date: Tue, 4 Jun 2024 11:03:53 +0800
Subject: [PATCH] update readme
---
README.md | 22 +++++++++++++++++++---
README_en.md | 24 ++++++++++++++++++++----
README_zh.md | 22 +++++++++++++++++++---
3 files changed, 58 insertions(+), 10 deletions(-)
diff --git a/README.md b/README.md
index 0b4105c..7e77aef 100644
--- a/README.md
+++ b/README.md
@@ -684,9 +684,25 @@ This project is developed by the following institutions:
## 🌟 Star History
-
-
-
+
+
+
+
+
+
## Citation
diff --git a/README_en.md b/README_en.md
index 8426c3a..7e77aef 100644
--- a/README_en.md
+++ b/README_en.md
@@ -34,7 +34,7 @@
-
+* [2024.06.03] You can now run MiniCPM-Llama3-V 2.5 on multiple low-VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across them. For details, check this [link](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md); a rough loading sketch also follows after this list.
* [2024.05.25] MiniCPM-Llama3-V 2.5 now supports streaming outputs and customized system prompts. Try it [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage), or see the streaming example after this list!
* [2024.05.24] We release the MiniCPM-Llama3-V 2.5 [gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](#inference-with-llamacpp) inference and delivers smooth decoding at 6~8 tokens/s on mobile phones. Try it now!
* [2024.05.20] We open-source MiniCPM-Llama3-V 2.5. It has improved OCR capability, supports 30+ languages, and is the first end-side MLLM to achieve GPT-4V-level performance! We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](./finetune/readme.md). Try it now!
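The full multi-GPU recipe lives in the linked doc; the sketch below only illustrates the idea, using Hugging Face Accelerate's automatic device map. The `max_memory` caps are assumptions for two 12 GB cards, not values taken from that doc.

```python
# Illustrative sketch (not the doc's exact recipe): let Accelerate spread the
# model's layers across two low-VRAM GPUs via device_map="auto".
import torch
from transformers import AutoModel, AutoTokenizer

model_path = "openbmb/MiniCPM-Llama3-V-2_5"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",                    # distribute layers across visible GPUs
    max_memory={0: "11GiB", 1: "11GiB"},  # assumed headroom on each 12 GB card
)
model.eval()
```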
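For the streaming entry above, here is a minimal sketch of streaming chat with a customized system prompt, reusing the `model` and `tokenizer` loaded in the previous snippet and assuming the `chat()` interface shown on the model card (`stream=True` returns a generator of text chunks):

```python
# Minimal streaming sketch, assuming the chat() interface from the model card.
from PIL import Image

image = Image.open("example.jpg").convert("RGB")  # any local test image
msgs = [{"role": "user", "content": "Describe this image."}]

res = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    stream=True,                                   # yield text chunk by chunk
    system_prompt="You are a helpful assistant.",  # customized system prompt
)
for chunk in res:
    print(chunk, end="", flush=True)
```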
@@ -684,9 +684,25 @@ This project is developed by the following institutions:
## 🌟 Star History
-
-
-
+
+
+
+
+
+
## Citation
diff --git a/README_zh.md b/README_zh.md
index 89f49c8..a0b9a8e 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -716,9 +716,25 @@ python examples/minicpmv_example.py
## 🌟 Star History
-