From 4e76fdb3bceb4c9f91af9ad24530bb67c4187145 Mon Sep 17 00:00:00 2001
From: yiranyyu <2606375857@qq.com>
Date: Fri, 24 May 2024 14:01:48 +0800
Subject: [PATCH] update readme
---
README.md | 5 +++--
README_zh.md | 1 +
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index cb02168..8c9ab92 100644
--- a/README.md
+++ b/README.md
@@ -477,10 +477,11 @@ pip install -r requirements.txt
### Model Zoo
-| Model | GPU Memory | Description | Download Link |
+| Model | Memory | Description | Download |
|:-----------|:-----------:|:-------------------|:---------------:|
| MiniCPM-Llama3-V 2.5 | 19 GB | The latest version, achieving state-of-the-art end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
-| MiniCPM-Llama3-V 2.5 int4 | 8 GB | int4 quantized version,lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
+| MiniCPM-Llama3-V 2.5 gguf | 5 GB | The gguf version, with lower GPU memory usage and faster inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) |
+| MiniCPM-Llama3-V 2.5 int4 | 8 GB | The int4 quantized version, with lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
| MiniCPM-V 2.0 | 8 GB | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
| MiniCPM-V 1.0 | 7 GB | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
diff --git a/README_zh.md b/README_zh.md
index fbca106..2853caf 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -491,6 +491,7 @@ pip install -r requirements.txt
| 模型 | 显存占用 | 简介 | 下载链接 |
|:--------------|:--------:|:-------------------|:---------------:|
| MiniCPM-Llama3-V 2.5 | 19 GB | 最新版本,提供最佳的端侧多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 gguf | 5 GB | gguf 版本,更低的内存占用和更高的推理效率。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) |
| MiniCPM-Llama3-V 2.5 int4 | 8 GB | int4量化版,更低显存占用。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
| MiniCPM-V 2.0 | 8 GB | 轻量级版本,平衡计算开销和多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
| MiniCPM-V 1.0 | 7 GB | 最轻量版本,提供最快的推理速度。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
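
The Model Zoo table above pairs each variant with an approximate memory requirement, which is the main axis for choosing between them. As an illustration only, a small hypothetical helper (not part of the MiniCPM repository) could encode the table and pick the largest variant that fits a given memory budget; the `MODEL_ZOO` dict and `pick_variant` function below are assumptions for this sketch, not repository APIs.

```python
from typing import Optional

# Data transcribed from the Model Zoo table above:
# model name -> (approx. memory in GB, Hugging Face repo id).
MODEL_ZOO = {
    "MiniCPM-Llama3-V 2.5":      (19, "openbmb/MiniCPM-Llama3-V-2_5"),
    "MiniCPM-Llama3-V 2.5 int4": (8,  "openbmb/MiniCPM-Llama3-V-2_5-int4"),
    "MiniCPM-Llama3-V 2.5 gguf": (5,  "openbmb/MiniCPM-Llama3-V-2_5-gguf"),
}

def pick_variant(budget_gb: float) -> Optional[str]:
    """Return the repo id of the largest variant that fits the memory budget,
    or None if even the smallest variant does not fit."""
    fitting = [(mem, repo) for mem, repo in MODEL_ZOO.values() if mem <= budget_gb]
    # Tuples compare by memory first, so max() yields the largest fitting variant.
    return max(fitting)[1] if fitting else None
```

For example, a 24 GB budget selects the full-precision model, while an 8 GB budget falls back to the int4 quantized variant.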