From 980a1e37e2c29cda72ebd354fcf2dadd69fbd352 Mon Sep 17 00:00:00 2001
From: yiranyyu <2606375857@qq.com>
Date: Thu, 23 May 2024 19:18:39 +0800
Subject: [PATCH] update readme
---
README.md | 22 +++++++---------------
README_zh.md | 12 ++++++------
2 files changed, 13 insertions(+), 21 deletions(-)
diff --git a/README.md b/README.md
index 828f429..168dda1 100644
--- a/README.md
+++ b/README.md
@@ -46,7 +46,6 @@
- [Online Demo](#online-demo)
- [Install](#install)
- [Inference](#inference)
- - [Hardware Requirements](#hardware-requirements)
- [Model Zoo](#model-zoo)
- [Multi-turn Conversation](#multi-turn-conversation)
- [Inference on Mac](#inference-on-mac)
@@ -473,22 +472,15 @@ pip install -r requirements.txt
## Inference
-### Hardware Requirements
-
-| Model | GPU Memory |
-|:----------------------|:-------------------:|
-| MiniCPM-Llama3-V 2.5 | 19 GB |
-| MiniCPM-Llama3-V 2.5 (int4) | 8 GB |
-| MiniCPM-Llama3-V 2.0 | 8 GB |
-
### Model Zoo
-| Model | Description | Download Link |
-|:----------------------|:-------------------|:---------------:|
-| MiniCPM-Llama3-V 2.5 | The lastest version, achieving state-of-the end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
-| MiniCPM-Llama3-V 2.5 int4 | int4 quantized version,lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
-| MiniCPM-V 2.0 | Light version, balance the performance the computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
-| MiniCPM-V 1.0 | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
+
+| Model | GPU Memory | Description | Download Link |
+|:-----------|:-----------:|:-------------------|:---------------:|
+| MiniCPM-Llama3-V 2.5 | 19 GB | The latest version, achieving state-of-the-art end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 int4 | 8 GB | int4 quantized version, with lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
+| MiniCPM-V 2.0 | 8 GB | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
+| MiniCPM-V 1.0 | - | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
### Multi-turn Conversation
diff --git a/README_zh.md b/README_zh.md
index 898dc10..cb22cff 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -487,12 +487,12 @@ pip install -r requirements.txt
### 模型库
-| 模型 | 简介 | 下载链接 |
-|:----------------------|:-------------------|:---------------:|
-| MiniCPM-Llama3-V 2.5 | 最新版本,提供最佳的端侧多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
-| MiniCPM-Llama3-V 2.5 int4 | int4量化版,更低显存占用。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
-| MiniCPM-V 2.0 | 轻量级版本,平衡计算开销和多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
-| MiniCPM-V 1.0 | 最轻量版本, 提供最快的推理速度。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
+| 模型 | 显存占用 | 简介 | 下载链接 |
+|:--------------|:--------:|:-------------------|:---------------:|
+| MiniCPM-Llama3-V 2.5 | 19 GB | 最新版本,提供最佳的端侧多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 int4 | 8 GB | int4量化版,更低显存占用。 | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
+| MiniCPM-V 2.0 | 8 GB | 轻量级版本,平衡计算开销和多模态理解能力。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
+| MiniCPM-V 1.0 | - | 最轻量版本,提供最快的推理速度。 | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [ModelScope](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
更多[历史版本模型](#legacy-models)