Mirror of https://github.com/OpenBMB/MiniCPM-V.git, synced 2026-02-05 02:09:20 +08:00
update readme
README.md · 22 lines changed
@@ -46,7 +46,6 @@
 - [Online Demo](#online-demo)
 - [Install](#install)
 - [Inference](#inference)
-- [Hardware Requirements](#hardware-requirements)
 - [Model Zoo](#model-zoo)
 - [Multi-turn Conversation](#multi-turn-conversation)
 - [Inference on Mac](#inference-on-mac)
@@ -473,22 +472,15 @@ pip install -r requirements.txt
 
 ## Inference
 
-### Hardware Requirements
-
-| Model | GPU Memory |
-|:----------------------|:-------------------:|
-| MiniCPM-Llama3-V 2.5 | 19 GB |
-| MiniCPM-Llama3-V 2.5 (int4) | 8 GB |
-| MiniCPM-V 2.0 | 8 GB |
 
 
 ### Model Zoo
 
-| Model | Description | Download Link |
-|:----------------------|:-------------------|:---------------:|
-| MiniCPM-Llama3-V 2.5 | The latest version, achieving state-of-the-art end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
-| MiniCPM-Llama3-V 2.5 int4 | int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
-| MiniCPM-V 2.0 | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
-| MiniCPM-V 1.0 | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
+| Model | GPU Memory | Description | Download Link |
+|:-----------|:-----------:|:-------------------|:---------------:|
+| MiniCPM-Llama3-V 2.5 | 19 GB | The latest version, achieving state-of-the-art end-side multimodal performance. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 int4 | 8 GB | int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
+| MiniCPM-V 2.0 | 8 GB | Light version, balancing performance and computation cost. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
+| MiniCPM-V 1.0 | - | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
 
 ### Multi-turn Conversation
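The GPU Memory column this commit adds maps directly onto a deployment decision: pick the largest checkpoint that fits the available VRAM. A minimal sketch of that selection logic, assuming the figures from the table above (the helper function and its name are illustrative, not part of the MiniCPM-V repository):

```python
# GPU memory requirements (GB) from the Model Zoo table, ordered
# largest-first so the first fitting entry is the most capable model.
MODEL_ZOO = [
    ("openbmb/MiniCPM-Llama3-V-2_5", 19),
    ("openbmb/MiniCPM-Llama3-V-2_5-int4", 8),
    ("openbmb/MiniCPM-V-2", 8),
]

def pick_model(gpu_memory_gb: float) -> str:
    """Return the repo id of the largest model that fits in gpu_memory_gb."""
    for repo_id, required_gb in MODEL_ZOO:
        if required_gb <= gpu_memory_gb:
            return repo_id
    raise ValueError(f"no listed model fits in {gpu_memory_gb} GB")

print(pick_model(24))  # full-precision MiniCPM-Llama3-V 2.5
print(pick_model(10))  # falls back to the int4 variant
```

The returned repo id can then be passed to the loading code shown later in the README.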
README_zh.md · 12 lines changed
@@ -487,12 +487,12 @@ pip install -r requirements.txt
 
 ### Model Zoo
 
-| Model | Description | Download Link |
-|:----------------------|:-------------------|:---------------:|
-| MiniCPM-Llama3-V 2.5 | The latest version, offering the best end-side multimodal understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
-| MiniCPM-Llama3-V 2.5 int4 | int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
-| MiniCPM-V 2.0 | Light version, balancing computation cost and multimodal understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
-| MiniCPM-V 1.0 | Lightest version, with the fastest inference speed. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
+| Model | GPU Memory | Description | Download Link |
+|:--------------|:--------:|:-------------------|:---------------:|
+| MiniCPM-Llama3-V 2.5 | 19 GB | The latest version, offering the best end-side multimodal understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5) |
+| MiniCPM-Llama3-V 2.5 int4 | 8 GB | int4 quantized version, lower GPU memory usage. | [🤗](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-Llama3-V-2_5-int4) |
+| MiniCPM-V 2.0 | 8 GB | Light version, balancing computation cost and multimodal understanding. | [🤗](https://huggingface.co/openbmb/MiniCPM-V-2) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V-2) |
+| MiniCPM-V 1.0 | - | Lightest version, with the fastest inference speed. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
 
 More [legacy models](#legacy-models)