update readme

yiranyyu
2024-05-23 14:09:12 +08:00
parent ca4f01b024
commit 24b6aa1620
2 changed files with 3 additions and 3 deletions


@@ -6,9 +6,9 @@ Comparison results of Phi-3-vision-128K-Instruct and MiniCPM-Llama3-V 2.5, regar
## Hardware Requirements (硬件需求)
- With int4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference of 6-8 tokens/s on edge devices with only 8GB of GPU memory.
+ With int4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference with only 8GB of GPU memory.
- 通过 int4 量化，MiniCPM-Llama3-V 2.5 仅需 8GB 显存即可提供端侧 6-8 tokens/s 的流畅推理。
+ 通过 int4 量化，MiniCPM-Llama3-V 2.5 仅需 8GB 显存即可推理。
| Model (模型) | GPU Memory (显存) |
|:----------------------|:-------------------:|
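
For reference, the int4 checkpoint can be loaded like the full model through Hugging Face Transformers. The sketch below is not part of this commit; the repo id `openbmb/MiniCPM-Llama3-V-2_5-int4`, the example image path, and the exact `chat()` arguments are assumptions about the model's `trust_remote_code` interface.

```python
# Minimal sketch: running the int4-quantized MiniCPM-Llama3-V 2.5 on a ~8GB GPU.
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-Llama3-V-2_5-int4"  # assumed int4 checkpoint id
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model.eval()

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
msgs = [{"role": "user", "content": "Describe this image."}]

# chat() is the model's custom generation entry point exposed via trust_remote_code
answer = model.chat(image=image, msgs=msgs, tokenizer=tokenizer, sampling=True)
print(answer)
```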