update readme

yiranyyu
2024-05-23 16:35:22 +08:00
parent 3896a6fea6
commit cf6ca0cccf


@@ -8,7 +8,7 @@ Comparison results of Phi-3-vision-128K-Instruct and MiniCPM-Llama3-V 2.5, regar
With int4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference with only 8GB of GPU memory.
-With in4 quantization, MiniCPM-Llama3-V 2.5 needs only 8GB of GPU memory for inference.
+With int4 quantization, MiniCPM-Llama3-V 2.5 needs only 8GB of GPU memory for inference.
| Model | GPU Memory |
|:----------------------|:-------------------:|
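
The 8GB figure above refers to running the pre-quantized int4 checkpoint. As a minimal sketch of what that looks like in practice, assuming the Hugging Face repo id `openbmb/MiniCPM-Llama3-V-2_5-int4` and the `chat()` interface exposed by the model's `trust_remote_code` implementation (neither is shown in this diff, so treat both as assumptions):

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

# Assumed repo id for the int4-quantized checkpoint; the quantized weights
# are loaded directly, so no extra quantization step is needed here.
MODEL_ID = "openbmb/MiniCPM-Llama3-V-2_5-int4"

model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model.eval()

# Any RGB image; "example.jpg" is a placeholder path.
image = Image.open("example.jpg").convert("RGB")
msgs = [{"role": "user", "content": "What is in the image?"}]

# chat() is provided by the model's remote code (an assumption here);
# sampling/temperature are optional generation knobs.
with torch.no_grad():
    answer = model.chat(
        image=image,
        msgs=msgs,
        tokenizer=tokenizer,
        sampling=True,
        temperature=0.7,
    )
print(answer)
```

Under these assumptions, the int4 weights keep peak GPU memory within roughly the 8GB budget quoted above, versus the noticeably larger footprint of the fp16 checkpoint listed in the table.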