Mirror of https://github.com/OpenBMB/MiniCPM-V.git, synced 2026-02-05 18:29:18 +08:00
update readme
@@ -8,7 +8,7 @@ Comparison results of Phi-3-vision-128K-Instruct and MiniCPM-Llama3-V 2.5, regar
 
 With in4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference with only 8GB of GPU memory.
 
-通过 in4 量化,MiniCPM-Llama3-V 2.5 仅需 8GB 显存即可推理。
+通过 int4 量化,MiniCPM-Llama3-V 2.5 仅需 8GB 显存即可推理。
 
 | Model(模型) | GPU Memory(显存) |
 |:----------------------|:-------------------:|
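The claim being corrected here is that int4 quantization lets MiniCPM-Llama3-V 2.5 run in only 8GB of GPU memory. A rough back-of-envelope sketch of why 4-bit weights make that plausible, assuming roughly 8B weight parameters (an illustrative figure, since the model builds on Llama3-8B plus a vision encoder) and counting weight storage only, not activations or KV cache:

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory needed for model weights alone, in GiB."""
    return n_params * bits_per_param / 8 / 1024**3

# fp16 weights: ~14.9 GiB -- does not fit on an 8GB GPU
fp16_gb = weight_memory_gb(8e9, 16)

# int4 weights: ~3.7 GiB -- leaves headroom for activations and KV cache
int4_gb = weight_memory_gb(8e9, 4)

print(f"fp16: {fp16_gb:.1f} GiB, int4: {int4_gb:.1f} GiB")
```

This is only an estimate; real quantized checkpoints carry extra per-group scale factors and keep some layers in higher precision, so actual usage sits somewhat above the raw weight figure.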