From cf6ca0cccfb7996fcb2327f3aa1ecc4cee497405 Mon Sep 17 00:00:00 2001
From: yiranyyu <2606375857@qq.com>
Date: Thu, 23 May 2024 16:35:22 +0800
Subject: [PATCH] update readme

---
 docs/compare_with_phi-3_vision.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/compare_with_phi-3_vision.md b/docs/compare_with_phi-3_vision.md
index 1a025e4..7c2ac77 100644
--- a/docs/compare_with_phi-3_vision.md
+++ b/docs/compare_with_phi-3_vision.md
@@ -8,7 +8,7 @@ Comparison results of Phi-3-vision-128K-Instruct and MiniCPM-Llama3-V 2.5, regar
 
 With in4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference with only 8GB of GPU memory.
 
-通过 in4 量化，MiniCPM-Llama3-V 2.5 仅需 8GB 显存即可推理。
+通过 int4 量化，MiniCPM-Llama3-V 2.5 仅需 8GB 显存即可推理。
 
 | Model(模型) | GPU Memory(显存) |
 |:----------------------|:-------------------:|