From 50a68c9803d9a948334efe440659ed216d38ff3b Mon Sep 17 00:00:00 2001
From: yiranyyu <2606375857@qq.com>
Date: Thu, 23 May 2024 18:32:35 +0800
Subject: [PATCH] update readme

---
 docs/compare_with_phi-3_vision.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/compare_with_phi-3_vision.md b/docs/compare_with_phi-3_vision.md
index 5611e33..4001661 100644
--- a/docs/compare_with_phi-3_vision.md
+++ b/docs/compare_with_phi-3_vision.md
@@ -18,9 +18,9 @@ With int4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference with only
 
 ## Model Size and Peformance (模型参数和性能)
 
-In most benchmarks, MiniCPM-Llama3-V 2.5 achieves **better performance** compared with Phi-3-vision-128K-Instruct.
+In most benchmarks, MiniCPM-Llama3-V 2.5 achieves **better performance** compared with Phi-3-vision-128K-Instruct. Moreover, MiniCPM-Llama3-V 2.5 also exhibits **lower latency and better throughput even without quantization**.
 
-在大多数评测集上, MiniCPM-Llama3-V 2.5 相比于 Phi-3-vision-128K-Instruct 都展现出了**更优的性能表现**.
+在大多数评测集上, MiniCPM-Llama3-V 2.5 相比于 Phi-3-vision-128K-Instruct 都展现出了**更优的性能表现**。 即使未经量化，MiniCPM-Llama3-V 2.5 的**推理延迟和吞吐率也都更具优势**。
 
 | | Phi-3-vision-128K-Instruct | MiniCPM-Llama3-V 2.5|
 |:-|:----------:|:-------------------:|