From f39a036940fc42a5c5695c8c5af7a68ae0459d50 Mon Sep 17 00:00:00 2001
From: yiranyyu <2606375857@qq.com>
Date: Thu, 23 May 2024 12:26:59 +0800
Subject: [PATCH] update readme

---
 docs/compare_with_phi-3_vision.md | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/compare_with_phi-3_vision.md b/docs/compare_with_phi-3_vision.md
index af7ad50..46f0628 100644
--- a/docs/compare_with_phi-3_vision.md
+++ b/docs/compare_with_phi-3_vision.md
@@ -6,9 +6,9 @@ Comparison results of Phi-3-vision-128K-Instruct and MiniCPM-Llama3-V 2.5, regar
 
 ## Hardeware Requirements (硬件需求)
 
-With in4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference of 6-8 tokens/s with only 8GB of GPU memory.
+With in4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference of 6-8 tokens/s on edge devices with only 8GB of GPU memory.
 
-通过 in4 量化,MiniCPM-Llama3-V 2.5 仅需 8GB 显存即可提供 6-8 tokens/s 的流畅推理。
+通过 in4 量化,MiniCPM-Llama3-V 2.5 仅需 8GB 显存即可提供端侧 6-8 tokens/s 的流畅推理。
 
 | Model(模型) | GPU Memory(显存) |
 |:----------------------|:-------------------:|
@@ -23,7 +23,7 @@ With in4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference of 6-8 tok
 | | Phi-3-vision-128K-Instruct | MiniCPM-Llama3-V 2.5|
 |:-|:----------:|:-------------------:|
 | Size(参数) | **4B** | 8B|
-| OpenCompass | 53.7 | **58.8** |
+| OpenCompass 2024/05 | 53.7 | **58.8** |
 | OCRBench | 639.0 | **725.0**|
 | RealworldQA | 58.8 | **63.5**|
 | TextVQA | 72.2 | **76.6** |