## Phi-3-vision-128K-Instruct vs MiniCPM-Llama3-V 2.5
Comparison of Phi-3-vision-128K-Instruct and MiniCPM-Llama3-V 2.5 in terms of model size, hardware requirements, and performance on multiple popular benchmarks.
## Hardware Requirements
With int4 quantization, MiniCPM-Llama3-V 2.5 delivers smooth inference with only 8 GB of GPU memory.
| Model | GPU Memory |
|:----------------------|:-------------------:|
| [MiniCPM-Llama3-V 2.5](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5/) | 19 GB |
| Phi-3-vision-128K-Instruct | 12 GB |
| [MiniCPM-Llama3-V 2.5 (int4)](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-int4/) | 8 GB |
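The memory figures above can be sanity-checked with a back-of-envelope estimate: an 8B-parameter model stores weights at 16 bits per parameter in fp16 but only 4 bits in int4, and the remaining headroom goes to activations, the KV cache, and framework overhead. The sketch below is an illustrative approximation, not a measurement:

```python
# Back-of-envelope GPU memory estimate for model weights alone.
# Actual usage is higher: activations, KV cache, and framework
# overhead account for the gap to the figures in the table.
def weight_memory_gb(num_params: float, bits_per_param: int) -> float:
    """Approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return num_params * bits_per_param / 8 / 1e9

params_8b = 8e9  # MiniCPM-Llama3-V 2.5 has ~8B parameters

fp16_gb = weight_memory_gb(params_8b, 16)  # ~16 GB of weights alone
int4_gb = weight_memory_gb(params_8b, 4)   # ~4 GB of weights alone

print(f"fp16 weights: {fp16_gb:.0f} GB, int4 weights: {int4_gb:.0f} GB")
```

This is why int4 quantization brings the model within reach of consumer GPUs: the weights shrink roughly 4x, leaving room for inference overhead inside an 8 GB budget.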
## Model Size and Performance
| | Phi-3-vision-128K-Instruct | MiniCPM-Llama3-V 2.5|
|:-|:----------:|:-------------------:|
| Size | **4B** | 8B|
| OpenCompass 2024/05 | 53.7 | **58.8** |
| OCRBench | 639.0 | **725.0**|
| RealworldQA | 58.8 | **63.5**|
| TextVQA | 72.2 | **76.6** |
| ScienceQA| **90.8** | 89.0 |
| POPE | 83.4 | **87.2** |