diff --git a/README.md b/README.md
index aecd80a..d6c5499 100644
--- a/README.md
+++ b/README.md
@@ -50,7 +50,6 @@
- [MiniCPM-Llama3-V 2.5](#minicpm-llama3-v-25)
- - [Evaluation](#evaluation)
- [MiniCPM-V 2.0](#minicpm-v-20)
- [Online Demo](#online-demo)
- [Install](#install)
@@ -86,7 +85,7 @@
- 🚀 **Efficient Deployment.** MiniCPM-Llama3-V 2.5 systematically employs **model quantization, CPU optimizations, NPU optimizations and compilation optimizations**, achieving high-efficiency deployment on edge devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time. After systematic optimization, MiniCPM-Llama3-V 2.5 has realized a **150x acceleration in end-side MLLM image encoding** and a **3x speedup in language decoding**.
-### Evaluation
+### Evaluation
diff --git a/README_en.md b/README_en.md
index aecd80a..be4255a 100644
--- a/README_en.md
+++ b/README_en.md
@@ -50,7 +50,6 @@
- [MiniCPM-Llama3-V 2.5](#minicpm-llama3-v-25)
- - [Evaluation](#evaluation)
- [MiniCPM-V 2.0](#minicpm-v-20)
- [Online Demo](#online-demo)
- [Install](#install)
@@ -86,7 +85,7 @@
- 🚀 **Efficient Deployment.** MiniCPM-Llama3-V 2.5 systematically employs **model quantization, CPU optimizations, NPU optimizations and compilation optimizations**, achieving high-efficiency deployment on edge devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time. After systematic optimization, MiniCPM-Llama3-V 2.5 has realized a **150x acceleration in end-side MLLM image encoding** and a **3x speedup in language decoding**.
-### Evaluation
+### Evaluation
diff --git a/README_zh.md b/README_zh.md
index bcc1a51..99f225e 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -52,8 +52,6 @@
## 目录
- [MiniCPM-Llama3-V 2.5](#minicpm-llama3-v-25)
- - [性能评估](#性能评估)
- - [典型示例](#典型示例)
- [MiniCPM-V 2.0](#minicpm-v-20)
- [Online Demo](#online-demo)
- [安装](#安装)
@@ -92,7 +90,7 @@
-### 性能评估
+### 性能评估
@@ -381,7 +379,7 @@