diff --git a/README_en.md b/README_en.md
index c0ff84e..3c6d7a6 100644
--- a/README_en.md
+++ b/README_en.md
@@ -51,6 +51,7 @@
- [Inference on Mac](#inference-on-mac)
- [Deployment on Mobile Phone](#deployment-on-mobile-phone)
- [WebUI Demo](#webui-demo)
+ - [Inference with llama.cpp](#inference-with-llamacpp)
- [Inference with vLLM](#inference-with-vllm)
- [Fine-tuning](#fine-tuning)
- [TODO](#todo)
@@ -585,6 +586,24 @@ PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
```
+### Inference with llama.cpp
+MiniCPM-Llama3-V 2.5 can now run with llama.cpp! See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more details.
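+
+As a minimal sketch of a typical build-and-run flow (the `minicpmv-cli` target, flags, and filenames below are assumptions based on llama.cpp's llava-style examples; follow the fork's README for the authoritative build and conversion steps):
+
+```bash
+# Clone the MiniCPM-V branch of the fork and build the example runner
+git clone -b minicpm-v2.5 https://github.com/OpenBMB/llama.cpp.git
+cd llama.cpp
+make minicpmv-cli
+
+# Single-image inference with a converted GGUF checkpoint and its vision
+# projector; model and image filenames here are placeholders
+./minicpmv-cli -m ./MiniCPM-Llama3-V-2_5-Q4_K_M.gguf \
+    --mmproj ./mmproj-model-f16.gguf \
+    --image ./example.jpg -p "What is in this image?"
+```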
+
### Inference with vLLM