diff --git a/README_en.md b/README_en.md
index f59a9d1..9fdcb43 100644
--- a/README_en.md
+++ b/README_en.md
@@ -614,24 +614,22 @@ MiniCPM-Llama3-V 2.5 can run with llama.cpp now! See our fork of [llama.cpp](htt
### Inference with vLLM
-Click to see how to inference MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5 with vLLM
-Because our pull request to vLLM is still waiting for reviewing, we fork this repository to build and test our vLLM demo. Here are the steps:
+vLLM now officially supports MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5. Click to see the setup steps.
-1. Clone our version of vLLM:
+1. Clone the official vLLM repository:
```shell
-git clone https://github.com/OpenBMB/vllm.git
+git clone https://github.com/vllm-project/vllm.git
```
2. Install vLLM:
```shell
cd vllm
-git checkout minicpmv
pip install -e .
```
3. Install timm:
```shell
pip install timm==0.9.10
```
-4. Run our demo:
+4. Run the example (see the Python sketch after these steps):
```shell
python examples/minicpmv_example.py
```
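For reference, the sketch below shows roughly what an offline multi-modal inference script such as `examples/minicpmv_example.py` does with vLLM's Python API: build a chat prompt that contains the model's image placeholder, then pass the prompt and a PIL image to `LLM.generate`. This is a minimal illustration under stated assumptions, not the script's actual contents; the model ID `openbmb/MiniCPM-Llama3-V-2_5`, the `(<image>./</image>)` placeholder, the sampling settings, and the exact `generate` input format are assumptions that may differ between vLLM versions.

```python
# Hedged sketch of offline multi-modal inference with vLLM.
# Assumptions: a vLLM build with MiniCPM-V support, the model ID
# "openbmb/MiniCPM-Llama3-V-2_5", and the "(<image>./</image>)" image
# placeholder; consult examples/minicpmv_example.py for the exact format.
from PIL import Image
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

MODEL = "openbmb/MiniCPM-Llama3-V-2_5"  # assumed Hugging Face model ID

# Build the chat prompt with the model tokenizer's own chat template.
tokenizer = AutoTokenizer.from_pretrained(MODEL, trust_remote_code=True)
messages = [{"role": "user",
             "content": "(<image>./</image>)\nWhat is in this image?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False,
                                       add_generation_prompt=True)

# Load the model and run one generation with an image attached.
llm = LLM(model=MODEL, trust_remote_code=True, max_model_len=2048)
image = Image.open("example.jpg").convert("RGB")  # any local test image
outputs = llm.generate(
    {"prompt": prompt, "multi_modal_data": {"image": image}},
    SamplingParams(temperature=0.7, max_tokens=256),
)
print(outputs[0].outputs[0].text)
```

If the API of your installed vLLM version differs, the bundled example script from step 4 remains the authoritative reference.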
diff --git a/README_zh.md b/README_zh.md
index 2d8895c..17466d4 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -644,17 +644,15 @@ MiniCPM-Llama3-V 2.5 can run with llama.cpp now! For usage, see our fork [lla
### Deploying with vLLM
-Click to see how to deploy and run MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5 with vLLM
-Because our pull request to vLLM is still under review, we currently maintain a fork of the vLLM repository for testing.
+vLLM now officially supports MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5. Click to see the setup steps.
-1. First, clone our fork of the vLLM repository:
+1. First, clone the official vLLM repository:
```shell
-git clone https://github.com/OpenBMB/vllm.git
+git clone https://github.com/vllm-project/vllm.git
```
2. Install vLLM:
```shell
cd vllm
-git checkout minicpmv
pip install -e .
```
3. Install timm: