From ffa1e24a6cb06a56e18038481892c7cb4b8d4999 Mon Sep 17 00:00:00 2001
From: Alphi <52458637+HwwwwwwwH@users.noreply.github.com>
Date: Fri, 19 Jul 2024 15:08:05 +0800
Subject: [PATCH] Update README.md
---
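Note for reviewers: once the installation steps in the diff below are followed, inference could look like the following sketch. This is a minimal sketch, not part of the patch; it assumes the fork keeps vLLM's standard `LLM`/`SamplingParams` interface and multi-modal `generate` input, and it assumes the Hugging Face model ID `openbmb/MiniCPM-Llama3-V-2_5` and a generic `<image>` prompt template, any of which may differ in the fork.
```python
# Minimal sketch; the model ID, prompt template, and multi-modal input
# format are assumptions -- the forked vLLM may differ.
from PIL import Image
from vllm import LLM, SamplingParams

# trust_remote_code is needed because MiniCPM-V ships custom model code.
llm = LLM(model="openbmb/MiniCPM-Llama3-V-2_5", trust_remote_code=True)

image = Image.open("example.jpg").convert("RGB")
params = SamplingParams(temperature=0.7, max_tokens=256)

# Multi-modal input passed in vLLM's generic dict form.
outputs = llm.generate(
    {"prompt": "USER: <image>\nWhat is in this image?\nASSISTANT:",
     "multi_modal_data": {"image": image}},
    params,
)
print(outputs[0].outputs[0].text)
```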
README.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/README.md b/README.md
index e46b405..110dce4 100644
--- a/README.md
+++ b/README.md
@@ -614,7 +614,7 @@ MiniCPM-Llama3-V 2.5 can run with llama.cpp now! See our fork of [llama.cpp](htt
### Inference with vLLM
-Click to see how to inference MiniCPM-V 2.0 with vLLM (MiniCPM-Llama3-V 2.5 coming soon)
+Click to see how to run inference on MiniCPM-V 2.0 and MiniCPM-Llama3-V 2.5 with vLLM
Because our pull request to vLLM is still under review, we forked this repository to build and test our vLLM demo. Here are the steps:
1. Clone our version of vLLM:
@@ -624,6 +624,7 @@ git clone https://github.com/OpenBMB/vllm.git
2. Install vLLM:
```shell
cd vllm
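+# check out the branch that adds MiniCPM-Llama3-V 2.5 support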
+git checkout minicpmv
pip install -e .
```
3. Install timm: