commit 7369d5e003
Author: yiranyyu
Date:   2024-05-24 11:58:41 +08:00
Parent: c0c5581f22


@@ -588,7 +588,7 @@ PYTORCH_ENABLE_MPS_FALLBACK=1 python web_demo_2.5.py --device mps
</details>
### Inference with llama.cpp<a id="inference-with-llamacpp"></a>
-MiniCPM-Llama3-V 2.5 can run with llama.cpp now! See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more detail. This implementation supports smooth inference of 6~8 token/s on mobile phone<sup>1</sup>.
+MiniCPM-Llama3-V 2.5 can run with llama.cpp now! See our fork of [llama.cpp](https://github.com/OpenBMB/llama.cpp/tree/minicpm-v2.5/examples/minicpmv) for more detail. This implementation supports smooth inference of 6~8 token/s on mobile phones<sup>1</sup>.
<small>
1. Test environment: Xiaomi 14 Pro + Snapdragon 8 Gen 3
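
For reference, an invocation of the fork's minicpmv example might look like the sketch below. The binary name (`minicpmv-cli`), the GGUF filenames, and the specific flag values are assumptions based on llama.cpp's usual CLI conventions, not taken from this commit; consult the fork's README for the authoritative steps.

```shell
# Minimal sketch, assuming the fork builds a `minicpmv-cli` binary and the
# model has already been converted/quantized to GGUF per the fork's docs.
# File names below are illustrative placeholders.
./minicpmv-cli \
    -m ./MiniCPM-Llama3-V-2_5/ggml-model-Q4_K_M.gguf \
    --mmproj ./MiniCPM-Llama3-V-2_5/mmproj-model-f16.gguf \
    -c 4096 --temp 0.7 \
    --image ./demo.jpg \
    -p "What is in the image?"
```

A low-bit quantization such as the Q4_K_M file shown above is typically what makes throughput in the reported mobile range feasible; full-precision weights would not fit comfortably in a phone's memory.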