Mirror of https://github.com/OpenBMB/MiniCPM-V.git (synced 2026-02-04 17:59:18 +08:00)
update readme
@@ -84,7 +84,7 @@
 - 🌏 **Multilingual Support.**
 
   Thanks to the strong multilingual capabilities of Llama 3 and the cross-lingual generalization technique from [VisCPM](https://github.com/OpenBMB/VisCPM), MiniCPM-Llama3-V 2.5 extends its bilingual (Chinese-English) multimodal capabilities to **over 30 languages including German, French, Spanish, Italian, Korean etc.** [All Supported Languages](./assets/minicpm-llama-v-2-5_languages.md).
 
-- 👍 **Easy Usage.**
+- 💫 **Easy Usage.**
 
   In response to user demand, we have added the following convenient features: **[ollama](https://github.com/OpenBMB/ollama/tree/minicpm-v2.5/examples/minicpm-v2.5) support** for easy deployment and inference on local machines, 16 **gguf format** quantized [models](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf) for **[llama.cpp](https://github.com/OpenBMB/llama.cpp/blob/minicpm-v2.5/examples/minicpmv/README.md) inference**, **efficient [LoRA fine-tuning](https://github.com/OpenBMB/MiniCPM-V/tree/main/finetune#lora-finetuning)** with just 2 V100 GPUs, and [streaming output](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage) with a simple parameter addition (stream=True). Additionally, we offer interactive demos via [Gradio](https://github.com/OpenBMB/MiniCPM-V/blob/main/web_demo_2.5.py) and [Streamlit](https://github.com/OpenBMB/MiniCPM-V/blob/main/web_demo_streamlit-2_5.py), enabling quick local WebUI setup, and an online demo on [HuggingFace Spaces](https://huggingface.co/spaces/openbmb/MiniCPM-Llama3-V-2_5).
 
 - 🚀 **Efficient Deployment.**
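The Easy Usage paragraph above mentions streaming output enabled by a single stream=True argument. Below is a minimal sketch of that pattern, following the usage section linked on the HuggingFace model card; treat the exact `chat()` keyword arguments (`sampling`, `temperature`, `stream`) and the placeholder image path as illustrative rather than authoritative, since they may change between model revisions.

```python
# Minimal streaming-output sketch for MiniCPM-Llama3-V 2.5,
# adapted from the usage pattern on the HuggingFace model card.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    'openbmb/MiniCPM-Llama3-V-2_5',
    trust_remote_code=True,
    torch_dtype=torch.float16,
).to('cuda').eval()
tokenizer = AutoTokenizer.from_pretrained(
    'openbmb/MiniCPM-Llama3-V-2_5', trust_remote_code=True
)

image = Image.open('example.jpg').convert('RGB')  # placeholder path, any local image
msgs = [{'role': 'user', 'content': 'What is in this image?'}]

# With stream=True, chat() yields text chunks as they are generated
# instead of returning the complete answer at once.
res = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,
    temperature=0.7,
    stream=True,
)

for chunk in res:
    print(chunk, end='', flush=True)
```

Omitting stream=True should make the same `chat()` call return the full answer as a single string rather than a generator of chunks.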