update readme

yiranyyu
2024-06-04 11:03:53 +08:00
parent e95b488cfe
commit dfbc3211ef
3 changed files with 58 additions and 10 deletions

@@ -34,7 +34,7 @@
<br>
* [2024.06.03] You can now run MiniCPM-Llama3-V 2.5 on multiple low-VRAM GPUs (12 GB or 16 GB) by distributing the model's layers across them. For more details, check this [link](https://github.com/OpenBMB/MiniCPM-V/blob/main/docs/inference_on_multiple_gpus.md); a brief sketch follows this list.
* [2024.05.25] MiniCPM-Llama3-V 2.5 now supports streaming outputs and customized system prompts. Try it [here](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5#usage)!
* [2024.05.24] We release the MiniCPM-Llama3-V 2.5 [gguf](https://huggingface.co/openbmb/MiniCPM-Llama3-V-2_5-gguf), which supports [llama.cpp](#inference-with-llamacpp) inference and delivers smooth decoding at 6–8 tokens/s on mobile phones. Try it now!
* [2024.05.20] We open-source MiniCPM-Llama3-V 2.5! With improved OCR capability and support for 30+ languages, it is the first end-side MLLM to achieve GPT-4V-level performance. We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](./finetune/readme.md). Try it now!
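
For orientation, here is a minimal sketch (not the official recipe from the linked docs) combining two of the items above: loading the model across two low-VRAM GPUs via Hugging Face Accelerate's `device_map`, then streaming a reply with a custom system prompt. The `max_memory` caps, image path, and prompt text are illustrative assumptions; see the multi-GPU doc and the Hugging Face model card for the supported paths.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model_path = "openbmb/MiniCPM-Llama3-V-2_5"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# device_map="auto" lets Accelerate spread layers over the visible GPUs;
# max_memory (an assumed budget here) keeps each 12/16 GB card under its VRAM limit.
model = AutoModel.from_pretrained(
    model_path,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    device_map="auto",
    max_memory={0: "10GiB", 1: "10GiB"},
).eval()

image = Image.open("example.png").convert("RGB")  # hypothetical input image
msgs = [{"role": "user", "content": "Describe this image."}]

# With stream=True, chat() yields text chunks instead of one final string.
res = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,  # the model card's streaming usage pairs stream=True with sampling=True
    stream=True,
    system_prompt="You are a helpful assistant.",  # assumed prompt text
)
for chunk in res:
    print(chunk, end="", flush=True)
```
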
@@ -684,9 +684,25 @@ This project is developed by the following institutions:
## 🌟 Star History
<div>
  <img src="./assets/Star-History.png" width="500em" />
</div>
<picture>
  <source
    media="(prefers-color-scheme: dark)"
    srcset="https://api.star-history.com/svg?repos=OpenBMB/MiniCPM-V&type=Date&theme=dark"
  />
  <source
    media="(prefers-color-scheme: light)"
    srcset="https://api.star-history.com/svg?repos=OpenBMB/MiniCPM-V&type=Date"
  />
  <img
    alt="Star History Chart"
    src="https://api.star-history.com/svg?repos=OpenBMB/MiniCPM-V&type=Date"
  />
</picture>

## Citation