mirror of
https://github.com/OpenBMB/MiniCPM-V.git
synced 2026-02-04 17:59:18 +08:00
update readme
@@ -25,7 +25,7 @@
## News <!-- omit in toc -->
* [2024.05.23] 🔍🔍🔍 We've released a comprehensive comparison between Phi-3-vision-128k-instruct and MiniCPM-Llama3-V 2.5, including benchmark evaluations and multilingual capabilities 🌟📊🌍. Click [here](./docs/compare_with_phi-3_vision.md) to view more details.
* [2024.05.23] 🔍 We've released a comprehensive comparison between Phi-3-vision-128k-instruct and MiniCPM-Llama3-V 2.5, including benchmark evaluations and multilingual capabilities 🌟📊🌍. Click [here](./docs/compare_with_phi-3_vision.md) to view more details.
* [2024.05.20] We open-source MiniCPM-Llama3-V 2.5, which has improved OCR capability and supports 30+ languages, making it the first end-side MLLM to achieve GPT-4V-level performance! We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](./finetune/readme.md). Try it now!
* [2024.04.23] MiniCPM-V-2.0 supports vLLM now! Click [here](#vllm) to view more details.
* [2024.04.18] We created a HuggingFace Space to host the demo of MiniCPM-V 2.0 [here](https://huggingface.co/spaces/openbmb/MiniCPM-V-2)!
@@ -491,7 +491,8 @@ pip install -r requirements.txt
| MiniCPM-V 1.0 | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
### Multi-turn Conversation
Please refer to the following code to run `MiniCPM-V` and `OmniLMM`.
Please refer to the following code to run.
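The multi-turn interface keeps the conversation as a running list of role/content messages. As a minimal sketch (the `{'role': ..., 'content': ...}` message schema and the commented `model.chat(...)` call are assumptions modeled on common HuggingFace chat APIs, not confirmed by this diff), the history can be maintained like this:

```python
# Minimal sketch of maintaining a multi-turn conversation history.
# The message schema below is an assumption; consult the repository's
# README for the authoritative inference code.

def append_turn(msgs, role, content):
    """Append one turn to the running conversation history in place."""
    msgs.append({'role': role, 'content': content})
    return msgs

msgs = []
append_turn(msgs, 'user', 'What is in the image?')
# In real use the model would be queried here, e.g. (hypothetical call):
#   answer = model.chat(image=image, msgs=msgs, tokenizer=tokenizer)
append_turn(msgs, 'assistant', 'An airplane on a runway.')
append_turn(msgs, 'user', 'What color is it?')
```

Each assistant reply is appended back into `msgs` before the next user turn, so the model sees the full dialogue context on every call.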
<div align="center">
<img src="assets/airplane.jpeg" width="500px">
@@ -28,7 +28,7 @@
## News <!-- omit in toc -->
* [2024.05.23] 🔍🔍🔍 We've added a comprehensive comparison between Phi-3-vision-128k-instruct and MiniCPM-Llama3-V 2.5, including benchmark evaluations and multilingual capabilities 🌟📊🌍. Click [here](./docs/compare_with_phi-3_vision.md) for details.
* [2024.05.23] 🔍 We've added a comprehensive comparison between Phi-3-vision-128k-instruct and MiniCPM-Llama3-V 2.5, including benchmark evaluations and multilingual capabilities 🌟📊🌍. Click [here](./docs/compare_with_phi-3_vision.md) for details.
<!-- * [2024.05.22] We further improved on-device inference speed, achieving a smooth experience of 6-8 tokens/s. Try it out! -->
* [2024.05.20] We open-source MiniCPM-Llama3-V 2.5, with enhanced OCR capability and support for 30+ languages, achieving GPT-4V-level multimodal capability on end-side devices for the first time! We provide [efficient inference](#手机端部署) and [simple fine-tuning](./finetune/readme.md) support. Try it now!
* [2024.04.23] We added support for [vLLM](#vllm). Try it out!