mirror of
https://github.com/OpenBMB/MiniCPM-V.git
synced 2026-02-04 17:59:18 +08:00
update readme
@@ -25,6 +25,7 @@
 
 ## News <!-- omit in toc -->
 
+<!-- * [2024.05.22] We further improved the inference efficiency on edge-side devices, providing a speed of 6-8 tokens/s, try it now! -->
 * [2024.05.20] We open-sourced MiniCPM-Llama3-V 2.5. It has improved OCR capability and supports 30+ languages, making it the first edge-side multimodal LLM to achieve GPT-4V-level performance! We provide [efficient inference](#deployment-on-mobile-phone) and [simple fine-tuning](./finetune/readme.md), try it now!
 * [2024.04.23] MiniCPM-V-2.0 now supports vLLM! Click [here](#vllm) for more details.
 * [2024.04.18] We created a HuggingFace Space to host the demo of MiniCPM-V 2.0 [here](https://huggingface.co/spaces/openbmb/MiniCPM-V-2)!
@@ -462,7 +463,7 @@ pip install -r requirements.txt
 | MiniCPM-V 1.0 | Lightest version, achieving the fastest inference. | [🤗](https://huggingface.co/openbmb/MiniCPM-V) [<img src="./assets/modelscope_logo.png" width="20px"></img>](https://modelscope.cn/models/OpenBMB/MiniCPM-V) |
 
 ### Multi-turn Conversation
-Please refer to the following codes to run `MiniCPM-V` and `OmniLMM`.
+Please refer to the following code to run `MiniCPM-V`.
 
 <div align="center">
 <img src="assets/airplane.jpeg" width="500px">
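The multi-turn conversation flow referenced in the hunk above can be sketched as follows. This is a hypothetical illustration, not the repository's own example: the model id, the custom `model.chat` signature exposed via `trust_remote_code`, and its return values are assumptions that may differ across MiniCPM-V releases, so verify against the model card.

```python
# Sketch of a multi-turn conversation with MiniCPM-V via Hugging Face
# Transformers. The `model.chat` arguments and return tuple below are
# assumptions based on the trust_remote_code interface of early
# MiniCPM-V releases; check the model card for your version.

def append_turn(msgs, role, content):
    # Keep the running conversation as a list of role/content dicts,
    # returning a new list so earlier turns are never mutated.
    return msgs + [{"role": role, "content": content}]


def multi_turn_demo():
    # Heavy imports and the model download stay inside the function
    # so the helpers above remain importable without a GPU.
    import torch
    from PIL import Image
    from transformers import AutoModel, AutoTokenizer

    model_id = "openbmb/MiniCPM-V"  # assumed model id
    model = AutoModel.from_pretrained(
        model_id, trust_remote_code=True, torch_dtype=torch.bfloat16
    ).eval().cuda()
    tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

    image = Image.open("assets/airplane.jpeg").convert("RGB")

    # First turn.
    msgs = append_turn([], "user", "What is in the image?")
    answer, context, _ = model.chat(
        image=image, msgs=msgs, context=None,
        tokenizer=tokenizer, sampling=True, temperature=0.7,
    )

    # Feed the answer back as an assistant turn, then ask a follow-up.
    msgs = append_turn(msgs, "assistant", answer)
    msgs = append_turn(msgs, "user", "Describe the weather in the image.")
    answer, context, _ = model.chat(
        image=image, msgs=msgs, context=None,
        tokenizer=tokenizer, sampling=True, temperature=0.7,
    )
    return answer
```

The key point the sketch illustrates is that multi-turn chat is driven entirely by the growing `msgs` list: each assistant answer is appended before the next user question, while the image is passed on every call.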
@@ -618,9 +619,9 @@ Please contact cpm@modelbest.cn to obtain written authorization for commercial u
 
 ## Statement <!-- omit in toc -->
 
-As LMMs, OmniLMMs generate contents by learning a large amount of multimodal corpora, but they cannot comprehend, express personal opinions or make value judgement. Anything generated by OmniLMMs does not represent the views and positions of the model developers
+As LMMs, MiniCPM-V models (including OmniLMM) generate content by learning from large amounts of multimodal corpora, but they cannot comprehend, express personal opinions, or make value judgements. Anything generated by MiniCPM-V models does not represent the views and positions of the model developers.
 
-We will not be liable for any problems arising from the use of OmniLMM open source models, including but not limited to data security issues, risk of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination or misuse of the model.
+We will not be liable for any problems arising from the use of MiniCPM-V models, including but not limited to data security issues, risks of public opinion, or any risks and problems arising from the misdirection, misuse, dissemination, or abuse of the models.
 
 
 ## Institutions <!-- omit in toc -->
@@ -28,6 +28,7 @@
 
 ## Changelog <!-- omit in toc -->
 
+<!-- * [2024.05.22] We further improved edge-side inference speed, delivering a smooth 6-8 tokens/s experience, try it now! -->
 * [2024.05.20] We open-sourced MiniCPM-Llama3-V 2.5, which strengthens OCR capability, supports 30+ languages, and is the first to achieve GPT-4V-level multimodal capability on edge devices! We provide [efficient inference](#手机端部署) and [simple fine-tuning](./finetune/readme.md) support, try it now!
 * [2024.04.23] We added support for [vLLM](#vllm), welcome to try it!
 * [2024.04.18] We added a MiniCPM-V 2.0 [demo](https://huggingface.co/spaces/openbmb/MiniCPM-V-2) on HuggingFace Space, welcome to try it!