mirror of
https://github.com/OpenBMB/MiniCPM-V.git
synced 2026-02-05 02:09:20 +08:00
Update README
@@ -136,12 +136,12 @@ MiniCPM-o 2.6 can be easily used in various ways: (1) [llama.cpp](https://github
 **Model Architecture.**
 
-- **End-to-end Omni-modal Architecture.** Different modality encoder/decoders are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge.
+- **End-to-end Omni-modal Architecture.** Different modality encoder/decoders are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge. The model is trained in a fully end-to-end manner with only the CE loss.
 - **Omni-modal Live Streaming Mechanism.** (1) We change the offline modality encoder/decoders into online ones for **streaming inputs/outputs.** (2) We devise a **time-division multiplexing (TDM) mechanism** for omni-modality streaming processing in the LLM backbone. It divides parallel omni-modality streams into sequential information within small periodic time slices.
 - **Configurable Speech Modeling Design.** We devise a multimodal system prompt, including the traditional text system prompt and **a new audio system prompt that determines the assistant's voice**. This enables flexible voice configuration at inference time, and also facilitates end-to-end voice cloning and description-based voice creation.
 
 <div align="center">
-<img src="./assets/minicpm-o-26-framework.png" width="80%">
+<img src="./assets/minicpm-o-26-framework-v2.png" width="80%">
 </div>
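The TDM mechanism described in the diff above can be sketched roughly as follows: parallel per-modality token streams are cut into small periodic time slices and emitted sequentially, so the LLM backbone sees one interleaved sequence. This is a minimal illustrative sketch only; the function name, data layout, and slice size are assumptions, not the actual MiniCPM-o implementation.

```python
def tdm_interleave(streams: dict[str, list], slice_len: int) -> list:
    """Hypothetical sketch of time-division multiplexing (TDM):
    divide parallel modality streams into one sequential stream of
    small periodic time slices, e.g.
    [audio[0:k], video[0:k], audio[k:2k], video[k:2k], ...]."""
    out = []
    longest = max(len(tokens) for tokens in streams.values())
    for start in range(0, longest, slice_len):
        # Fixed modality order within each time slice.
        for name, tokens in streams.items():
            chunk = tokens[start:start + slice_len]
            if chunk:  # a stream may end before the others
                out.append((name, chunk))
    return out

# Example: two parallel modality streams multiplexed into one sequence.
seq = tdm_interleave({"audio": [1, 2, 3, 4], "video": ["a", "b"]}, slice_len=2)
# seq == [("audio", [1, 2]), ("video", ["a", "b"]), ("audio", [3, 4])]
```

The key property is that within each time slice, every modality contributes its tokens for that period before the sequence advances, which keeps the interleaved stream temporally aligned for streaming inference.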