Update README

This commit is contained in:
yiranyyu
2025-01-15 18:02:17 +08:00
parent b178622f73
commit e1e04af112
3 changed files with 4 additions and 4 deletions


@@ -136,12 +136,12 @@ MiniCPM-o 2.6 can be easily used in various ways: (1) [llama.cpp](https://github
**Model Architecture.**
- **End-to-end Omni-modal Architecture.** Different modality encoder/decoders are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge.
- **End-to-end Omni-modal Architecture.** Different modality encoder/decoders are connected and trained in an **end-to-end** fashion to fully exploit rich multimodal knowledge. The model is trained fully end-to-end with only a cross-entropy (CE) loss.
- **Omni-modal Live Streaming Mechanism.** (1) We change the offline modality encoder/decoders into online ones for **streaming inputs/outputs.** (2) We devise a **time-division multiplexing (TDM) mechanism** for omni-modality streaming processing in the LLM backbone. It divides parallel omni-modality streams into sequential information within small periodic time slices.
- **Configurable Speech Modeling Design.** We devise a multimodal system prompt, including the traditional text system prompt and **a new audio system prompt to determine the assistant's voice**. This enables flexible voice configuration at inference time, and also facilitates end-to-end voice cloning and description-based voice creation.
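The TDM idea above can be sketched as follows: parallel per-modality streams are chopped into small periodic time slices and flattened into one sequential stream for the LLM backbone. This is a minimal illustrative sketch, not the MiniCPM-o implementation; all names and the chunk representation are assumptions.

```python
def tdm_interleave(streams, slice_len):
    """Hypothetical sketch of time-division multiplexing (TDM).

    streams: dict mapping modality name -> list of feature chunks
             arriving in parallel (names/types are illustrative)
    slice_len: number of chunks taken from each stream per time slice
    Returns one sequential list of (modality, chunk) pairs.
    """
    out = []
    n = max(len(chunks) for chunks in streams.values())
    for start in range(0, n, slice_len):
        # Within each time slice, visit every modality in turn and
        # append whatever chunks it produced during that slice.
        for name, chunks in streams.items():
            out.extend((name, c) for c in chunks[start:start + slice_len])
    return out

# Toy usage: audio and video chunks arriving in parallel are serialized
# slice by slice into a single stream.
seq = tdm_interleave(
    {"audio": ["a0", "a1", "a2", "a3"], "video": ["v0", "v1"]},
    slice_len=2,
)
```

The key property is that no modality has to wait for another to finish: each periodic slice carries a bounded amount of every stream, so the backbone sees all modalities with low latency.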
<div align="center">
<img src="./assets/minicpm-o-26-framework.png" width="80%">
<img src="./assets/minicpm-o-26-framework-v2.png" width="80%">
</div>
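The configurable speech modeling design described above can be illustrated with a small sketch: a system message that combines a text system prompt with an optional audio system prompt (a reference voice sample) selecting the assistant's voice. The field names and message shape here are assumptions for illustration, not the exact MiniCPM-o API.

```python
def build_system_prompt(text_prompt, voice_sample=None):
    """Build a hypothetical multimodal system message.

    text_prompt: the traditional text system prompt
    voice_sample: optional audio (e.g. raw waveform bytes) acting as the
                  audio system prompt; it determines the assistant's
                  voice, enabling end-to-end voice cloning. None falls
                  back to the model's default voice.
    """
    content = [text_prompt]
    if voice_sample is not None:
        content.append({"type": "audio", "data": voice_sample})
    return {"role": "system", "content": content}

# Usage: clone a voice by passing a reference sample at inference time.
msg = build_system_prompt(
    "You are a helpful assistant.",
    voice_sample=b"\x00\x01",  # placeholder bytes standing in for audio
)
```

Because the voice is set through the system prompt rather than baked into the weights, swapping the audio sample (or a textual voice description) at inference time reconfigures the output voice without any retraining.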