diff --git a/docs/minicpm-llama-v-2-5_languages.md b/docs/minicpm-llama-v-2-5_languages.md
new file mode 100644
index 0000000..0eae344
--- /dev/null
+++ b/docs/minicpm-llama-v-2-5_languages.md
@@ -0,0 +1,176 @@
+- English
+- 中文
+- 한국어
+- 日本語
+- Deutsch
+- Français
+- Português
+- Español
+- မြန်မာဘာသာ
+- ไทย
+- Tiếng Việt
+- Türkçe
+- ܣܘܪܝܝܐ
+- العربية
+- हिन्दी
+- বাংলা
+- नेपाली
+- Türkmençe
+- Тоҷикӣ
+- Кыргызча
+- Русский
+- Українська
+- Беларуская
+- ქართული
+- Azərbaycanca
+- Հայերեն
+- Polski
+- Lietuvių
+- Eesti
+- Latviešu
+- Čeština
+- Slovenčina
+- Magyar
+- Slovenščina
+- Hrvatski
+- Bosanski
+- Crnogorski
+- Српски
+- Shqip
+- Română
+- Български
+- Македонски
+
+
+## 支持语言
+
+英语
+
+中文
+
+韩语
+
+日语
+
+德语
+
+法语
+
+葡萄牙语
+
+西班牙语
+
+缅甸语
+
+泰语
+
+越南语
+
+土耳其语
+
+叙利亚语
+
+阿拉伯语
+
+印地语
+
+孟加拉语
+
+尼泊尔语
+
+土库曼语
+
+塔吉克语
+
+吉尔吉斯语
+
+俄语
+
+乌克兰语
+
+白俄罗斯语
+
+格鲁吉亚语
+
+阿塞拜疆语
+
+亚美尼亚语
+
+波兰语
+
+立陶宛语
+
+爱沙尼亚语
+
+拉脱维亚语
+
+捷克语
+
+斯洛伐克语
+
+匈牙利语
+
+斯洛文尼亚语
+
+克罗地亚语
+
+波斯尼亚语
+
+黑山语
+
+塞尔维亚语
+
+阿尔巴尼亚语
+
+罗马尼亚语
+
+保加利亚语
+
+马其顿语
+
+
+
+## Supported Languages
+
+English
+Chinese
+Korean
+Japanese
+German
+French
+Portuguese
+Spanish
+Burmese
+Thai
+Vietnamese
+Turkish
+Syriac
+Arabic
+Hindi
+Bengali
+Nepali
+Turkmen
+Tajik
+Kyrgyz
+Russian
+Ukrainian
+Belarusian
+Georgian
+Azerbaijani
+Armenian
+Polish
+Lithuanian
+Estonian
+Latvian
+Czech
+Slovak
+Hungarian
+Slovenian
+Croatian
+Bosnian
+Montenegrin
+Serbian
+Albanian
+Romanian
+Bulgarian
+Macedonian
\ No newline at end of file
diff --git a/docs/minicpm_llama3_v2dot5.md b/docs/minicpm_llama3_v2dot5.md
index 7ab8700..356076b 100644
--- a/docs/minicpm_llama3_v2dot5.md
+++ b/docs/minicpm_llama3_v2dot5.md
@@ -15,7 +15,7 @@
   Leveraging the latest [RLAIF-V](https://github.com/RLHF-V/RLAIF-V/) method (the newest technique in the [RLHF-V](https://github.com/RLHF-V) [CVPR'24] series), MiniCPM-Llama3-V 2.5 exhibits more trustworthy behavior. It achieves a **10.3%** hallucination rate on Object HalBench, lower than GPT-4V-1106 (13.6%), achieving the best-level performance within the open-source community. [Data released](https://huggingface.co/datasets/openbmb/RLAIF-V-Dataset).
 
 - 🌏 **Multilingual Support.**
-  Thanks to the strong multilingual capabilities of Llama 3 and the cross-lingual generalization technique from [VisCPM](https://github.com/OpenBMB/VisCPM), MiniCPM-Llama3-V 2.5 extends its bilingual (Chinese-English) multimodal capabilities to **over 30 languages including German, French, Spanish, Italian, Korean etc.** [All Supported Languages](./assets/minicpm-llama-v-2-5_languages.md).
+  Thanks to the strong multilingual capabilities of Llama 3 and the cross-lingual generalization technique from [VisCPM](https://github.com/OpenBMB/VisCPM), MiniCPM-Llama3-V 2.5 extends its bilingual (Chinese-English) multimodal capabilities to **over 30 languages including German, French, Spanish, Italian, Korean etc.** [All Supported Languages](../docs/minicpm-llama-v-2-5_languages.md).
 
 - 🚀 **Efficient Deployment.**
   MiniCPM-Llama3-V 2.5 systematically employs **model quantization, CPU optimizations, NPU optimizations and compilation optimizations**, achieving high-efficiency deployment on end-side devices. For mobile phones with Qualcomm chips, we have integrated the NPU acceleration framework QNN into llama.cpp for the first time. After systematic optimization, MiniCPM-Llama3-V 2.5 has realized a **150x acceleration in end-side MLLM image encoding** and a **3x speedup in language decoding**.
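
For reviewers who want to exercise the multilingual capability that the updated link documents, a minimal sketch follows. It assumes the `model.chat` interface published on the Hugging Face model card for `openbmb/MiniCPM-Llama3-V-2_5`; the image path and the German question are illustrative placeholders and are not part of this patch.

```python
# Minimal sketch: querying MiniCPM-Llama3-V 2.5 in a non-English language.
# Loading and the `model.chat` call follow the pattern on the Hugging Face
# model card; "example.jpg" and the German prompt are placeholders.
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained(
    "openbmb/MiniCPM-Llama3-V-2_5", trust_remote_code=True, torch_dtype=torch.float16
).to(device="cuda")
model.eval()
tokenizer = AutoTokenizer.from_pretrained(
    "openbmb/MiniCPM-Llama3-V-2_5", trust_remote_code=True
)

image = Image.open("example.jpg").convert("RGB")  # placeholder image
# Ask the question in German to test cross-lingual generalization.
msgs = [{"role": "user", "content": "Was ist auf diesem Bild zu sehen?"}]

answer = model.chat(
    image=image,
    msgs=msgs,
    tokenizer=tokenizer,
    sampling=True,    # sampled decoding, as in the model card example
    temperature=0.7,
)
print(answer)
```

Any language from the new `docs/minicpm-llama-v-2-5_languages.md` list can be substituted for German in the prompt.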