diff --git a/README.md b/README.md
index 7ae68c8..32db265 100644
--- a/README.md
+++ b/README.md
@@ -55,142 +55,6 @@
 We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and speech streams from the microphone and emits speech output. While still preliminary, we find the model can **replicate some of the fun cases shown in the Gemini Demo video, without any video editing**.
-### Evaluation
-
-| Model | Size | MME | MMB dev (en) | MMMU val | MMHal-Bench | Object HalBench | SeedBench-I | MathVista | LLaVA Bench W |
-|:------|:----:|:---:|:------------:|:--------:|:-----------:|:---------------:|:-----------:|:---------:|:-------------:|
-| GPT-4V† | - | 1409 | 75.1 | 56.8 | 3.53 / 70.8 | 86.4 / 92.7 | 71.6 | 47.8 | 93.1 |
-| Qwen-VL-Plus† | - | 1681 | 66.2 | 45.2 | - | - | 65.7 | 36.0 | 73.7 |
-| Yi-VL 6B | 6.7B | - | 68.2 | 39.1 | - | - | 66.1 | 28.0 | 39.9 |
-| Qwen-VL-Chat | 9.6B | 1488 | 60.6 | 35.9 | 2.93 / 59.4 | 56.2 / 80.0 | 64.8 | 33.8 | 67.7 |
-| CogVLM | 17.4B | 1438 | 63.7 | 32.1 | 2.68 / 52.1 | 73.6 / 87.4 | 68.8 | 34.7 | 73.9 |
-| LLaVA 1.5 | 13.6B | 1531 | 68.2 | 36.4 | 2.71 / 51.0 | 53.7 / 77.4 | 68.1 | 26.4 | 64.6 |
-| OmniLMM-12B | 11.6B | 1637 | 71.6 | 40.7 | 3.45 / 68.8 | 90.3 / 95.5 | 71.1 | 34.9 | 72.0 |
-
-†: Proprietary models
-
-### Examples
-
-<!-- demo images showing example interactions -->
-
-We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. Video frames are described in text using OmniLMM-12B, and GPT-3.5 (text-only) is employed to generate responses according to the descriptions and user prompts. The demo video is a raw recording without any editing.
-
-
-## OmniLMM-3B
-**OmniLMM-3B** (i.e., MiniCPM-V) is an efficient version with promising performance for deployment. The model is built on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of OmniLMM-3B include:
-
-- ⚡️ **High Efficiency.**
-
-  OmniLMM-3B can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into 64 tokens via a perceiver resampler, significantly fewer than in other LMMs that use MLP-based connectors (typically > 512 tokens). This allows OmniLMM-3B to operate with **much lower memory cost and higher speed during inference**.
-
-- 🔥 **Promising Performance.**
-
-  OmniLMM-3B achieves **state-of-the-art performance** on multiple benchmarks (including MMMU, MME, and MMBench) among models of comparable size, surpassing existing LMMs built on Phi-2. It even **achieves comparable or better performance than the 9.6B Qwen-VL-Chat**.
-
-- 🙌 **Bilingual Support.**
-
-  OmniLMM-3B is **the first edge-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from our ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038).
-
 ### Evaluation
@@ -304,6 +168,110 @@ We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal
 †: Proprietary models
+
+### Examples
+
+<!-- demo images showing example interactions -->
+
+We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. Video frames are described in text using OmniLMM-12B, and GPT-3.5 (text-only) is employed to generate responses according to the descriptions and user prompts. The demo video is a raw recording without any editing.
+
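A minimal sketch of the pipeline described above, assuming a hypothetical `caption_frame()` wrapper around OmniLMM-12B and the official OpenAI Python client for the text-only GPT-3.5 call; speech recognition and synthesis are omitted, and the actual demo code may be organized differently:

```python
# Illustrative only: OmniLMM-12B turns the current camera frame into a text
# description, and text-only GPT-3.5 answers the user based on that description.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def caption_frame(frame) -> str:
    """Hypothetical wrapper: ask OmniLMM-12B to describe one video frame in text."""
    raise NotImplementedError


def respond(frame, user_text: str) -> str:
    description = caption_frame(frame)
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {
                "role": "system",
                "content": "You are a real-time assistant. The camera currently "
                           f"shows: {description}",
            },
            {"role": "user", "content": user_text},
        ],
    )
    return completion.choices[0].message.content
```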
+
+## OmniLMM-3B
+**OmniLMM-3B** (i.e., MiniCPM-V) is an efficient version with promising performance for deployment. The model is built on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of OmniLMM-3B include:
+
+- ⚡️ **High Efficiency.**
+
+  OmniLMM-3B can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into 64 tokens via a perceiver resampler (sketched below), significantly fewer than in other LMMs that use MLP-based connectors (typically > 512 tokens). This allows OmniLMM-3B to operate with **much lower memory cost and higher speed during inference**.
+
+- 🔥 **Promising Performance.**
+
+  OmniLMM-3B achieves **state-of-the-art performance** on multiple benchmarks (including MMMU, MME, and MMBench) among models of comparable size, surpassing existing LMMs built on Phi-2. It even **achieves comparable or better performance than the 9.6B Qwen-VL-Chat**.
+
+- 🙌 **Bilingual Support.**
+
+  OmniLMM-3B is **the first edge-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from our ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038).
+
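The 64-token visual compression mentioned in the High Efficiency item follows the perceiver-resampler pattern: a small set of learned queries cross-attends to the vision encoder's patch features, so the language model always receives a fixed number of visual tokens. The PyTorch snippet below is only a schematic sketch of that idea (a single attention layer with illustrative dimensions), not the actual OmniLMM-3B module:

```python
# Schematic perceiver resampler: 64 learned queries attend over all patch
# features, producing a fixed-size visual token sequence for the LLM.
import torch
import torch.nn as nn


class PerceiverResampler(nn.Module):
    def __init__(self, dim: int = 1152, num_queries: int = 64, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, dim), e.g. hundreds of ViT patches
        batch = patch_features.size(0)
        q = self.norm_q(self.queries).expand(batch, -1, -1)
        kv = self.norm_kv(patch_features)
        out, _ = self.attn(q, kv, kv)  # (batch, 64, dim), independent of num_patches
        return out


# Example: compress 729 patch tokens into 64 visual tokens.
feats = torch.randn(1, 729, 1152)
print(PerceiverResampler()(feats).shape)  # torch.Size([1, 64, 1152])
```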
+### Evaluation
+
+| Model | Size | MME | MMB dev (en) | MMB dev (zh) | MMMU val | CMMMU val |
+|:------|:----:|:---:|:------------:|:------------:|:--------:|:---------:|
+| LLaVA-Phi | 3B | 1335 | 59.8 | - | - | - |
+| MobileVLM | 3B | 1289 | 59.6 | - | - | - |
+| Imp-v1 | 3B | 1434 | 66.5 | - | - | - |
+| Qwen-VL-Chat | 9.6B | 1487 | 60.6 | 56.7 | 35.9 | 30.7 |
+| CogVLM | 17.4B | 1438 | 63.7 | 53.8 | 32.1 | - |
+| OmniLMM-3B | 3B | 1452 | 67.3 | 61.9 | 34.7 | 32.1 |
+
 ### Examples
diff --git a/assets/eval_radar.png.png b/assets/eval_radar.png.png
new file mode 100644
index 0000000..18a76f8
Binary files /dev/null and b/assets/eval_radar.png.png differ