diff --git a/README.md b/README.md
index 7ae68c8..32db265 100644
--- a/README.md
+++ b/README.md
@@ -55,142 +55,6 @@
We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. The assistant accepts video streams from the camera and speech streams from the microphone and emits speech output. While still preliminary, we find the model can **replicate some of the fun cases shown in the Gemini Demo video, without any video editing**.
-### Evaluation
-
-
@@ -304,6 +168,110 @@ We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal
†: Proprietary models
+### Examples
+
+We combine the OmniLMM-12B and GPT-3.5 (text-only) into a **real-time multimodal interactive assistant**. Video frames are described in text using OmniLMM-12B, and GPT-3.5 (text-only) is employed to generate responses based on the descriptions and user prompts. The demo video is a raw recording without editing.
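+
+As a rough illustration of this two-stage loop, the hedged sketch below uses a hypothetical `describe_frame` placeholder where OmniLMM-12B would caption the current camera frame, and the OpenAI `gpt-3.5-turbo` chat endpoint as the text-only responder; speech recognition and speech synthesis are omitted. It is a sketch of the idea, not the demo's implementation.
+
+```python
+import cv2                 # camera capture
+from openai import OpenAI  # text-only GPT-3.5 client
+
+client = OpenAI()
+
+def describe_frame(frame) -> str:
+    # Hypothetical hook: in the real assistant, OmniLMM-12B captions the frame here.
+    return "A person is holding a red mug in front of a laptop."
+
+def respond(user_prompt: str, frame_description: str) -> str:
+    # GPT-3.5 never sees pixels, only the text description plus the user's prompt.
+    resp = client.chat.completions.create(
+        model="gpt-3.5-turbo",
+        messages=[
+            {"role": "system", "content": "You are a real-time visual assistant."},
+            {"role": "user", "content": f"Scene: {frame_description}\nUser: {user_prompt}"},
+        ],
+    )
+    return resp.choices[0].message.content
+
+cap = cv2.VideoCapture(0)
+ok, frame = cap.read()
+if ok:
+    print(respond("What am I holding?", describe_frame(frame)))
+cap.release()
+```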
+
+## OmniLMM-3B
+**OmniLMM-3B** (i.e., MiniCPM-V) is an efficient version of the model with promising performance, intended for deployment. It is built on SigLip-400M and [MiniCPM-2.4B](https://github.com/OpenBMB/MiniCPM/), connected by a perceiver resampler. Notable features of OmniLMM-3B include:
+
+- ⚡️ **High Efficiency.**
+
+  OmniLMM-3B can be **efficiently deployed on most GPU cards and personal computers**, and **even on end devices such as mobile phones**. For visual encoding, we compress the image representations into 64 tokens via a perceiver resampler, significantly fewer than in MLP-based LMMs (typically > 512 tokens). This allows OmniLMM-3B to run with **much lower memory cost and higher inference speed**. A minimal sketch of this resampling idea appears after this feature list.
+
+- 🔥 **Promising Performance.**
+
+  OmniLMM-3B achieves **state-of-the-art performance** on multiple benchmarks (including MME, MMBench, and MMMU) among models of comparable size, surpassing existing LMMs built on Phi-2. It even **achieves comparable or better performance than the 9.6B Qwen-VL-Chat**.
+
+- 🙌 **Bilingual Support.**
+
+ OmniLMM-3B is **the first edge-deployable LMM supporting bilingual multimodal interaction in English and Chinese**. This is achieved by generalizing multimodal capabilities across languages, a technique from our ICLR 2024 spotlight [paper](https://arxiv.org/abs/2308.12038).
+
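+As context for the "High Efficiency" point above, the sketch below shows a generic perceiver-resampler pattern in PyTorch: a small set of 64 learned queries cross-attends to the vision encoder's patch features, producing a fixed set of 64 visual tokens. The dimensions and layer layout are illustrative assumptions, not the released OmniLMM-3B module.
+
+```python
+import torch
+import torch.nn as nn
+
+class PerceiverResampler(nn.Module):
+    """64 learned queries cross-attend to patch features, compressing a
+    variable-length patch sequence into a fixed number of visual tokens."""
+
+    def __init__(self, dim: int = 1152, num_queries: int = 64, num_heads: int = 8):
+        super().__init__()
+        self.queries = nn.Parameter(torch.randn(num_queries, dim) * 0.02)
+        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
+        self.norm = nn.LayerNorm(dim)
+
+    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
+        # patch_feats: (batch, num_patches, dim) from the vision encoder (e.g. SigLip)
+        q = self.queries.unsqueeze(0).expand(patch_feats.size(0), -1, -1)
+        out, _ = self.attn(q, patch_feats, patch_feats)  # queries attend to patches
+        return self.norm(out)                            # (batch, 64, dim)
+
+# Example: 729 patch features are compressed into 64 tokens for the language model.
+tokens = PerceiverResampler()(torch.randn(1, 729, 1152))
+print(tokens.shape)  # torch.Size([1, 64, 1152])
+```
+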
+### Evaluation
+
+| Model | Size | MME | MMB dev (en) | MMB dev (zh) | MMMU val | CMMMU val |
+|:-------------|:-----:|:----:|:------------:|:------------:|:--------:|:---------:|
+| LLaVA-Phi | 3B | 1335 | 59.8 | - | - | - |
+| MobileVLM | 3B | 1289 | 59.6 | - | - | - |
+| Imp-v1 | 3B | 1434 | 66.5 | - | - | - |
+| Qwen-VL-Chat | 9.6B | 1487 | 60.6 | 56.7 | 35.9 | 30.7 |
+| CogVLM | 17.4B | 1438 | 63.7 | 53.8 | 32.1 | - |
+| OmniLMM-3B | 3B | 1452 | 67.3 | 61.9 | 34.7 | 32.1 |
+
### Examples
diff --git a/assets/eval_radar.png.png b/assets/eval_radar.png.png
new file mode 100644
index 0000000..18a76f8
Binary files /dev/null and b/assets/eval_radar.png.png differ