diff --git a/docs/turn_taking_gallery.md b/docs/turn_taking_gallery.md
index d2e2fd1..d8ef62a 100644
--- a/docs/turn_taking_gallery.md
+++ b/docs/turn_taking_gallery.md
@@ -45,6 +45,37 @@ document.querySelectorAll('.tag-button').forEach(button => {
+- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } __HumAware VAD__
+{: data-tags="vad-models"}
+
+ ---
+
+ **Description**
+ **HumAware-VAD** is a fine-tuned version of **Silero-VAD**, specifically trained to **distinguish humming from actual speech**.
+ Standard VAD models often misclassify humming as speech, leading to inaccurate speech segmentation.
+ **HumAware-VAD** improves detection accuracy in environments with background humming, music, and other non-speech vocal sounds.
+
+ **Install Instructions**
+ ```sh
+ pip install humaware-vad
+ ```
+ **Use with FastRTC**
+ ```sh
+ git clone https://github.com/CuriousMonkey7/HumAwareVad.git
+ cd HumAwareVad
+ python app.py
+ ```
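+ Alternatively, here is a minimal sketch of wiring the model into your own FastRTC app. It assumes the `humaware-vad` package exposes a `HumAwareVADModel` class and that `ReplyOnPause` accepts a custom pause-detection model via its `model` argument; check the repository's `app.py` for the exact API.
+ ```python
+ from fastrtc import ReplyOnPause, Stream
+ from humaware_vad import HumAwareVADModel  # assumed import path; see the repo's app.py
+
+
+ def echo(audio):
+     # Echo the caller's audio back after each pause; swap in your own STT/LLM/TTS logic.
+     yield audio
+
+
+ # Assumption: the humming-aware model is passed in place of the default Silero VAD.
+ stream = Stream(
+     handler=ReplyOnPause(echo, model=HumAwareVADModel()),
+     modality="audio",
+     mode="send-receive",
+ )
+
+ stream.ui.launch()
+ ```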
+
+ [:octicons-arrow-right-24: Demo](https://github.com/CuriousMonkey7/HumAwareVad/blob/main/app.py)
+
+ [:octicons-code-16: Repository](https://github.com/CuriousMonkey7/HumAwareVad)
+
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } __Walkie Talkie__
{: data-tags="turn-taking-algorithm"}
@@ -141,4 +172,4 @@ In this gallery, you can find a collection of turn-taking algorithms and VAD mod
3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally, your model package should be pip installable so others can try it out easily.
!!! tip "Package Naming Convention"
- It is recommended to name your package `fastrtc-
` so developers can easily find it on [pypi](https://pypi.org/search/?q=fastrtc-).
\ No newline at end of file
+ It is recommended to prefix your package name with `fastrtc-` so developers can easily find it on [pypi](https://pypi.org/search/?q=fastrtc-).