update v0.2.0

This commit is contained in:
杍超
2025-04-01 16:04:53 +08:00
198 changed files with 27674 additions and 2392 deletions

docs/CNAME Normal file

@@ -0,0 +1 @@
fastrtc.org


@@ -0,0 +1 @@
<?xml version="1.0" encoding="UTF-8"?><svg id="Discord-Logo" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 126.644 96"><defs><style>.cls-1{fill:#fff;}</style></defs><path id="Discord-Symbol-White" class="cls-1" d="M81.15,0c-1.2376,2.1973-2.3489,4.4704-3.3591,6.794-9.5975-1.4396-19.3718-1.4396-28.9945,0-.985-2.3236-2.1216-4.5967-3.3591-6.794-9.0166,1.5407-17.8059,4.2431-26.1405,8.0568C2.779,32.5304-1.6914,56.3725.5312,79.8863c9.6732,7.1476,20.5083,12.603,32.0505,16.0884,2.6014-3.4854,4.8998-7.1981,6.8698-11.0623-3.738-1.3891-7.3497-3.1318-10.8098-5.1523.9092-.6567,1.7932-1.3386,2.6519-1.9953,20.281,9.547,43.7696,9.547,64.0758,0,.8587.7072,1.7427,1.3891,2.6519,1.9953-3.4601,2.0457-7.0718,3.7632-10.835,5.1776,1.97,3.8642,4.2683,7.5769,6.8698,11.0623,11.5419-3.4854,22.3769-8.9156,32.0509-16.0631,2.626-27.2771-4.496-50.9172-18.817-71.8548C98.9811,4.2684,90.1918,1.5659,81.1752.0505l-.0252-.0505ZM42.2802,65.4144c-6.2383,0-11.4159-5.6575-11.4159-12.6535s4.9755-12.6788,11.3907-12.6788,11.5169,5.708,11.4159,12.6788c-.101,6.9708-5.026,12.6535-11.3907,12.6535ZM84.3576,65.4144c-6.2637,0-11.3907-5.6575-11.3907-12.6535s4.9755-12.6788,11.3907-12.6788,11.4917,5.708,11.3906,12.6788c-.101,6.9708-5.026,12.6535-11.3906,12.6535Z"/></svg>


@@ -1,3 +1,5 @@
Any of the parameters for the `Stream` class can be passed to the [`WebRTC`](../userguide/gradio) component directly.
## Track Constraints
You can specify the `track_constraints` parameter to control how the data is streamed to the server. The full documentation on track constraints is [here](https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints#constraints).
@@ -10,21 +12,21 @@ track_constraints = {
"height": {"exact": 500},
"frameRate": {"ideal": 30},
}
webrtc = Stream(
    handler=...,
    track_constraints=track_constraints,
    modality="video",
    mode="send-receive")
```
!!! warning
    WebRTC may not enforce your constraints. For example, it may rescale your video
    (while keeping the same resolution) in order to maintain the desired frame rate (or reach a better one). If you really want to enforce height, width and resolution constraints, use the `rtp_params` parameter and set `"degradationPreference": "maintain-resolution"`.
```python
image = Stream(
    modality="video",
    mode="send",
    track_constraints=track_constraints,
    rtp_params={"degradationPreference": "maintain-resolution"}
)
```
@@ -36,7 +38,8 @@ webrtc = WebRTC(track_constraints=track_constraints,
You can configure how the connection is created on the client by passing an `rtc_configuration` parameter to the `WebRTC` component constructor.
See the list of available arguments [here](https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/RTCPeerConnection#configuration).
!!! warning
    When deploying on a remote server, the `rtc_configuration` parameter must be passed in. See [Deployment](../deployment).
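For example, a minimal sketch (the STUN URL below is Google's public server; the handler is a placeholder):

```python
from fastrtc import Stream

stream = Stream(
    handler=...,  # your handler here
    rtc_configuration={
        # Public STUN server for NAT traversal; use TURN credentials
        # in production (see Deployment)
        "iceServers": [{"urls": "stun:stun.l.google.com:19302"}]
    },
    modality="audio",
    mode="send-receive",
)
```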
## Reply on Pause Voice Activity Detection
@@ -50,58 +53,52 @@ The `ReplyOnPause` class runs a Voice Activity Detection (VAD) algorithm to dete
The following parameters control this algorithm:
```python
from fastrtc import AlgoOptions, ReplyOnPause, Stream
algo_options = AlgoOptions(
    audio_chunk_duration=0.6,  # (1)
    started_talking_threshold=0.2,  # (2)
    speech_threshold=0.1,  # (3)
)

Stream(
    handler=ReplyOnPause(..., algo_options=algo_options),
    modality="audio",
    mode="send-receive"
)
```
1. This is the length (in seconds) of audio chunks.
2. If the chunk has more than 0.2 seconds of speech, the user started talking.
3. If, after the user started speaking, there is a chunk with less than 0.1 seconds of speech, the user stopped speaking.
## Stream Handler Input Audio
You can configure the sampling rate of the audio passed to the `ReplyOnPause` or `StreamHandler` instance with the `input_sampling_rate` parameter. The current default is `48000`.
```python
from fastrtc import ReplyOnPause, Stream

stream = Stream(
    handler=ReplyOnPause(..., input_sampling_rate=24000),
    modality="audio",
    mode="send-receive"
)
```
## Stream Handler Output Audio
You can configure the output audio chunk size of `ReplyOnPause` (and any `StreamHandler`)
with the `output_sample_rate` and `output_frame_size` parameters.
The following code (which uses the default values of these parameters) specifies that each output chunk will be a frame of 960 samples at a sample rate of 24,000 Hz, so each chunk corresponds to 0.04 seconds of audio.
```python
from fastrtc import ReplyOnPause, Stream

stream = Stream(
    handler=ReplyOnPause(..., output_sample_rate=24000, output_frame_size=960),
    modality="audio",
    mode="send-receive"
)
```
!!! tip
@@ -109,7 +106,6 @@ demo.launch()
    In general it is best to leave these settings untouched. In some cases,
    lowering the `output_frame_size` can yield smoother audio playback.
## Audio Icon
You can display an icon of your choice instead of the default wave animation for audio streaming.
@@ -117,7 +113,15 @@ Pass any local path or url to an image (svg, png, jpeg) to the components `icon`
You can control the button color and pulse color with `icon_button_color` and `pulse_color` parameters. They can take any valid css color.
=== "Code"
# <<<<<<< HEAD
!!! warning
The `icon` parameter is only supported in the `WebRTC` component.
> > > > > > > video-chat
> > > > > > > === "Code"
``` python
audio = WebRTC(
label="Stream",
@@ -128,8 +132,9 @@ You can control the button color and pulse color with `icon_button_color` and `p
)
```
<img src="https://github.com/user-attachments/assets/fd2e70a3-1698-4805-a8cb-9b7b3bcf2198">
=== "Code Custom colors"
``` python
audio = WebRTC(
label="Stream",
rtc_configuration=rtc_configuration,
@@ -139,16 +144,23 @@ You can control the button color and pulse color with `icon_button_color` and `p
icon_button_color="black",
pulse_color="black",
)
```
<img src="https://github.com/user-attachments/assets/39e9bb0b-53fb-448e-be44-d37f6785b4b6">
## Changing the Button Text
You can supply a `button_labels` dictionary to change the text displayed in the `Start`, `Stop` and `Waiting` buttons that are displayed in the UI.
The keys must be `"start"`, `"stop"`, and `"waiting"`.
!!! warning
    The `button_labels` parameter is only supported in the `WebRTC` component.

```python
webrtc = WebRTC(
label="Video Chat",
modality="audio-video",


@@ -1,172 +1,340 @@
<style>
.tag-button {
cursor: pointer;
opacity: 0.5;
transition: opacity 0.2s ease;
}
.tag-button > code {
color: var(--supernova);
}
.tag-button.active {
opacity: 1;
}
</style>
A collection of applications built with FastRTC. Click on the tags below to find the app you're looking for!
<div class="tag-buttons">
<button class="tag-button" data-tag="audio"><code>Audio</code></button>
<button class="tag-button" data-tag="video"><code>Video</code></button>
<button class="tag-button" data-tag="llm"><code>LLM</code></button>
<button class="tag-button" data-tag="computer-vision"><code>Computer Vision</code></button>
<button class="tag-button" data-tag="real-time-api"><code>Real-time API</code></button>
<button class="tag-button" data-tag="voice-chat"><code>Voice Chat</code></button>
<button class="tag-button" data-tag="code-generation"><code>Code Generation</code></button>
<button class="tag-button" data-tag="stopword"><code>Stopword</code></button>
<button class="tag-button" data-tag="transcription"><code>Transcription</code></button>
<button class="tag-button" data-tag="sambanova"><code>SambaNova</code></button>
<button class="tag-button" data-tag="groq"><code>Groq</code></button>
<button class="tag-button" data-tag="elevenlabs"><code>ElevenLabs</code></button>
<button class="tag-button" data-tag="kyutai"><code>Kyutai</code></button>
<button class="tag-button" data-tag="agentic"><code>Agentic</code></button>
<button class="tag-button" data-tag="local"><code>Local Models</code></button>
</div>
<script>
function filterCards() {
const activeButtons = document.querySelectorAll('.tag-button.active');
const selectedTags = Array.from(activeButtons).map(button => button.getAttribute('data-tag'));
const cards = document.querySelectorAll('.grid.cards > ul > li > p[data-tags]');
cards.forEach(card => {
const cardTags = card.getAttribute('data-tags').split(',');
const shouldShow = selectedTags.length === 0 || selectedTags.some(tag => cardTags.includes(tag));
card.parentElement.style.display = shouldShow ? 'block' : 'none';
});
}
document.querySelectorAll('.tag-button').forEach(button => {
button.addEventListener('click', () => {
button.classList.toggle('active');
filterCards();
});
});
</script>
<div class="grid cards" markdown>
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } **Gemini Audio Video Chat**
{: data-tags="audio,video,real-time-api"}
---
Stream BOTH your webcam video and audio feeds to Google Gemini. You can also upload images to augment your conversation!
<video width=98% src="https://github.com/user-attachments/assets/9636dc97-4fee-46bb-abb8-b92e69c08c71" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/gemini-audio-video)
[:octicons-arrow-right-24: Gradio UI](https://huggingface.co/spaces/fastrtc/gemini-audio-video)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/gemini-audio-video/blob/main/app.py)
- :speaking_head:{ .lg .middle } **Google Gemini Real Time Voice API**
{: data-tags="audio,real-time-api,voice-chat"}
---
Talk to Gemini in real time using Google's voice API.
<video width=98% src="https://github.com/user-attachments/assets/ea6d18cb-8589-422b-9bba-56332d9f61de" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/talk-to-gemini)
[:octicons-arrow-right-24: Gradio UI](https://huggingface.co/spaces/fastrtc/talk-to-gemini-gradio)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/talk-to-gemini/blob/main/app.py)
- :speaking_head:{ .lg .middle } **OpenAI Real Time Voice API**
{: data-tags="audio,real-time-api,voice-chat"}
---
Talk to ChatGPT in real time using OpenAI's voice API.
<video width=98% src="https://github.com/user-attachments/assets/3e10cb15-ff1b-4b17-b141-ff0ad852e613" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/talk-to-openai)
[:octicons-arrow-right-24: Gradio UI](https://huggingface.co/spaces/fastrtc/talk-to-openai-gradio)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/talk-to-openai/blob/main/app.py)
- :robot:{ .lg .middle } **Hello Computer**
{: data-tags="llm,stopword,sambanova"}
---
Say computer before asking your question!
<video width=98% src="https://github.com/user-attachments/assets/afb2a3ef-c1ab-4cfb-872d-578f895a10d5" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/hello-computer)
[:octicons-arrow-right-24: Gradio UI](https://huggingface.co/spaces/fastrtc/hello-computer-gradio)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/hello-computer/blob/main/app.py)
- :robot:{ .lg .middle } **Llama Code Editor**
{: data-tags="audio,llm,code-generation,groq,stopword"}
---
Create and edit HTML pages with just your voice! Powered by Groq!
<video width=98% src="https://github.com/user-attachments/assets/98523cf3-dac8-4127-9649-d91a997e3ef5" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/llama-code-editor)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/llama-code-editor/blob/main/app.py)
- :speaking_head:{ .lg .middle } **SmolAgents with Voice**
{: data-tags="audio,llm,voice-chat,agentic"}
---
Build a voice-based smolagent to find a coworking space!
<video width=98% src="https://github.com/user-attachments/assets/ddf39ef7-fa7b-417e-8342-de3b9e311891" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/burtenshaw/coworking_agent)
[:octicons-arrow-right-24: Gradio UI](https://huggingface.co/spaces/burtenshaw/coworking_agent)
[:octicons-code-16: Code](https://huggingface.co/spaces/burtenshaw/coworking_agent/blob/main/app.py)
- :speaking_head:{ .lg .middle } **Talk to Claude**
{: data-tags="audio,llm,voice-chat"}
---
Use the Anthropic and Play.Ht APIs to have an audio conversation with Claude.
<video width=98% src="https://github.com/user-attachments/assets/fb6ef07f-3ccd-444a-997b-9bc9bdc035d3" controls style="text-align: center"></video>
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/talk-to-claude/blob/main/app.py)
- :musical_note:{ .lg .middle } **LLM Voice Chat**
{: data-tags="audio,llm,voice-chat,groq,elevenlabs"}
---
Talk to an LLM with ElevenLabs!
<video width=98% src="https://github.com/user-attachments/assets/584e898b-91af-4816-bbb0-dd3216eb80b0" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/llm-voice-chat)
[:octicons-arrow-right-24: Gradio UI](https://huggingface.co/spaces/fastrtc/llm-voice-chat-gradio)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/llm-voice-chat/blob/main/app.py)
- :musical_note:{ .lg .middle } **Whisper Transcription**
{: data-tags="audio,transcription,groq"}
---
Have whisper transcribe your speech in real time!
<video width=98% src="https://github.com/user-attachments/assets/87603053-acdc-4c8a-810f-f618c49caafb" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/whisper-realtime)
[:octicons-arrow-right-24: Gradio UI](https://huggingface.co/spaces/fastrtc/whisper-realtime-gradio)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/whisper-realtime/blob/main/app.py)
- :robot:{ .lg .middle } **Talk to Sambanova**
{: data-tags="llm,stopword,sambanova"}
---
Talk to Llama 3.2 with the SambaNova API.
<video width=98% src="https://github.com/user-attachments/assets/92e4a45a-b5e9-45cd-b7f4-9339ceb343e1" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/talk-to-sambanova)
[:octicons-arrow-right-24: Gradio UI](https://huggingface.co/spaces/fastrtc/talk-to-sambanova-gradio)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/talk-to-sambanova/blob/main/app.py)
- :speaking_head:{ .lg .middle } **Hello Llama: Stop Word Detection**
{: data-tags="audio,llm,code-generation,stopword,sambanova"}
---
A code editor built with Llama 3.3 70b that is triggered by the phrase "Hello Llama".
Build a Siri-like coding assistant in 100 lines of code!
<video width=98% src="https://github.com/user-attachments/assets/3e10cb15-ff1b-4b17-b141-ff0ad852e613" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor)
[:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor/blob/main/app.py)
- :speaking_head:{ .lg .middle } **Audio Input/Output with mini-omni2**
{: data-tags="audio,llm,voice-chat"}
---
Build a GPT-4o like experience with mini-omni2, an audio-native LLM.
<video width=98% src="https://github.com/user-attachments/assets/58c06523-fc38-4f5f-a4ba-a02a28e7fa9e" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc)
[:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc/blob/main/app.py)
- :speaking_head:{ .lg .middle } **Kyutai Moshi**
{: data-tags="audio,llm,voice-chat,kyutai"}
---
Kyutai's moshi is a novel speech-to-speech model for modeling human conversations.
<video width=98% src="https://github.com/user-attachments/assets/becc7a13-9e89-4a19-9df2-5fb1467a0137" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/talk-to-moshi)
[:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/talk-to-moshi/blob/main/app.py)
- :speaking_head:{ .lg .middle } **Talk to Ultravox**
{: data-tags="audio,llm,voice-chat"}
---
Talk to Fixie.AI's audio-native Ultravox LLM with the transformers library.
<video width=98% src="https://github.com/user-attachments/assets/e6e62482-518c-4021-9047-9da14cd82be1" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox)
[:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox/blob/main/app.py)
- :speaking_head:{ .lg .middle } **Talk to Llama 3.2 3b**
{: data-tags="audio,llm,voice-chat"}
---
Use the Lepton API to make Llama 3.2 talk back to you!
<video width=98% src="https://github.com/user-attachments/assets/3ee37a6b-0892-45f5-b801-73188fdfad9a" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc)
[:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc/blob/main/app.py)
- :robot:{ .lg .middle } **Talk to Qwen2-Audio**
{: data-tags="audio,llm,voice-chat"}
---
Qwen2-Audio is a SOTA audio-to-text LLM developed by Alibaba.
<video width=98% src="https://github.com/user-attachments/assets/c821ad86-44cc-4d0c-8dc4-8c02ad1e5dc8" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc)
[:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc/blob/main/app.py)
- :camera:{ .lg .middle } **Yolov10 Object Detection**
{: data-tags="video,computer-vision"}
---
Run the Yolov10 model on a user webcam stream in real time!
<video width=98% src="https://github.com/user-attachments/assets/f82feb74-a071-4e81-9110-a01989447ceb" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/object-detection)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/object-detection/blob/main/app.py)
- :camera:{ .lg .middle } **Video Object Detection with RT-DETR**
{: data-tags="video,computer-vision"}
---
Upload a video and stream out frames with detected objects (powered by the RT-DETR model).
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc)
[:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc/blob/main/app.py)
- :speaker:{ .lg .middle } **Text-to-Speech with Parler**
{: data-tags="audio"}
---
Stream out audio generated by Parler TTS!
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc)
[:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc/blob/main/app.py)
- :speaking_head:{ .lg .middle } **Real Time Transcription with On-device Whisper 🤗**
{: data-tags="audio,transcription,local"}
---
Transcribe speech in real time using Whisper via the Transformers library, running on your device!
[:octicons-code-16: Code](https://github.com/sofi444/realtime-transcription-fastrtc/blob/main/main.py)
- :speaking_head:{ .lg .middle } **Talk to Claude - Electron App**
{: data-tags="audio,electron"}
---
An Electron desktop application that uses FastRTC to enable voice conversations with Claude.
<video width=98% src="https://github.com/user-attachments/assets/df4628e4-ef0f-4a78-ab9b-1ed2374b1cae" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://github.com/swairshah/voice-agent)
[:octicons-code-16: Code](https://github.com/swairshah/voice-agent)
</div>


@@ -1,43 +1,47 @@
When deploying in cloud environments with firewalls (like Hugging Face Spaces, RunPod), your WebRTC connections may be blocked from making direct connections. In these cases, you need a TURN server to relay the audio/video traffic between users. This guide covers different options for setting up FastRTC to connect to a TURN server.
!!! tip
    The `rtc_configuration` parameter of the `Stream` class can also be passed to the [`WebRTC`](../userguide/gradio) component directly if you're building a standalone Gradio app.
## Community Server
Hugging Face graciously provides a TURN server for the community.
In order to use it, you need to first create a Hugging Face account by going to [huggingface.co](https://huggingface.co/).
Then navigate to this [space](https://huggingface.co/spaces/fastrtc/turn-server-login) and follow the instructions on the page. You just have to click the "Log in" button and then the "Sign Up" button.
![turn_login](https://github.com/user-attachments/assets/cefa8dec-487e-47d8-bb96-1a14a701f6e5)
Then you can use the `get_hf_turn_credentials` helper to get your credentials:
```python
from fastrtc import get_hf_turn_credentials, Stream

# Pass a valid access token for your Hugging Face account
# or set the HF_TOKEN environment variable
credentials = get_hf_turn_credentials(token=None)

Stream(
    handler=...,
    rtc_configuration=credentials,
    modality="audio",
    mode="send-receive"
)
```
!!! warning
    This is a shared resource so we make no latency/availability guarantees.
    For more robust options, see the Twilio, Cloudflare and self-hosting options below.
## Twilio API
An easy way to do this is to use a service like Twilio.

Create a **free** [account](https://login.twilio.com/u/signup) and then install the `twilio` package with pip (`pip install twilio`). You can then connect like so:
```python
from fastrtc import Stream
from twilio.rest import Client
import os
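# The middle of this example is elided in the diff above; a sketch of the
# usual Twilio flow (environment variable names are assumptions):
account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")
client = Client(account_sid, auth_token)
token = client.tokens.create()
# rtc_configuration is then built from token.ice_servers, as shown below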
@@ -53,13 +57,15 @@ rtc_configuration = {
"iceTransportPolicy": "relay",
}
Stream(
    handler=...,
    rtc_configuration=rtc_configuration,
    modality="audio",
    mode="send-receive"
)
```
!!! tip "Automatic Login"
!!! tip "Automatic login"
You can log in automatically with the `get_twilio_turn_credentials` helper
@@ -71,6 +77,50 @@ with gr.Blocks() as demo:
    ```python
    rtc_configuration = get_twilio_turn_credentials()
    ```
## Cloudflare Calls API
Cloudflare also offers a managed TURN server with [Cloudflare Calls](https://www.cloudflare.com/en-au/developer-platform/products/cloudflare-calls/).
Create a **free** [account](https://developers.cloudflare.com/fundamentals/setup/account/create-account/) and head to the [Calls section in your dashboard](https://dash.cloudflare.com/?to=/:account/calls).
Choose `Create -> TURN App`, give it a name (like `fastrtc-demo`), and then hit the Create button.
Take note of the Turn Token ID (often exported as `TURN_KEY_ID`) and API Token (exported as `TURN_KEY_API_TOKEN`).
You can then connect from the WebRTC component like so:
```python
from fastrtc import Stream
import requests
import os
turn_key_id = os.environ.get("TURN_KEY_ID")
turn_key_api_token = os.environ.get("TURN_KEY_API_TOKEN")
ttl = 86400 # Can modify TTL, here it's set to 24 hours
response = requests.post(
f"https://rtc.live.cloudflare.com/v1/turn/keys/{turn_key_id}/credentials/generate-ice-servers",
headers={
"Authorization": f"Bearer {turn_key_api_token}",
"Content-Type": "application/json",
},
json={"ttl": ttl},
)
if response.ok:
rtc_configuration = response.json()
else:
raise Exception(
f"Failed to get TURN credentials: {response.status_code} {response.text}"
)
stream = Stream(
handler=...,
rtc_configuration=rtc_configuration,
modality="audio",
mode="send-receive",
)
```
## Self Hosting
We have developed a script that can automatically deploy a TURN server to Amazon Web Services (AWS). You can follow the instructions [here](https://github.com/freddyaboulton/turn-server-deploy) or follow this guide.
@@ -84,7 +134,6 @@ Log into your AWS account and create an IAM user with the following permissions:
- [AWSCloudFormationFullAccess](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAWSCloudFormationFullAccess)
- [AmazonEC2FullAccess](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAmazonEC2FullAccess)
Create a key pair for this user and write down the "access key" and "secret access key". Then log into the AWS CLI with these credentials (`aws configure`).
Finally, create an EC2 key pair (replace `your-key-name` with the name you want to give it).
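With the standard AWS CLI this is along these lines (a sketch; the guide's exact command may differ):

```bash
# Create the EC2 key pair and save the private key locally
aws ec2 create-key-pair \
    --key-name your-key-name \
    --query 'KeyMaterial' \
    --output text > your-key-name.pem
chmod 400 your-key-name.pem
```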
@@ -102,7 +151,6 @@ Open the `parameters.json` file and fill in the correct values for all the param
- `TurnPassword`: The password needed to connect to the server.
- `InstanceType`: One of the following values `t3.micro`, `t3.small`, `t3.medium`, `c4.large`, `c5.large`.
Then run the deployment script:
```bash
@@ -132,24 +180,23 @@ The `server-info.json` file will have the server's public IP and public DNS:
```json
[
    {
        "OutputKey": "PublicIP",
        "OutputValue": "35.173.254.80",
        "Description": "Public IP address of the TURN server"
    },
    {
        "OutputKey": "PublicDNS",
        "OutputValue": "ec2-35-173-254-80.compute-1.amazonaws.com",
        "Description": "Public DNS name of the TURN server"
    }
]
```
Finally, you can connect to your EC2 server via the `rtc_configuration` argument:
```python
from fastrtc import Stream
rtc_configuration = {
"iceServers": [
{
@@ -159,7 +206,10 @@ rtc_configuration = {
},
]
}
Stream(
handler=...,
rtc_configuration=rtc_configuration,
modality="audio",
mode="send-receive"
)
```


@@ -1,34 +1,37 @@
## Demo does not work when deploying to the cloud
Make sure you are using a TURN server. See [deployment](../deployment).
## Recorded input audio sounds muffled during output audio playback
By default, the microphone is [configured](https://github.com/freddyaboulton/gradio-webrtc/blob/903f1f70bd586f638ad3b5a3940c7a8ec70ad1f5/backend/gradio_webrtc/webrtc.py#L575) to do echo cancellation.
This is what's causing the recorded audio to sound muffled when the streamed audio starts playing.
You can disable this via the `track_constraints` (see [Advanced Configuration](../advanced-configuration)) with the following code:
```python
stream = Stream(
    handler=...,  # your handler here
    track_constraints={
        "echoCancellation": False,
        "noiseSuppression": {"exact": True},
        "autoGainControl": {"exact": True},
        "sampleRate": {"ideal": 24000},
        "sampleSize": {"ideal": 16},
        "channelCount": {"exact": 1},
    },
    rtc_configuration=None,
    mode="send-receive",
    modality="audio",
)
```
## How to raise errors in the UI
You can raise `WebRTCError` in order for an error message to show up on the user's screen. This is similar to how `gr.Error` works.
!!! warning
    The `WebRTCError` class is only supported in the `WebRTC` component.
Here is a simple example:
```python
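# The body of this example is elided in the diff above; a minimal sketch
# of the idea (handler name and messages are illustrative):
import gradio as gr
from fastrtc import WebRTC, WebRTCError

def generation(audio):
    if audio is None:
        # Shows the message on the user's screen, like gr.Error
        raise WebRTCError("No audio received!")
    yield audio

with gr.Blocks() as demo:
    audio = WebRTC(mode="send-receive", modality="audio")
    audio.stream(
        generation, inputs=[audio], outputs=[audio]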
@@ -64,4 +67,4 @@ with gr.Blocks() as demo:
)
demo.launch()
```

docs/fastrtc_logo.png Normal file

Binary file not shown.


docs/fastrtc_logo_small.png Normal file

Binary file not shown.



@@ -0,0 +1,20 @@
<svg width="2016" height="703" viewBox="0 0 2016 703" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M766.251 523.254L795.698 486.91C811.975 501.82 830.116 509.276 850.121 509.276C863.54 509.276 874.598 507.287 883.296 503.311C891.993 499.335 896.342 493.868 896.342 486.91C896.342 475.106 886.713 469.204 867.454 469.204C862.235 469.204 854.469 469.826 844.157 471.068C833.844 472.311 826.078 472.932 820.859 472.932C788.802 472.932 772.774 461.439 772.774 438.452C772.774 431.867 775.445 425.406 780.788 419.069C786.131 412.732 792.344 408.073 799.426 405.09C776.688 390.305 765.319 369.368 765.319 342.281C765.319 320.91 773.147 303.266 788.802 289.35C804.458 275.31 823.717 268.289 846.579 268.289C864.472 268.289 879.444 271.644 891.496 278.354L909.761 257.107L942.005 286.368L919.826 302.583C927.529 314.262 931.381 328.054 931.381 343.959C931.381 366.697 924.423 384.9 910.507 398.567C896.715 412.111 879.258 418.882 858.135 418.882C854.78 418.882 850.307 418.572 844.716 417.951L837.074 416.832C836.204 416.832 832.85 418.199 827.01 420.933C821.294 423.542 818.436 426.275 818.436 429.133C818.436 434.103 822.723 436.588 831.297 436.588C835.148 436.588 841.609 435.656 850.68 433.793C859.75 431.929 867.516 430.997 873.977 430.997C919.329 430.997 942.005 449.2 942.005 485.606C942.005 505.734 932.934 521.514 914.794 532.945C896.653 544.501 874.785 550.279 849.189 550.279C818.623 550.279 790.977 541.27 766.251 523.254ZM812.845 342.468C812.845 354.272 816.076 363.777 822.537 370.983C829.122 378.066 837.944 381.607 849.002 381.607C860.061 381.607 868.572 378.128 874.536 371.17C880.5 364.212 883.482 354.644 883.482 342.468C883.482 332.403 880.252 323.892 873.791 316.934C867.454 309.976 859.191 306.497 849.002 306.497C838.317 306.497 829.619 309.852 822.91 316.561C816.2 323.271 812.845 331.906 812.845 342.468ZM1088.12 315.816C1079.8 310.349 1070.67 307.615 1060.73 307.615C1049.92 307.615 1040.29 312.523 1031.84 322.339C1023.51 332.155 1019.35 344.145 1019.35 358.31V472H972.757V272.39H1019.35V290.655C1032.4 275.993 1049.73 268.662 1071.35 268.662C1087.26 268.662 1099.43 271.085 1107.88 275.931L1088.12 315.816ZM1235.18 452.058C1230.95 459.016 1223.56 464.731 1213 469.204C1202.56 473.553 1191.63 475.728 1180.2 475.728C1158.7 475.728 1141.8 470.385 1129.5 459.699C1117.2 448.889 1111.05 433.606 1111.05 413.85C1111.05 390.739 1119.68 372.661 1136.96 359.614C1154.35 346.568 1179.01 340.045 1210.95 340.045C1216.41 340.045 1222.88 340.977 1230.33 342.84C1230.33 319.357 1215.48 307.615 1185.79 307.615C1168.27 307.615 1153.61 310.535 1141.8 316.375L1131.74 280.218C1147.77 272.514 1166.84 268.662 1188.96 268.662C1219.4 268.662 1241.7 275.62 1255.86 289.536C1270.03 303.328 1277.11 329.545 1277.11 368.188V410.868C1277.11 437.458 1282.45 454.17 1293.14 461.004C1289.29 467.713 1285 471.814 1280.28 473.305C1275.56 474.92 1270.15 475.728 1264.07 475.728C1257.36 475.728 1251.33 473.243 1245.99 468.272C1240.64 463.302 1237.04 457.897 1235.18 452.058ZM1230.7 378.066C1222.75 376.45 1216.79 375.643 1212.81 375.643C1176.03 375.643 1157.64 387.695 1157.64 411.8C1157.64 429.692 1168.02 438.638 1188.77 438.638C1216.73 438.638 1230.7 424.66 1230.7 396.703V378.066ZM1456.03 472V459.885C1452.18 464.11 1445.66 467.838 1436.46 471.068C1427.27 474.174 1417.76 475.728 1407.95 475.728C1380.12 475.728 1358.19 466.906 1342.16 449.262C1326.25 431.618 1318.3 407.016 1318.3 375.456C1318.3 343.897 1327.43 318.239 1345.7 298.483C1364.09 278.602 1387.07 268.662 1414.66 268.662C1429.82 268.662 1443.61 271.768 1456.03 277.981V198.025L1502.63 186.842V472H1456.03ZM1456.03 320.102C1446.09 312.15 1435.72 308.174 1424.91 308.174C1406.27 308.174 
1391.92 313.89 1381.86 325.321C1371.79 336.628 1366.76 352.905 1366.76 374.152C1366.76 415.652 1386.76 436.402 1426.77 436.402C1431.25 436.402 1436.71 435.097 1443.17 432.488C1449.76 429.754 1454.05 427.021 1456.03 424.287V320.102ZM1585.94 195.043C1593.39 195.043 1599.73 197.714 1604.95 203.057C1610.29 208.276 1612.96 214.613 1612.96 222.068C1612.96 229.523 1610.29 235.922 1604.95 241.265C1599.73 246.483 1593.39 249.092 1585.94 249.092C1578.48 249.092 1572.09 246.483 1566.74 241.265C1561.52 235.922 1558.91 229.523 1558.91 222.068C1558.91 214.613 1561.52 208.276 1566.74 203.057C1572.09 197.714 1578.48 195.043 1585.94 195.043ZM1561.9 472V310.597H1536.36V272.39H1609.05V472H1561.9ZM1650.43 371.729C1650.43 341.287 1659.19 316.499 1676.71 297.364C1694.35 278.229 1717.58 268.662 1746.41 268.662C1776.73 268.662 1800.27 277.857 1817.05 296.246C1833.82 314.635 1842.21 339.796 1842.21 371.729C1842.21 403.537 1833.63 428.823 1816.49 447.585C1799.47 466.347 1776.11 475.728 1746.41 475.728C1716.09 475.728 1692.49 466.284 1675.59 447.398C1658.81 428.388 1650.43 403.165 1650.43 371.729ZM1698.88 371.729C1698.88 415.714 1714.73 437.707 1746.41 437.707C1760.95 437.707 1772.44 431.991 1780.89 420.56C1789.46 409.129 1793.75 392.852 1793.75 371.729C1793.75 328.365 1777.97 306.683 1746.41 306.683C1731.87 306.683 1720.32 312.399 1711.74 323.83C1703.17 335.261 1698.88 351.227 1698.88 371.729Z" fill="#232D36"/>
<path d="M405.5 321L204 436.5L405.5 552L607 436.5L405.5 321Z" stroke="url(#paint0_linear_94_11)" stroke-width="59" stroke-linejoin="round"/>
<path d="M405.5 208L204 323.5L405.5 439L607 323.5L405.5 208Z" stroke="url(#paint1_linear_94_11)" stroke-width="59" stroke-linejoin="round"/>
<path d="M204 436L406 321" stroke="url(#paint2_linear_94_11)" stroke-width="59" stroke-linejoin="bevel"/>
<defs>
<linearGradient id="paint0_linear_94_11" x1="178" y1="436" x2="547.5" y2="436" gradientUnits="userSpaceOnUse">
<stop stop-color="#F9D100"/>
<stop offset="1" stop-color="#F97700"/>
</linearGradient>
<linearGradient id="paint1_linear_94_11" x1="631.5" y1="323" x2="261.5" y2="323" gradientUnits="userSpaceOnUse">
<stop stop-color="#F9D100"/>
<stop offset="1" stop-color="#F97700"/>
</linearGradient>
<linearGradient id="paint2_linear_94_11" x1="178" y1="436" x2="546.987" y2="433.811" gradientUnits="userSpaceOnUse">
<stop stop-color="#F9D100"/>
<stop offset="1" stop-color="#F97700"/>
</linearGradient>
</defs>
</svg>


docs/gradio-logo.svg Normal file

@@ -0,0 +1 @@
<svg width='576' height='576' viewBox='0 0 576 576' fill='none' xmlns='http://www.w3.org/2000/svg'><path d='M287.5 229L86 344.5L287.5 460L489 344.5L287.5 229Z' stroke='url(#paint0_linear_102_7)' stroke-width='59' stroke-linejoin='round'/><path d='M287.5 116L86 231.5L287.5 347L489 231.5L287.5 116Z' stroke='url(#paint1_linear_102_7)' stroke-width='59' stroke-linejoin='round'/><path d='M86 344L288 229' stroke='url(#paint2_linear_102_7)' stroke-width='59' stroke-linejoin='bevel'/><defs><linearGradient id='paint0_linear_102_7' x1='60' y1='344' x2='429.5' y2='344' gradientUnits='userSpaceOnUse'><stop stop-color='#F9D100'/><stop offset='1' stop-color='#F97700'/></linearGradient><linearGradient id='paint1_linear_102_7' x1='513.5' y1='231' x2='143.5' y2='231' gradientUnits='userSpaceOnUse'><stop stop-color='#F9D100'/><stop offset='1' stop-color='#F97700'/></linearGradient><linearGradient id='paint2_linear_102_7' x1='60' y1='344' x2='428.987' y2='341.811' gradientUnits='userSpaceOnUse'><stop stop-color='#F9D100'/><stop offset='1' stop-color='#F97700'/></linearGradient></defs></svg>


File diff suppressed because one or more lines are too long


docs/hf-logo.svg Normal file

File diff suppressed because one or more lines are too long



@@ -1,30 +1,192 @@
<div style='text-align: center; margin-bottom: 1rem; display: flex; justify-content: center; align-items: center;'>
<h1 style='color: white; margin: 0;'>FastRTC</h1>
<img src="/fastrtc_logo.png"
onerror="this.onerror=null; this.src='https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/fastrtc_logo.png';"
alt="FastRTC Logo"
style="height: 40px; margin-right: 10px;">
</div>
<div style="display: flex; flex-direction: row; justify-content: center">
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/gradio_webrtc">
<a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/fastrtc">
<a href="https://github.com/freddyaboulton/fastrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
</div>
<h3 style='text-align: center'>
The Real-Time Communication Library for Python.
</h3>
Turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.
<video src="https://github.com/user-attachments/assets/a297aa1e-ff42-448c-a58c-389b0a575d4d" controls></video>
## Installation
```bash
pip install fastrtc
```
To use built-in pause detection (see [ReplyOnPause](userguide/audio/#reply-on-pause)), speech-to-text (see [Speech To Text](userguide/audio/#speech-to-text)), and text to speech (see [Text To Speech](userguide/audio/#text-to-speech)), install the `vad`, `stt`, and `tts` extras:
```bash
pip install "fastrtc[vad, stt, tts]"
```
For stop word detection (see [ReplyOnStopWords](userguide/audio/#reply-on-stopwords)), install the `stopword` extra:
```bash
pip install "fastrtc[stopword]"
```
## Quickstart
Import the [Stream](userguide/streams) class and pass in a [handler](userguide/streams/#handlers).
The `Stream` has three main methods:
- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your already existing production system.
=== "Echo Audio"
```python
from fastrtc import Stream, ReplyOnPause
import numpy as np
def echo(audio: tuple[int, np.ndarray]):
# The function will be passed the audio until the user pauses
# Implement any iterator that yields audio
# See "LLM Voice Chat" for a more complete example
yield audio
stream = Stream(
handler=ReplyOnPause(echo),
modality="audio",
mode="send-receive",
)
```
=== "LLM Voice Chat"
```py
import os
from fastrtc import (ReplyOnPause, Stream, get_stt_model, get_tts_model)
from openai import OpenAI
sambanova_client = OpenAI(
api_key=os.getenv("SAMBANOVA_API_KEY"), base_url="https://api.sambanova.ai/v1"
)
stt_model = get_stt_model()
tts_model = get_tts_model()
def echo(audio):
prompt = stt_model.stt(audio)
response = sambanova_client.chat.completions.create(
model="Meta-Llama-3.2-3B-Instruct",
messages=[{"role": "user", "content": prompt}],
max_tokens=200,
)
prompt = response.choices[0].message.content
for audio_chunk in tts_model.stream_tts_sync(prompt):
yield audio_chunk
stream = Stream(ReplyOnPause(echo), modality="audio", mode="send-receive")
```
=== "Webcam Stream"
```python
from fastrtc import Stream
import numpy as np
def flip_vertically(image):
return np.flip(image, axis=0)
stream = Stream(
handler=flip_vertically,
modality="video",
mode="send-receive",
)
```
=== "Object Detection"
```python
from fastrtc import Stream
import gradio as gr
import cv2
from huggingface_hub import hf_hub_download
from .inference import YOLOv10
model_file = hf_hub_download(
repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)
# git clone https://huggingface.co/spaces/fastrtc/object-detection
# for YOLOv10 implementation
model = YOLOv10(model_file)
def detection(image, conf_threshold=0.3):
image = cv2.resize(image, (model.input_width, model.input_height))
new_image = model.detect_objects(image, conf_threshold)
return cv2.resize(new_image, (500, 500))
stream = Stream(
handler=detection,
modality="video",
mode="send-receive",
additional_inputs=[
gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
]
)
```
Run:
=== "UI"
```py
stream.ui.launch()
```
=== "Telephone"
```py
stream.fastphone()
```
=== "FastAPI"
```py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()
stream.mount(app)
# Optional: Add routes
@app.get("/")
async def _():
return HTMLResponse(content=open("index.html").read())
# uvicorn app:app --host 0.0.0.0 --port 8000
```
Learn more about the [Stream](userguide/streams) in the user guide.
## Key Features
:speaking_head:{ .lg } Automatic Voice Detection and Turn Taking built-in; only worry about the logic for responding to the user.
:material-laptop:{ .lg } Automatic UI - Use the `.ui.launch()` method to launch the built-in WebRTC-enabled Gradio UI.
:material-lightning-bolt:{ .lg } Automatic WebRTC Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend!
:simple-webstorm:{ .lg } WebSocket Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebSocket endpoint for your own frontend!
:telephone:{ .lg } Automatic Telephone Support - Use the `fastphone()` method of the stream to launch the application and get a free temporary phone number!
:robot:{ .lg } Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app so you can extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example on how to serve a custom JS frontend.
## Examples
See the [cookbook](/cookbook).

Follow and join our [organization](https://huggingface.co/fastrtc) on Hugging Face!
<div style="display: flex; flex-direction: row; justify-content: center; align-items: center; max-width: 600px; margin: 0 auto;">
<img style="display: block; height: 100px; margin-right: 20px;" src="/hf-logo-with-title.svg">
<img style="display: block; height: 100px;" src="/gradio-logo-with-title.svg">
</div>


@@ -0,0 +1,117 @@
<style>
.tag-button {
cursor: pointer;
opacity: 0.5;
transition: opacity 0.2s ease;
}
.tag-button > code {
color: var(--supernova);
}
.tag-button.active {
opacity: 1;
}
</style>
A collection of Speech-to-Text models ready to use with FastRTC. Click on the tags below to find the STT model you're looking for!
!!! tip "Note"
The model you want to use does not have to be in the gallery. This is just a collection of models with a common interface that are easy to "plug and play" into your FastRTC app. But You can use any model you want without having to do any special setup. Simply use it from your stream handler!
<div class="tag-buttons">
<button class="tag-button" data-tag="pytorch"><code>pytorch</code></button>
</div>
<script>
function filterCards() {
const activeButtons = document.querySelectorAll('.tag-button.active');
const selectedTags = Array.from(activeButtons).map(button => button.getAttribute('data-tag'));
const cards = document.querySelectorAll('.grid.cards > ul > li > p[data-tags]');
cards.forEach(card => {
const cardTags = card.getAttribute('data-tags').split(',');
const shouldShow = selectedTags.length === 0 || selectedTags.some(tag => cardTags.includes(tag));
card.parentElement.style.display = shouldShow ? 'block' : 'none';
});
}
document.querySelectorAll('.tag-button').forEach(button => {
button.addEventListener('click', () => {
button.classList.toggle('active');
filterCards();
});
});
</script>
<div class="grid cards" markdown>
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } **distil-whisper-FastRTC**
{: data-tags="pytorch"}
---
Description:
[Distil-whisper](https://github.com/huggingface/distil-whisper) from Hugging Face wrapped in a PyPI package for plug and play!
Install Instructions
```bash
pip install distil-whisper-fastrtc
```
Use it the same way you would the native FastRTC STT model!
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/Codeblockz/llm-voice-chat/)
[:octicons-code-16: Repository](https://github.com/Codeblockz/distil-whisper-FastRTC)
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } __Your STT Model__
{: data-tags="pytorch"}
---
Description
Install Instructions
Usage
[:octicons-arrow-right-24: Demo](Your demo here)
[:octicons-code-16: Repository](Code here)
</div>
## How to add your own STT model
1. Your model can be implemented in **any** framework you want but it must implement the `STTModel` protocol.
```python
from typing import Protocol

import numpy as np
from numpy.typing import NDArray

class STTModel(Protocol):
    def stt(self, audio: tuple[int, NDArray[np.int16 | np.float32]]) -> str: ...
```
* The `stt` method should take in an audio tuple `(sample_rate, audio_array)` and return a string of the transcribed text.
* The `audio` tuple should be of the form `(sample_rate, audio_array)` where `sample_rate` is the sample rate of the audio array and `audio_array` is a numpy array of the audio data. It can be of type `np.int16` or `np.float32`.
2. Once you have your model implemented, you can use it in your handler!
```python
import gradio as gr

from fastrtc import AdditionalOutputs, ReplyOnPause, Stream
from your_model import YourModel  # implement the STTModel protocol

model = YourModel()

def echo(audio):
    text = model.stt(audio)
    yield AdditionalOutputs(text)

stream = Stream(
    ReplyOnPause(echo),
    mode="send-receive",
    modality="audio",
    additional_outputs=[gr.Textbox(label="Transcription")],
    additional_outputs_handler=lambda old, new: old + new,
)
stream.ui.launch()
```
3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/speech_to_text_gallery.md) to add your model to the gallery! Ideally your model package should be pip-installable so others can try it out easily.


@@ -0,0 +1,63 @@
:root {
--white: #ffffff;
--galaxy: #393931;
--space: #2d2d2a;
--rock: #2d2d2a;
--cosmic: #ffdd00c5;
--radiate: #d6cec0;
--sun: #ffac2f;
--neutron: #F7F5F6;
--supernova: #ffdd00;
--asteroid: #d6cec0;
}
[data-md-color-scheme="fastrtc-dark"] {
--md-default-bg-color: var(--galaxy);
--md-default-fg-color: var(--white);
--md-default-fg-color--light: var(--white);
--md-default-fg-color--lighter: var(--white);
--md-primary-fg-color: var(--space);
--md-primary-bg-color: var(--white);
--md-accent-fg-color: var(--sun);
--md-typeset-color: var(--white);
--md-typeset-a-color: var(--supernova);
--md-typeset-mark-color: var(--sun);
--md-code-fg-color: var(--white);
--md-code-bg-color: var(--rock);
--md-code-hl-comment-color: var(--asteroid);
--md-code-hl-punctuation-color: var(--supernova);
--md-code-hl-generic-color: var(--supernova);
--md-code-hl-variable-color: var(--white);
--md-code-hl-string-color: var(--radiate);
--md-code-hl-keyword-color: var(--supernova);
--md-code-hl-operator-color: var(--supernova);
--md-code-hl-number-color: var(--radiate);
--md-code-hl-special-color: var(--supernova);
--md-code-hl-function-color: var(--neutron);
--md-code-hl-constant-color: var(--radiate);
--md-code-hl-name-color: var(--md-code-fg-color);
--md-typeset-del-color: hsla(6, 90%, 60%, 0.15);
--md-typeset-ins-color: hsla(150, 90%, 44%, 0.15);
--md-typeset-table-color: hsla(0, 0%, 100%, 0.12);
--md-typeset-table-color--light: hsla(0, 0%, 100%, 0.035);
}
[data-md-color-scheme="fastrtc-dark"] div.admonition {
color: var(--md-code-fg-color);
background-color: var(--galaxy);
}
[data-md-color-scheme="fastrtc-dark"] .grid.cards>ul>li {
border-color: var(--rock);
border-width: thick;
}
[data-md-color-scheme="fastrtc-dark"] .grid.cards>ul>li>hr {
border-color: var(--rock);
}

144
docs/turn_taking_gallery.md Normal file
View File

@@ -0,0 +1,144 @@
<style>
.tag-button {
cursor: pointer;
opacity: 0.5;
transition: opacity 0.2s ease;
}
.tag-button > code {
color: var(--supernova);
}
.tag-button.active {
opacity: 1;
}
</style>
A collection of Turn Taking Algorithms and Voice Activity Detection (VAD) models ready to use with FastRTC. Click on the tags below to find the model you're looking for!
<div class="tag-buttons">
<button class="tag-button" data-tag="vad-models"><code>VAD Model</code></button>
<button class="tag-button" data-tag="turn-taking-algorithm"><code>Turn-taking Algorithm</code></button>
</div>
<script>
function filterCards() {
const activeButtons = document.querySelectorAll('.tag-button.active');
const selectedTags = Array.from(activeButtons).map(button => button.getAttribute('data-tag'));
const cards = document.querySelectorAll('.grid.cards > ul > li > p[data-tags]');
cards.forEach(card => {
const cardTags = card.getAttribute('data-tags').split(',');
const shouldShow = selectedTags.length === 0 || selectedTags.some(tag => cardTags.includes(tag));
card.parentElement.style.display = shouldShow ? 'block' : 'none';
});
}
document.querySelectorAll('.tag-button').forEach(button => {
button.addEventListener('click', () => {
button.classList.toggle('active');
filterCards();
});
});
</script>
## Gallery
<div class="grid cards" markdown>
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } __Walkie Talkie__
{: data-tags="turn-taking-algorithm"}
---
Description
The user's turn ends when they finish a sentence with the word "over".
For example, "Hello, how are you? Over." would send end the user's turn and trigger the response.
This is intended as a simple reference implementation for how to implement a custom-turn-taking algorithm.
Install Instructions
```bash
pip install fastrtc-walkie-talkie
```
<video width=98% src="https://github.com/user-attachments/assets/d94c1b91-5430-48b0-801d-15e17bdad2a0" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://github.com/freddyaboulton/fastrtc-walkie-talkie/blob/main/scratch.py)
[:octicons-code-16: Repository](https://github.com/freddyaboulton/fastrtc-walkie-talkie/blob/main/src/fastrtc_walkie_talkie/__init__.py)
</div>
## What is this for?
By default, FastRTC uses the `ReplyOnPause` class to handle turn-taking. However, you may want to tweak this behavior to better fit your use case.
In this gallery, you can find a collection of turn-taking algorithms and VAD models that you can use to customize the turn-taking behavior to your needs. Each card contains install and usage instructions.
## How to add your own Turn-taking Algorithm or VAD model
### Turn-taking Algorithm
1. Typically you will want to subclass the `ReplyOnPause` class and override the `determine_pause` method.
```python
import numpy as np
from fastrtc.reply_on_pause import ReplyOnPause, AppState

class MyTurnTakingAlgorithm(ReplyOnPause):
    def determine_pause(self, audio: np.ndarray, sampling_rate: int, state: AppState) -> bool:
        return super().determine_pause(audio, sampling_rate, state)
```
2. Then package your class into a pip installable package and publish it to [pypi](https://pypi.org/).
3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery!
!!! tip "Example Implementation"
See the [Walkie Talkie](https://github.com/freddyaboulton/fastrtc-walkie-talkie/) package for an example implementation of a turn-taking algorithm.
### VAD Model
1. Your model can be implemented in **any** framework you want but it must implement the `PauseDetectionModel` protocol.
```python
ModelOptions: TypeAlias = Any

class PauseDetectionModel(Protocol):
    def vad(
        self,
        audio: tuple[int, NDArray[np.int16] | NDArray[np.float32]],
        options: ModelOptions,
    ) -> tuple[float, list[AudioChunk]]: ...

    def warmup(
        self,
    ) -> None: ...
```
* The `vad` method should take a numpy array of audio data and return a tuple of the form `(speech_duration, list[AudioChunk])`, where `speech_duration` is the duration of the human speech in the audio chunk and `AudioChunk` is a dictionary with `start` and `end` keys giving the start and end times of the human speech in the audio array.
* The `audio` tuple should be of the form `(sample_rate, audio_array)` where `sample_rate` is the sample rate of the audio array and `audio_array` is a numpy array of the audio data. It can be of type `np.int16` or `np.float32`.
* The `warmup` method is optional but recommended to warm up the model when the server starts.
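Purely as an illustration, here is a toy model satisfying the protocol that uses a naive energy threshold (the threshold value is arbitrary and this is not a real VAD; the chunks are plain dictionaries with `start`/`end` keys as described above):
```python
from typing import Any

import numpy as np
from numpy.typing import NDArray

ModelOptions = Any


class EnergyVAD:
    """Toy pause-detection model: treats loud-enough audio as one speech chunk."""

    def vad(
        self,
        audio: tuple[int, NDArray[np.int16] | NDArray[np.float32]],
        options: ModelOptions,
    ) -> tuple[float, list[dict]]:
        sample_rate, audio_array = audio
        samples = audio_array.astype(np.float32).squeeze()
        if audio_array.dtype == np.int16:
            samples = samples / 32768.0
        duration = samples.shape[-1] / sample_rate if samples.size else 0.0
        # Arbitrary energy threshold; a real model would return per-segment chunks.
        if samples.size and float(np.abs(samples).mean()) > 0.01:
            return duration, [{"start": 0.0, "end": duration}]
        return 0.0, []

    def warmup(self) -> None:
        # Nothing to load for this toy model.
        pass
```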
2. Once you have your model implemented, you can use it in the `ReplyOnPause` class by passing in the model and any options you need.
```python
from fastrtc import ReplyOnPause, Stream
from your_model import YourModel, YourModelOptions

def echo(audio):
    yield audio

model = YourModel() # implement the PauseDetectionModel protocol
reply_on_pause = ReplyOnPause(
    echo,
    model=model,
    options=YourModelOptions(),
)
stream = Stream(reply_on_pause, mode="send-receive", modality="audio")
stream.ui.launch()
```
3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
!!! tip "Package Naming Convention"
It is recommended to name your package `fastrtc-<package-name>` so developers can easily find it on [pypi](https://pypi.org/search/?q=fastrtc-).

461
docs/userguide/api.md Normal file
View File

@@ -0,0 +1,461 @@
# Connecting via API
Before continuing, select the `modality` and `mode` of your `Stream`, and whether you're connecting via `WebRTC` or `WebSocket`s.
<div class="config-selector">
<div class="select-group">
<label for="connection">Connection</label>
<select id="connection" onchange="updateDocs()">
<option value="webrtc">WebRTC</option>
<option value="websocket">WebSocket</option>
</select>
</div>
<div class="select-group">
<label for="modality">Modality</label>
<select id="modality" onchange="updateDocs()">
<option value="audio">Audio</option>
<option value="video">Video</option>
<option value="audio-video">Audio-Video</option>
</select>
</div>
<div class="select-group">
<label for="mode">Mode</label>
<select id="mode" onchange="updateDocs()">
<option value="send-receive">Send-Receive</option>
<option value="receive">Receive</option>
<option value="send">Send</option>
</select>
</div>
</div>
### Sample Code
<div id="docs"></div>
### Message Format
Over both WebRTC and WebSocket, the server can send messages of the following format:
```json
{
    "type": "send_input" | "fetch_output" | "stopword" | "error" | "warning" | "log",
    "data": string | object
}
```
- `send_input`: Send any input data for the handler to the server. See [`Additional Inputs`](#additional-inputs) for more details.
- `fetch_output`: An instance of [`AdditionalOutputs`](#additional-outputs) is available on the server and can be fetched by the client. See [`Additional Outputs`](#additional-outputs) for more details.
- `stopword`: The stopword has been detected. See [`ReplyOnStopWords`](../audio/#reply-on-stopwords) for more details.
- `error`: An error occurred. The `data` will be a string containing the error message.
- `warning`: A warning occurred. The `data` will be a string containing the warning message.
- `log`: A log message. The `data` will be a string containing the log message.
The `ReplyOnPause` handler can also send the following `log` messages.
```json
{
    "type": "log",
    "data": "pause_detected" | "response_starting"
}
```
!!! tip
When using WebRTC, the messages will be encoded as strings, so parse as JSON before using.
### Additional Inputs
When the `send_input` message is received, update the inputs of your handler however you like by using the `set_input` method of the `Stream` object.
A common pattern is to use a `POST` request to send the updated data. The first argument to the `set_input` method is the `webrtc_id` of the handler.
```python
from pydantic import BaseModel, Field

class InputData(BaseModel):
    webrtc_id: str
    conf_threshold: float = Field(ge=0, le=1)

@app.post("/input_hook")
async def _(data: InputData):
    stream.set_input(data.webrtc_id, data.conf_threshold)
```
The updated data will be passed to the handler on the **next** call.
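For example, a Python client could push a new value like this (a sketch assuming the server runs locally on port 8000 and uses the `/input_hook` route defined above):
```python
import requests

# webrtc_id must match the id sent in the WebRTC offer / WebSocket start message
requests.post(
    "http://localhost:8000/input_hook",
    json={"webrtc_id": "my-connection-id", "conf_threshold": 0.5},
)
```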
### Additional Outputs
The `fetch_output` message is sent to the client whenever an instance of [`AdditionalOutputs`](../streams/#additional-outputs) is available. You can access the latest output data by calling the `fetch_latest_output` method of the `Stream` object.
However, rather than fetching each output manually, a common pattern is to fetch the entire stream of output data by calling the `output_stream` method.
Here is an example:
```python
from fastapi.responses import StreamingResponse

@app.get("/updates")
async def stream_updates(webrtc_id: str):
    async def output_stream():
        async for output in stream.output_stream(webrtc_id):
            # Output is the AdditionalOutputs instance
            # Be sure to serialize it however you would like
            yield f"data: {output.args[0]}\n\n"

    return StreamingResponse(
        output_stream(),
        media_type="text/event-stream"
    )
```
### Handling Errors
When connecting via `WebRTC`, the server will respond to the `/webrtc/offer` route with a JSON response. If there are too many connections, the server will still respond with a 200 status code, but the body will indicate the failure:
```json
{
    "status": "failed",
    "meta": {
        "error": "concurrency_limit_reached",
        "limit": 10
    }
}
```
Over `WebSocket`, the server will send the same message before closing the connection.
!!! tip
The server sends a 200 status code because otherwise the gradio client would not be able to process the JSON response and display the error.
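Because the HTTP status is 200 even on failure, clients should inspect the response body. A minimal sketch (assuming a local server; the SDP value is a placeholder):
```python
import requests

offer = {"sdp": "...", "type": "offer", "webrtc_id": "my-connection-id"}  # your real SDP offer here
response = requests.post("http://localhost:8000/webrtc/offer", json=offer)
body = response.json()
if body.get("status") == "failed":
    # e.g. {"error": "concurrency_limit_reached", "limit": 10}
    print("Connection rejected:", body["meta"])
```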
<style>
.config-selector {
margin: 1em 0;
display: flex;
gap: 2em;
}
.select-group {
display: flex;
flex-direction: column;
gap: 0.5em;
}
.select-group label {
font-size: 0.8em;
font-weight: 600;
color: var(--md-default-fg-color--light);
}
.select-group select {
padding: 0.5em;
border: 1px solid var(--md-default-fg-color--lighter);
border-radius: 4px;
background-color: var(--md-code-bg-color);
color: var(--md-code-fg-color);
width: 150px;
font-size: 0.9em;
}
/* Style code blocks to match site theme */
.rendered-content pre {
background-color: var(--md-code-bg-color) !important;
color: var(--md-code-fg-color) !important;
padding: 1em;
border-radius: 4px;
}
.rendered-content code {
font-family: var(--md-code-font-family);
background-color: var(--md-code-bg-color) !important;
color: var(--md-code-fg-color) !important;
}
</style>
<script>
// doT.js
// 2011-2014, Laura Doktorova, https://github.com/olado/doT
// Licensed under the MIT license.
var doT = {
name: "doT",
version: "1.1.1",
templateSettings: {
evaluate: /\{\{([\s\S]+?(\}?)+)\}\}/g,
interpolate: /\{\{=([\s\S]+?)\}\}/g,
encode: /\{\{!([\s\S]+?)\}\}/g,
use: /\{\{#([\s\S]+?)\}\}/g,
useParams: /(^|[^\w$])def(?:\.|\[[\'\"])([\w$\.]+)(?:[\'\"]\])?\s*\:\s*([\w$\.]+|\"[^\"]+\"|\'[^\']+\'|\{[^\}]+\})/g,
define: /\{\{##\s*([\w\.$]+)\s*(\:|=)([\s\S]+?)#\}\}/g,
defineParams: /^\s*([\w$]+):([\s\S]+)/,
conditional: /\{\{\?(\?)?\s*([\s\S]*?)\s*\}\}/g,
iterate: /\{\{~\s*(?:\}\}|([\s\S]+?)\s*\:\s*([\w$]+)\s*(?:\:\s*([\w$]+))?\s*\}\})/g,
varname: "it",
strip: false,
append: true,
selfcontained: false,
doNotSkipEncoded: false
},
template: undefined, //fn, compile template
compile: undefined, //fn, for express
log: true
}, _globals;
doT.encodeHTMLSource = function (doNotSkipEncoded) {
var encodeHTMLRules = { "&": "&#38;", "<": "&#60;", ">": "&#62;", '"': "&#34;", "'": "&#39;", "/": "&#47;" },
matchHTML = doNotSkipEncoded ? /[&<>"'\/]/g : /&(?!#?\w+;)|<|>|"|'|\//g;
return function (code) {
return code ? code.toString().replace(matchHTML, function (m) { return encodeHTMLRules[m] || m; }) : "";
};
};
_globals = (function () { return this || (0, eval)("this"); }());
/* istanbul ignore else */
if (typeof module !== "undefined" && module.exports) {
module.exports = doT;
} else if (typeof define === "function" && define.amd) {
define(function () { return doT; });
} else {
_globals.doT = doT;
}
var startend = {
append: { start: "'+(", end: ")+'", startencode: "'+encodeHTML(" },
split: { start: "';out+=(", end: ");out+='", startencode: "';out+=encodeHTML(" }
}, skip = /$^/;
function resolveDefs(c, block, def) {
return ((typeof block === "string") ? block : block.toString())
.replace(c.define || skip, function (m, code, assign, value) {
if (code.indexOf("def.") === 0) {
code = code.substring(4);
}
if (!(code in def)) {
if (assign === ":") {
if (c.defineParams) value.replace(c.defineParams, function (m, param, v) {
def[code] = { arg: param, text: v };
});
if (!(code in def)) def[code] = value;
} else {
new Function("def", "def['" + code + "']=" + value)(def);
}
}
return "";
})
.replace(c.use || skip, function (m, code) {
if (c.useParams) code = code.replace(c.useParams, function (m, s, d, param) {
if (def[d] && def[d].arg && param) {
var rw = (d + ":" + param).replace(/'|\\/g, "_");
def.__exp = def.__exp || {};
def.__exp[rw] = def[d].text.replace(new RegExp("(^|[^\\w$])" + def[d].arg + "([^\\w$])", "g"), "$1" + param + "$2");
return s + "def.__exp['" + rw + "']";
}
});
var v = new Function("def", "return " + code)(def);
return v ? resolveDefs(c, v, def) : v;
});
}
function unescape(code) {
return code.replace(/\\('|\\)/g, "$1").replace(/[\r\t\n]/g, " ");
}
doT.template = function (tmpl, c, def) {
c = c || doT.templateSettings;
var cse = c.append ? startend.append : startend.split, needhtmlencode, sid = 0, indv,
str = (c.use || c.define) ? resolveDefs(c, tmpl, def || {}) : tmpl;
str = ("var out='" + (c.strip ? str.replace(/(^|\r|\n)\t* +| +\t*(\r|\n|$)/g, " ")
.replace(/\r|\n|\t|\/\*[\s\S]*?\*\//g, "") : str)
.replace(/'|\\/g, "\\$&")
.replace(c.interpolate || skip, function (m, code) {
return cse.start + unescape(code) + cse.end;
})
.replace(c.encode || skip, function (m, code) {
needhtmlencode = true;
return cse.startencode + unescape(code) + cse.end;
})
.replace(c.conditional || skip, function (m, elsecase, code) {
return elsecase ?
(code ? "';}else if(" + unescape(code) + "){out+='" : "';}else{out+='") :
(code ? "';if(" + unescape(code) + "){out+='" : "';}out+='");
})
.replace(c.iterate || skip, function (m, iterate, vname, iname) {
if (!iterate) return "';} } out+='";
sid += 1; indv = iname || "i" + sid; iterate = unescape(iterate);
return "';var arr" + sid + "=" + iterate + ";if(arr" + sid + "){var " + vname + "," + indv + "=-1,l" + sid + "=arr" + sid + ".length-1;while(" + indv + "<l" + sid + "){"
+ vname + "=arr" + sid + "[" + indv + "+=1];out+='";
})
.replace(c.evaluate || skip, function (m, code) {
return "';" + unescape(code) + "out+='";
})
+ "';return out;")
.replace(/\n/g, "\\n").replace(/\t/g, '\\t').replace(/\r/g, "\\r")
.replace(/(\s|;|\}|^|\{)out\+='';/g, '$1').replace(/\+''/g, "");
//.replace(/(\s|;|\}|^|\{)out\+=''\+/g,'$1out+=');
if (needhtmlencode) {
if (!c.selfcontained && _globals && !_globals._encodeHTML) _globals._encodeHTML = doT.encodeHTMLSource(c.doNotSkipEncoded);
str = "var encodeHTML = typeof _encodeHTML !== 'undefined' ? _encodeHTML : ("
+ doT.encodeHTMLSource.toString() + "(" + (c.doNotSkipEncoded || '') + "));"
+ str;
}
try {
return new Function(c.varname, str);
} catch (e) {
/* istanbul ignore else */
if (typeof console !== "undefined") console.log("Could not create a template function: " + str);
throw e;
}
};
doT.compile = function (tmpl, def) {
return doT.template(tmpl, null, def);
};
// WebRTC template
const webrtcTemplate = doT.template(`
To connect to the server, you need to create a new RTCPeerConnection object and call the \`setupWebRTC\` function below.
{{? it.mode === "send-receive" || it.mode === "receive" }}
This code snippet assumes there is an HTML element with an id of \`{{=it.modality}}_output_component_id\` where the output will be displayed. It should be {{? it.modality === "audio"}}a \`<audio>\`{{??}}a \`<video>\`{{?}} element.
{{?}}
\`\`\`javascript
// pass any rtc_configuration params here
const pc = new RTCPeerConnection();
{{? it.mode === "send-receive" || it.mode === "receive" }}
const {{=it.modality}}_output_component = document.getElementById("{{=it.modality}}_output_component_id");
{{?}}
async function setupWebRTC(peerConnection) {
{{? it.mode === "send-receive" || it.mode === "send" }}
// Get {{=it.modality}} stream from webcam
const stream = await navigator.mediaDevices.getUserMedia({
{{=it.modality}}: true,
})
{{?}}
{{? it.mode === "send-receive" || it.mode === "send" }}
// Send {{=it.modality}} stream to server
stream.getTracks().forEach(async (track) => {
const sender = pc.addTrack(track, stream);
})
{{?? it.mode === "receive" }}
// Receive {{=it.modality}} stream from server
pc.addTransceiver("{{=it.modality}}", { direction: "recvonly" })
{{?}}
{{? it.mode === "send-receive" || it.mode === "receive" }}
peerConnection.addEventListener("track", (evt) => {
if ({{=it.modality}}_output_component &&
{{=it.modality}}_output_component.srcObject !== evt.streams[0]) {
{{=it.modality}}_output_component.srcObject = evt.streams[0];
}
});
{{?}}
// Create data channel (needed!)
const dataChannel = peerConnection.createDataChannel("text");
// Create and send offer
const offer = await peerConnection.createOffer();
await peerConnection.setLocalDescription(offer);
// Send offer to server
const response = await fetch('/webrtc/offer', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
sdp: offer.sdp,
type: offer.type,
webrtc_id: Math.random().toString(36).substring(7)
})
});
// Handle server response
const serverResponse = await response.json();
await peerConnection.setRemoteDescription(serverResponse);
}
\`\`\`
`);
// WebSocket template
const wsTemplate = doT.template(`
{{? it.modality !== "audio" || it.mode !== "send-receive" }}
WebSocket connections are currently only supported for audio in send-receive mode.
{{??}}
To connect to the server via WebSocket, you'll need to establish a WebSocket connection and handle audio processing. The code below assumes there is an HTML audio element for output playback.
\`\`\`javascript
// Setup audio context and stream
const audioContext = new AudioContext();
const stream = await navigator.mediaDevices.getUserMedia({
audio: true
});
// Create WebSocket connection
const ws = new WebSocket(\`\${window.location.protocol === 'https:' ? 'wss:' : 'ws:'}//$\{window.location.host}/websocket/offer\`);
ws.onopen = () => {
// Send initial start message with unique ID
ws.send(JSON.stringify({
event: "start",
websocket_id: generateId() // Implement your own ID generator
}));
// Setup audio processing
const source = audioContext.createMediaStreamSource(stream);
const processor = audioContext.createScriptProcessor(2048, 1, 1);
source.connect(processor);
processor.connect(audioContext.destination);
processor.onaudioprocess = (e) => {
const inputData = e.inputBuffer.getChannelData(0);
const mulawData = convertToMulaw(inputData, audioContext.sampleRate);
const base64Audio = btoa(String.fromCharCode.apply(null, mulawData));
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify({
event: "media",
media: {
payload: base64Audio
}
}));
}
};
};
\`\`\`
{{?}}
`);
function updateDocs() {
// Get selected values
const modality = document.getElementById('modality').value;
const mode = document.getElementById('mode').value;
const connection = document.getElementById('connection').value;
// Context for templates
const context = {
modality: modality,
mode: mode,
additional_inputs: true,
additional_outputs: true
};
// Choose template based on connection type
const template = connection === 'webrtc' ? webrtcTemplate : wsTemplate;
// Render docs with syntax highlighting
const html = template(context);
const docsDiv = document.getElementById('docs');
docsDiv.innerHTML = marked.parse(html);
docsDiv.className = 'rendered-content';
// Initialize any code blocks that were just added
document.querySelectorAll('pre code').forEach((block) => {
hljs.highlightElement(block);
});
}
// Initial render
document.addEventListener('DOMContentLoaded', updateDocs);
</script>

View File

@@ -0,0 +1,27 @@
# Audio-Video Streaming
You can simultaneously stream audio and video using `AudioVideoStreamHandler` or `AsyncAudioVideoStreamHandler`.
They are identical to the audio `StreamHandlers` with the addition of `video_receive` and `video_emit` methods which take and return a `numpy` array, respectively.
Here is an example of the video handling functions for connecting with the Gemini multimodal API. In this case, we simply reflect the webcam feed back to the user but every second we'll send the latest webcam frame (and an additional image component) to the Gemini server.
Please see the "Gemini Audio Video Chat" example in the [cookbook](../../cookbook) for the complete code.
``` python title="Async Gemini Video Handling"
async def video_receive(self, frame: np.ndarray):
    """Send video frames to the server"""
    if self.session:
        # send image every 1 second
        # otherwise we flood the API
        if time.time() - self.last_frame_time > 1:
            self.last_frame_time = time.time()
            await self.session.send(encode_image(frame))
            if self.latest_args[2] is not None:
                await self.session.send(encode_image(self.latest_args[2]))
    self.video_queue.put_nowait(frame)

async def video_emit(self) -> VideoEmitType:
    """Return video frames to the client"""
    return await self.video_queue.get()
```

388
docs/userguide/audio.md Normal file
View File

@@ -0,0 +1,388 @@
## Reply On Pause
Typically, you want to run a python function whenever a user has stopped speaking. This can be done by wrapping a python generator with the `ReplyOnPause` class and passing it to the `handler` argument of the `Stream` object. The `ReplyOnPause` class will handle the voice detection and turn taking logic automatically!
=== "Code"
```python
import numpy as np
from fastrtc import ReplyOnPause, Stream

def response(audio: tuple[int, np.ndarray]): # (1)
    sample_rate, audio_array = audio
    # Generate response
    for audio_chunk in generate_response(sample_rate, audio_array):
        yield (sample_rate, audio_chunk) # (2)

stream = Stream(
    handler=ReplyOnPause(response),
    modality="audio",
    mode="send-receive"
)
```
1. The python generator will receive the **entire** audio up until the user stopped. It will be a tuple of the form (sampling_rate, numpy array of audio). The array will have a shape of (1, num_samples). You can also pass in additional input components.
2. The generator must yield audio chunks as a tuple of (sampling_rate, numpy audio array). Each numpy audio array must have a shape of (1, num_samples).
=== "Notes"
1. The python generator will receive the **entire** audio up until the user stopped. It will be a tuple of the form (sampling_rate, numpy array of audio). The array will have a shape of (1, num_samples). You can also pass in additional input components.
2. The generator must yield audio chunks as a tuple of (sampling_rate, numpy audio array). Each numpy audio array must have a shape of (1, num_samples).
!!! tip "Asynchronous"
You can also use an async generator with `ReplyOnPause`.
!!! tip "Parameters"
You can customize the voice detection parameters by passing in `algo_options` and `model_options` to the `ReplyOnPause` class.
```python
from fastrtc import AlgoOptions, SileroVadOptions

stream = Stream(
    handler=ReplyOnPause(
        response,
        algo_options=AlgoOptions(
            audio_chunk_duration=0.6,
            started_talking_threshold=0.2,
            speech_threshold=0.1
        ),
        model_options=SileroVadOptions(
            threshold=0.5,
            min_speech_duration_ms=250,
            min_silence_duration_ms=100
        )
    )
)
```
### Interruptions
By default, the `ReplyOnPause` handler will allow you to interrupt the response at any time by speaking again. If you do not want to allow interruption, you can set the `can_interrupt` parameter to `False`.
```python
from fastrtc import Stream, ReplyOnPause

stream = Stream(
    handler=ReplyOnPause(
        response,
        can_interrupt=True,
    )
)
```
<video width=98% src="https://github.com/user-attachments/assets/dba68dd7-7444-439b-b948-59171067e850" controls style="text-align: center"></video>
!!! tip "Muting Response Audio"
You can directly talk over the output audio and the interruption will still work. However, in these cases, the audio transcription may be incorrect. To prevent this, it's best practice to mute the output audio before talking over it.
### Startup Function
You can pass in a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating initial responses.
```python
import numpy as np
from fastrtc import get_tts_model, Stream, ReplyOnPause

tts_client = get_tts_model()

def detection(audio: tuple[int, np.ndarray]):
    # Implement any iterator that yields audio
    # See "LLM Voice Chat" for a more complete example
    yield audio

def startup():
    for chunk in tts_client.stream_tts_sync("Welcome to the echo audio demo!"):
        yield chunk

stream = Stream(
    handler=ReplyOnPause(detection, startup_fn=startup),
    modality="audio",
    mode="send-receive",
    ui_args={"title": "Echo Audio"},
)
```
<video width=98% src="https://github.com/user-attachments/assets/c6b1cb51-5790-4522-80c3-e24e58ef9f11" controls style="text-align: center"></video>
## Reply On Stopwords
You can configure your AI model to run whenever a set of "stop words" are detected, like "Hey Siri" or "computer", with the `ReplyOnStopWords` class.
The API is similar to `ReplyOnPause` with the addition of a `stop_words` parameter.
=== "Code"
``` py
import numpy as np
from fastrtc import Stream, ReplyOnStopWords

def response(audio: tuple[int, np.ndarray]):
    """This function must yield audio frames"""
    ...
    for numpy_array in generated_audio:
        yield (sampling_rate, numpy_array, "mono")

stream = Stream(
    handler=ReplyOnStopWords(response,
                             input_sample_rate=16000,
                             stop_words=["computer"]), # (1)
    modality="audio",
    mode="send-receive"
)
```
1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".
=== "Notes"
1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".
!!! tip "Extra Dependencies"
The `ReplyOnStopWords` class requires the `stopword` extra. Run `pip install fastrtc[stopword]` to install it.
!!! warning "English Only"
The `ReplyOnStopWords` class is currently only supported for English.
## Stream Handler
`ReplyOnPause` and `ReplyOnStopWords` are implementations of a `StreamHandler`. The `StreamHandler` is a low-level abstraction that gives you arbitrary control over how the input audio stream and output audio stream are created. The following example echoes the user's audio back.
=== "Code"
``` py
import numpy as np
from queue import Queue
from fastrtc import Stream, StreamHandler

class EchoHandler(StreamHandler):
    def __init__(self) -> None:
        super().__init__()
        self.queue = Queue()

    def receive(self, frame: tuple[int, np.ndarray]) -> None: # (1)
        self.queue.put(frame)

    def emit(self) -> None: # (2)
        return self.queue.get()

    def copy(self) -> StreamHandler:
        return EchoHandler()

    def shutdown(self) -> None: # (3)
        pass

    def start_up(self) -> None: # (4)
        pass

stream = Stream(
    handler=EchoHandler(),
    modality="audio",
    mode="send-receive"
)
```
1. The `StreamHandler` class implements three methods: `receive`, `emit` and `copy`. The `receive` method is called when a new frame is received from the client, and the `emit` method returns the next frame to send to the client. The `copy` method is called at the beginning of the stream to ensure each user has a unique stream handler.
2. The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return `None`. If you need to wait for a frame, use [`wait_for_item`](../../utils#wait_for_item) from the `utils` module.
3. The `shutdown` method is called when the stream is closed. It should be used to clean up any resources.
4. The `start_up` method is called when the stream is first created. It should be used to initialize any resources. See [Talk To OpenAI](https://huggingface.co/spaces/fastrtc/talk-to-openai-gradio) or [Talk To Gemini](https://huggingface.co/spaces/fastrtc/talk-to-gemini-gradio) for an example of a `StreamHandler` that uses the `start_up` method to connect to an API.
=== "Notes"
1. The `StreamHandler` class implements three methods: `receive`, `emit` and `copy`. The `receive` method is called when a new frame is received from the client, and the `emit` method returns the next frame to send to the client. The `copy` method is called at the beginning of the stream to ensure each user has a unique stream handler.
2. The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return `None`. If you need to wait for a frame, use [`wait_for_item`](../../utils#wait_for_item) from the `utils` module.
3. The `shutdown` method is called when the stream is closed. It should be used to clean up any resources.
4. The `start_up` method is called when the stream is first created. It should be used to initialize any resources. See [Talk To OpenAI](https://huggingface.co/spaces/fastrtc/talk-to-openai-gradio) or [Talk To Gemini](https://huggingface.co/spaces/fastrtc/talk-to-gemini-gradio) for an example of a `StreamHandler` that uses the `start_up` method to connect to an API.
!!! tip
See this [Talk To Gemini](https://huggingface.co/spaces/fastrtc/talk-to-gemini-gradio) for a complete example of a more complex stream handler.
!!! warning
The `emit` method should not block. If you need to wait for a frame, use [`wait_for_item`](../../utils#wait_for_item) from the `utils` module.
## Async Stream Handlers
It is also possible to create asynchronous stream handlers. This is very convenient for accessing async APIs from major LLM developers, like Google and OpenAI. The main difference is that `receive`, `emit`, and `start_up` are now defined with `async def`.
Here is a simple example of using `AsyncStreamHandler`:
=== "Code"
``` py
from fastrtc import AsyncStreamHandler, wait_for_item, Stream
import asyncio
import numpy as np

class AsyncEchoHandler(AsyncStreamHandler):
    """Simple Async Echo Handler"""

    def __init__(self) -> None:
        super().__init__(input_sample_rate=24000)
        self.queue = asyncio.Queue()

    async def receive(self, frame: tuple[int, np.ndarray]) -> None:
        await self.queue.put(frame)

    async def emit(self) -> None:
        return await wait_for_item(self.queue)

    def copy(self):
        return AsyncEchoHandler()

    async def shutdown(self):
        pass

    async def start_up(self) -> None:
        pass
```
!!! tip
See [Talk To Gemini](https://huggingface.co/spaces/fastrtc/talk-to-gemini), [Talk To Openai](https://huggingface.co/spaces/fastrtc/talk-to-openai) for complete examples of `AsyncStreamHandler`s.
## Text To Speech
You can use an on-device text to speech model if you have the `tts` extra installed.
Import the `get_tts_model` function and call it with the model name you want to use.
At the moment, the only model supported is `kokoro`.
The `get_tts_model` function returns an object with three methods:
- `tts`: Synchronous text to speech.
- `stream_tts_sync`: Synchronous text to speech streaming.
- `stream_tts`: Asynchronous text to speech streaming.
```python
from fastrtc import get_tts_model

model = get_tts_model(model="kokoro")

# Synchronous streaming
for audio in model.stream_tts_sync("Hello, world!"):
    yield audio

# Asynchronous streaming
async for audio in model.stream_tts("Hello, world!"):
    yield audio

# Non-streaming
audio = model.tts("Hello, world!")
```
!!! tip
You can customize the audio by passing in an instance of `KokoroTTSOptions` to the method.
See [here](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) for a list of available voices.
```python
from fastrtc import KokoroTTSOptions, get_tts_model

model = get_tts_model(model="kokoro")
options = KokoroTTSOptions(
    voice="af_heart",
    speed=1.0,
    lang="en-us"
)
audio = model.tts("Hello, world!", options=options)
```
## Speech To Text
You can use an on-device speech to text model if you have the `stt` or `stopword` extra installed.
Import the `get_stt_model` function and call it with the model name you want to use.
At the moment, the only models supported are `moonshine/base` and `moonshine/tiny`.
The `get_stt_model` function returns an object with the following method:
- `stt`: Synchronous speech to text.
```python
import numpy as np
from fastrtc import get_stt_model

model = get_stt_model(model="moonshine/base")
audio = (16000, np.random.randint(-32768, 32768, size=(1, 16000), dtype=np.int16))
text = model.stt(audio)
```
!!! tip "Example"
See [LLM Voice Chat](https://huggingface.co/spaces/fastrtc/llm-voice-chat) for an example of using the `stt` method in a `ReplyOnPause` handler.
!!! warning "English Only"
The `stt` model is currently only supported for English.
## Requesting Inputs
In `ReplyOnPause` and `ReplyOnStopWords`, any additional input data is automatically passed to your generator. For `StreamHandler`s, you must manually request the input data from the client.
You can do this by calling `await self.wait_for_args()` (for `AsyncStreamHandler`s) in either the `emit` or `receive` methods. For a `StreamHandler`, you can call `self.wait_for_args_sync()`.
You can access the values of the additional inputs via the `latest_args` property of the `StreamHandler`. `latest_args` is a list storing each of the values; the 0th index is the dummy string `__webrtc_value__`, and the additional inputs follow it.
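As an illustrative sketch (not taken from a shipped handler), an `AsyncStreamHandler` could request the additional inputs on the first received frame and then read them from `latest_args`:
```python
import asyncio

import gradio as gr
import numpy as np
from fastrtc import AsyncStreamHandler, Stream, wait_for_item


class ThresholdedEcho(AsyncStreamHandler):
    """Echoes audio back; reads a slider value from the additional inputs."""

    def __init__(self) -> None:
        super().__init__()
        self.queue: asyncio.Queue = asyncio.Queue()
        self.args_requested = False
        self.threshold = 0.0

    async def receive(self, frame: tuple[int, np.ndarray]) -> None:
        if not self.args_requested:
            self.args_requested = True
            await self.wait_for_args()  # ask the client for the additional inputs
            # latest_args[0] is the dummy "__webrtc_value__"; user inputs start at index 1
            self.threshold = self.latest_args[1]
        await self.queue.put(frame)

    async def emit(self):
        return await wait_for_item(self.queue)

    def copy(self):
        return ThresholdedEcho()

    async def shutdown(self) -> None:
        pass

    async def start_up(self) -> None:
        pass


stream = Stream(
    handler=ThresholdedEcho(),
    modality="audio",
    mode="send-receive",
    additional_inputs=[gr.Slider(minimum=0, maximum=1, step=0.01, value=0.5)],
)
```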
## Considerations for Telephone Use
In order for your handler to work over the phone, you must make sure that your handler is not expecting any additional input data besides the audio.
If you call `await self.wait_for_args()` your stream will wait forever for the additional input data.
The stream handlers have a `phone_mode` property that is set to `True` if the stream is running over the phone. You can use this property to determine if you should wait for additional input data.
```python
async def emit(self):
    if self.phone_mode:
        self.latest_args = [None]
    else:
        await self.wait_for_args()
```
### `ReplyOnPause` and telephone use
The generator you pass to `ReplyOnPause` must have default arguments for all arguments except audio.
If you yield `AdditionalOutputs`, they will be passed in as the input arguments to the generator the next time it is called.
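As a hedged sketch (the `transcribe`, `generate_reply`, and `synthesize` calls below are placeholders for your own STT/LLM/TTS code, not FastRTC APIs), such a generator could look like this:
```python
import numpy as np
from fastrtc import AdditionalOutputs, ReplyOnPause, Stream


def response(audio: tuple[int, np.ndarray], chatbot: list[dict] | None = None):
    # The default value makes the handler usable over the phone, where no
    # additional inputs are available on the first call.
    chatbot = chatbot or []
    user_text = transcribe(audio)                    # placeholder STT call
    reply_text = generate_reply(user_text, chatbot)  # placeholder LLM call
    chatbot.append({"role": "user", "content": user_text})
    chatbot.append({"role": "assistant", "content": reply_text})
    for audio_chunk in synthesize(reply_text):       # placeholder TTS call
        yield audio_chunk
    # Yielded AdditionalOutputs are passed back in as `chatbot` on the next call.
    yield AdditionalOutputs(chatbot)


stream = Stream(ReplyOnPause(response), modality="audio", mode="send-receive")
```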
!!! tip
See [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) for an example of a `ReplyOnPause` handler that is compatible with telephone usage. Notice how the input chatbot history is yielded as an `AdditionalOutput` on each invocation.
## Telephone Integration
You can integrate a `Stream` with a SIP provider like Twilio to set up your own phone number for your application.
### Setup Process
1. **Create a Twilio Account**: Sign up for a [Twilio](https://login.twilio.com/u/signup) account and purchase a phone number with voice capabilities. With a trial account, only the phone number you used during registration will be able to connect to your `Stream`.
2. **Mount Your Stream**: Add your `Stream` to a FastAPI app using `stream.mount(app)` and run the server.
3. **Configure Twilio Webhook**: Point your Twilio phone number to your webhook URL.
### Configuring Twilio
To configure your Twilio phone number:
1. In your Twilio dashboard, navigate to `Manage` → `TwiML Apps` in the left sidebar
2. Click `Create TwiML App`
3. Set the `Voice URL` to your FastAPI app's URL with `/telephone/incoming` appended (e.g., `https://your-app-url.com/telephone/incoming`)
![Twilio TwiML Apps Navigation](https://github.com/user-attachments/assets/9cd7b7de-d3e6-4fc8-9e50-ffe946d19c73)
![Twilio Voice URL Configuration](https://github.com/user-attachments/assets/b8490e59-9f2c-4bb4-af59-a304100a5eaf)
!!! tip "Local Development with Ngrok"
For local development, use [ngrok](https://ngrok.com/) to expose your local server:
```bash
ngrok http <port>
```
Then set your Twilio Voice URL to `https://your-ngrok-subdomain.ngrok.io/telephone/incoming`
### Code Example
Here's a simple example of setting up a Twilio endpoint:
```py
from fastrtc import Stream, ReplyOnPause
from fastapi import FastAPI

def echo(audio):
    yield audio

app = FastAPI()
stream = Stream(ReplyOnPause(echo), modality="audio", mode="send-receive")
stream.mount(app)

# run with `uvicorn main:app`
```

96
docs/userguide/gradio.md Normal file
View File

@@ -0,0 +1,96 @@
# Gradio Component
The automatic gradio UI is a great way to test your stream. However, you may want to customize the UI to your liking or simply build a standalone Gradio application.
## The WebRTC Component
To build a standalone Gradio application, you can use the `WebRTC` component and implement the `stream` event.
Similarly to the `Stream` object, you must set the `mode` and `modality` arguments and pass in a `handler`.
In the `stream` event, you pass in your handler as well as the input and output components.
``` py
import gradio as gr
import numpy as np
from fastrtc import WebRTC, ReplyOnPause

def response(audio: tuple[int, np.ndarray]):
    """This function must yield audio frames"""
    ...
    yield audio

with gr.Blocks() as demo:
    gr.HTML(
        """
        <h1 style='text-align: center'>
        Chat (Powered by WebRTC ⚡️)
        </h1>
        """
    )
    with gr.Column():
        with gr.Group():
            audio = WebRTC(
                mode="send-receive",
                modality="audio",
            )
            audio.stream(fn=ReplyOnPause(response),
                         inputs=[audio], outputs=[audio],
                         time_limit=60)

demo.launch()
```
## Additional Outputs
In order to modify other components from within the WebRTC stream, you must yield an instance of `AdditionalOutputs` and add an `on_additional_outputs` event to the `WebRTC` component.
This is common for displaying a multimodal text/audio conversation in a Chatbot UI.
=== "Code"
``` py title="Additional Outputs"
import gradio as gr
import numpy as np
from fastrtc import AdditionalOutputs, ReplyOnPause, WebRTC

def transcribe(audio: tuple[int, np.ndarray],
               transformers_convo: list[dict],
               gradio_convo: list[dict]):
    response = model.generate(**inputs, max_length=256)
    transformers_convo.append({"role": "assistant", "content": response})
    gradio_convo.append({"role": "assistant", "content": response})
    yield AdditionalOutputs(transformers_convo, gradio_convo) # (1)

with gr.Blocks() as demo:
    gr.HTML(
        """
        <h1 style='text-align: center'>
        Talk to Qwen2Audio (Powered by WebRTC ⚡️)
        </h1>
        """
    )
    transformers_convo = gr.State(value=[])
    with gr.Row():
        with gr.Column():
            audio = WebRTC(
                label="Stream",
                mode="send", # (2)
                modality="audio",
            )
        with gr.Column():
            transcript = gr.Chatbot(label="transcript", type="messages")
    audio.stream(ReplyOnPause(transcribe),
                 inputs=[audio, transformers_convo, transcript],
                 outputs=[audio], time_limit=90)
    audio.on_additional_outputs(lambda s, a: (s, a), # (3)
                                outputs=[transformers_convo, transcript],
                                queue=False, show_progress="hidden")

demo.launch()
```
1. Pass your data to `AdditionalOutputs` and yield it.
2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.
=== "Notes"
1. Pass your data to `AdditionalOutputs` and yield it.
2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.

236
docs/userguide/streams.md Normal file
View File

@@ -0,0 +1,236 @@
# Core Concepts
The core of FastRTC is the `Stream` object. It can be used to stream audio, video, or both.
Here's a simple example of creating a video stream that flips the video vertically. We'll use it to explain the core concepts of the `Stream` object. Click on the plus icons to get a link to the relevant section.
```python
from fastrtc import Stream
import gradio as gr
import numpy as np

def detection(image, slider):
    return np.flip(image, axis=0)

stream = Stream(
    handler=detection, # (1)
    modality="video", # (2)
    mode="send-receive", # (3)
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3) # (4)
    ],
    additional_outputs=None, # (5)
    additional_outputs_handler=None # (6)
)
```
1. See [Handlers](#handlers) for more information.
2. See [Modalities](#modalities) for more information.
3. See [Stream Modes](#stream-modes) for more information.
4. See [Additional Inputs](#additional-inputs) for more information.
5. See [Additional Outputs](#additional-outputs) for more information.
6. See [Additional Outputs Handler](#additional-outputs) for more information.
7. Mount the `Stream` on a `FastAPI` app with `stream.mount(app)` and you can add custom routes to it. See [Custom Routes and Frontend Integration](#custom-routes-and-frontend-integration) for more information.
8. See [Built-in Routes](#built-in-routes) for more information.
Run:
=== "UI"
```py
stream.ui.launch()
```
=== "FastAPI"
```py
from fastapi import FastAPI

app = FastAPI()
stream.mount(app)
# uvicorn app:app --host 0.0.0.0 --port 8000
```
### Stream Modes
FastRTC supports three streaming modes:
- `send-receive`: Bidirectional streaming (default)
- `send`: Client-to-server only
- `receive`: Server-to-client only
### Modalities
FastRTC supports three modalities:
- `video`: Video streaming
- `audio`: Audio streaming
- `audio-video`: Combined audio and video streaming
### Handlers
The `handler` argument is the main argument of the `Stream` object. A handler should be a function or a class that inherits from `StreamHandler` or `AsyncStreamHandler` depending on the modality and mode.
| Modality | send-receive | send | receive |
|----------|--------------|------|----------|
| video | Function that takes a video frame and returns a new video frame | Function that takes a video frame and returns a new frame | Generator yielding video frames |
| audio | `StreamHandler` or `AsyncStreamHandler` subclass | `StreamHandler` or `AsyncStreamHandler` subclass | Generator yielding audio frames |
| audio-video | `AudioVideoStreamHandler` or `AsyncAudioVideoStreamHandler` subclass | Not Supported Yet | Not Supported Yet |
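For example, a receive-only audio stream can be sketched with a plain generator (assuming frames follow the same `(sample_rate, (1, num_samples))` convention described in the [audio guide](../audio)):
```python
import numpy as np
from fastrtc import Stream


def tone_generator():
    """Yield one-second frames of a 440 Hz tone forever."""
    sample_rate = 24000
    t = np.linspace(0, 1, sample_rate, endpoint=False)
    tone = (0.2 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)
    while True:
        yield (sample_rate, tone.reshape(1, -1))


stream = Stream(handler=tone_generator, modality="audio", mode="receive")
```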
## Methods
The `Stream` has three main methods:
- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/). You can change the UI by setting the `ui` property of the `Stream` object. Also see the [Gradio guide](../gradio.md) for building Gradio apps with FastRTC.
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your already existing production system or for building a custom UI.
!!! warning
Websocket docs are only available for audio streams. Telephone docs are only available for audio streams in `send-receive` mode.
## Additional Inputs
You can add additional inputs to your stream using the `additional_inputs` argument. These inputs will be displayed in the generated Gradio UI and they will be passed to the handler as additional arguments.
!!! tip
For audio `StreamHandlers`, please read the special [note](../audio#requesting-inputs) on requesting inputs.
In the automatic gradio UI, these inputs will be the same python type corresponding to the Gradio component. In our case, we used a `gr.Slider` as the additional input, so it will be passed as a float. See the [Gradio documentation](https://www.gradio.app/docs/gradio) for a complete list of components and their corresponding types.
### Input Hooks
Outside of the gradio UI, you are free to update the inputs however you like by using the `set_input` method of the `Stream` object.
A common pattern is to use a `POST` request to send the updated data.
```python
from pydantic import BaseModel, Field
from fastapi import FastAPI

class InputData(BaseModel):
    webrtc_id: str
    conf_threshold: float = Field(ge=0, le=1)

app = FastAPI()
stream.mount(app)

@app.post("/input_hook")
async def _(data: InputData):
    stream.set_input(data.webrtc_id, data.conf_threshold)
```
The updated data will be passed to the handler on the **next** call.
## Additional Outputs
You can return additional output from the handler by returning an instance of `AdditionalOutputs` from the handler.
Let's modify our previous example to also return the number of detections in the frame.
```python
from fastrtc import Stream, AdditionalOutputs
import gradio as gr

def detection(image, conf_threshold=0.3):
    processed_frame, n_objects = process_frame(image, conf_threshold)
    return processed_frame, AdditionalOutputs(n_objects)

stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ],
    additional_outputs=[gr.Number()],
    additional_outputs_handler=lambda component, n_objects: n_objects
)
```
We added a `gr.Number()` to the additional outputs and we provided an `additional_outputs_handler`.
The `additional_outputs_handler` is **only** needed for the gradio UI. It is a function that takes the current state of the `component` and the instance of `AdditionalOutputs` and returns the updated state of the `component`. In our case, we want to update the `gr.Number()` with the number of detections.
!!! tip
Since WebRTC is very low latency, you probably don't want to return an additional output on every frame.
### Output Hooks
Outside of the gradio UI, you are free to access the output data however you like by calling the `output_stream` method of the `Stream` object.
A common pattern is to use a `GET` request to get a stream of the output data.
```python
from fastapi.responses import StreamingResponse

@app.get("/updates")
async def stream_updates(webrtc_id: str):
    async def output_stream():
        async for output in stream.output_stream(webrtc_id):
            # Output is the AdditionalOutputs instance
            # Be sure to serialize it however you would like
            yield f"data: {output.args[0]}\n\n"

    return StreamingResponse(
        output_stream(),
        media_type="text/event-stream"
    )
```
## Custom Routes and Frontend Integration
You can add custom routes for serving your own frontend or handling additional functionality once you have mounted the stream on a FastAPI app.
```python
from fastapi.responses import HTMLResponse
from fastapi import FastAPI
from fastrtc import Stream
stream = Stream(...)
app = FastAPI()
stream.mount(app)
# Serve a custom frontend
@app.get("/")
async def serve_frontend():
    return HTMLResponse(content=open("index.html").read())
```
## Telephone Integration
FastRTC provides built-in telephone support through the `fastphone()` method:
```python
# Launch with a temporary phone number
stream.fastphone(
    # Optional: If None, will use the default token on your machine or read from the HF_TOKEN environment variable
    token="your_hf_token",
    host="127.0.0.1",
    port=8000
)
```
This will print out a phone number along with a temporary code you can use to connect to the stream. You are limited to **10 minutes** of calls per calendar month.
!!! warning
See this [section](../audio#telephone-integration) on making sure your stream handler is compatible for telephone usage.
!!! tip
If you don't have a HF token, you can get one [here](https://huggingface.co/settings/tokens).
## Concurrency
1. You can limit the number of concurrent connections by setting the `concurrency_limit` argument.
2. You can limit the amount of time (in seconds) a connection can stay open by setting the `time_limit` argument.
```python
stream = Stream(
    handler=handler,
    concurrency_limit=10,
    time_limit=3600
)
```

57
docs/userguide/video.md Normal file
View File

@@ -0,0 +1,57 @@
# Video Streaming
## Input/Output Streaming
We already saw this example in the [Quickstart](../../#quickstart) and the [Core Concepts](../streams) section.
=== "Code"
``` py title="Input/Output Streaming"
from fastrtc import Stream
import gradio as gr

def detection(image, conf_threshold=0.3): # (1)
    processed_frame = process_frame(image, conf_threshold)
    return processed_frame # (2)

stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive", # (3)
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ],
)
```
1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
2. The function must return a numpy array. It can take arbitrary values from other components.
3. Set the `modality="video"` and `mode="send-receive"`
=== "Notes"
1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
2. The function must return a numpy array. It can take arbitrary values from other components.
3. Set the `modality="video"` and `mode="send-receive"`
## Server-to-Client Only
In this case, we stream from the server to the client, so we write a generator function that yields the next frame of the video (as a numpy array)
and set `mode="receive"` on the `Stream`.
=== "Code"
``` py title="Server-To-Client"
import cv2
from fastrtc import Stream

def generation():
    url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
    cap = cv2.VideoCapture(url)
    iterating = True
    while iterating:
        iterating, frame = cap.read()
        yield frame

stream = Stream(
    handler=generation,
    modality="video",
    mode="receive"
)
```

View File

@@ -0,0 +1,160 @@
# FastRTC Docs
## Connecting
To connect to the server, you need to create a new RTCPeerConnection object and call the `setupWebRTC` function below.
{% if mode in ["send-receive", "receive"] %}
This code snippet assumes there is an HTML element with an id of `{{ modality }}_output_component_id` where the output will be displayed. It should be {{ "a `<audio>`" if modality == "audio" else "a `<video>`"}} element.
{% endif %}
```js
// pass any rtc_configuration params here
const pc = new RTCPeerConnection();
{% if mode in ["send-receive", "receive"] %}
const {{modality}}_output_component = document.getElementById("{{modality}}_output_component_id");
{% endif %}
async function setupWebRTC(peerConnection) {
{%- if mode in ["send-receive", "send"] -%}
// Get {{modality}} stream from webcam
const stream = await navigator.mediaDevices.getUserMedia({
{{modality}}: true,
})
{%- endif -%}
{% if mode in ["send-receive", "send"] %}
// Send {{ modality }} stream to server
stream.getTracks().forEach(async (track) => {
const sender = pc.addTrack(track, stream);
})
{% elif mode == "receive" %}
// Receive {{ modality }} stream from server
pc.addTransceiver("{{ modality }}", { direction: "recvonly" })
{%- endif -%}
{% if mode in ["send-receive", "receive"] %}
peerConnection.addEventListener("track", (evt) => {
if ({{modality}}_output_component &&
{{modality}}_output_component.srcObject !== evt.streams[0]) {
{{modality}}_output_component.srcObject = evt.streams[0];
}
});
{% endif %}
// Create data channel (needed!)
const dataChannel = peerConnection.createDataChannel("text");
// Create and send offer
const offer = await peerConnection.createOffer();
await peerConnection.setLocalDescription(offer);
// Send offer to server
const response = await fetch('/webrtc/offer', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
sdp: offer.sdp,
type: offer.type,
webrtc_id: Math.random().toString(36).substring(7)
})
});
// Handle server response
const serverResponse = await response.json();
await peerConnection.setRemoteDescription(serverResponse);
}
```
{%if additional_inputs %}
## Sending Input Data
Your python handler can request additional data from the frontend by calling the `fetch_args()` method (see [here](#add docs)).
This will send a `send_input` message over the WebRTC data channel.
Upon receiving this message, you should trigger the `set_input` hook of your stream.
A simple way to do this is with a `POST` request.
```python
@stream.post("/input_hook")
def _(data: PydanticBody):
stream.set_inputs(data.webrtc_id, data.inputs)
```
And then in your client code:
```js
const data_channel = peerConnection.createDataChannel("text");
data_channel.onmessage = (event) => {
    const event_json = JSON.parse(event.data);
    if (event_json.type === "send_input") {
        fetch('/input_hook', {
            method: 'POST',
            headers: {
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(inputs)
        });
    }
};
```
The `set_inputs` hook will set the `latest_args` property of your stream to whatever the second argument is.
NOTE: It is completely up to you how you want to call the `set_inputs` hook.
Here we use a `POST` request but you can use a websocket or any other protocol.
{% endif %}
{% if additional_outputs %}
## Fetching Output Data
Your python handler can send additional data to the front end by returning or yielding `AdditionalOutputs(...)`. See the [docs](https://freddyaboulton.github.io/gradio-webrtc/user-guide/#additional-outputs).
Your front end can fetch these outputs by calling the `get_outputs` hook of the `Stream`.
Here is an example using `server-sent-events`:
```python
@stream.get("/outputs")
def _(webrtc_id: str):
    async def get_outputs():
        async for output in stream.output_stream(webrtc_id):
            # Serialize to a string prior to this step
            yield f"data: {output}\n\n"
    return StreamingResponse(get_outputs(), media_type="text/event-stream")
```
NOTE: It is completely up to you how you want to expose the outputs.
Here we use server-sent events, but you can use whatever protocol you want!
{% endif %}
## Stopping
You can stop the stream by calling the following function:
```js
function stop(pc) {
// close transceivers
if (pc.getTransceivers) {
pc.getTransceivers().forEach((transceiver) => {
if (transceiver.stop) {
transceiver.stop();
}
});
}
// close local audio / video
if (pc.getSenders()) {
pc.getSenders().forEach((sender) => {
if (sender.track && sender.track.stop) sender.track.stop();
});
}
// close peer connection
setTimeout(() => {
pc.close();
}, 500);
}
```

View File

@@ -0,0 +1,151 @@
# FastRTC WebSocket Docs
{% if modality != "audio" or mode != "send-receive" %}
WebSocket connections are currently only supported for audio in send-receive mode.
{% else %}
## Connecting
To connect to the server via WebSocket, you'll need to establish a WebSocket connection and handle audio processing. The code below assumes there is an HTML audio element for output playback.
```js
// Setup audio context and stream
const audioContext = new AudioContext();
const stream = await navigator.mediaDevices.getUserMedia({
audio: true
});
// Create WebSocket connection
const ws = new WebSocket(`${window.location.protocol === 'https:' ? 'wss:' : 'ws:'}//${window.location.host}/websocket/offer`);
ws.onopen = () => {
// Send initial start message with unique ID
ws.send(JSON.stringify({
event: "start",
websocket_id: generateId() // Implement your own ID generator
}));
// Setup audio processing
const source = audioContext.createMediaStreamSource(stream);
const processor = audioContext.createScriptProcessor(2048, 1, 1);
source.connect(processor);
processor.connect(audioContext.destination);
processor.onaudioprocess = (e) => {
const inputData = e.inputBuffer.getChannelData(0);
const mulawData = convertToMulaw(inputData, audioContext.sampleRate);
const base64Audio = btoa(String.fromCharCode.apply(null, mulawData));
if (ws.readyState === WebSocket.OPEN) {
ws.send(JSON.stringify({
event: "media",
media: {
payload: base64Audio
}
}));
}
};
};
// Handle incoming audio
const outputContext = new AudioContext({ sampleRate: 24000 });
let audioQueue = [];
let isPlaying = false;
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.event === "media") {
// Process received audio
const audioData = atob(data.media.payload);
const mulawData = new Uint8Array(audioData.length);
for (let i = 0; i < audioData.length; i++) {
mulawData[i] = audioData.charCodeAt(i);
}
// Convert mu-law to linear PCM
const linearData = alawmulaw.mulaw.decode(mulawData);
const audioBuffer = outputContext.createBuffer(1, linearData.length, 24000);
const channelData = audioBuffer.getChannelData(0);
for (let i = 0; i < linearData.length; i++) {
channelData[i] = linearData[i] / 32768.0;
}
audioQueue.push(audioBuffer);
if (!isPlaying) {
playNextBuffer();
}
}
};
function playNextBuffer() {
if (audioQueue.length === 0) {
isPlaying = false;
return;
}
isPlaying = true;
const bufferSource = outputContext.createBufferSource();
bufferSource.buffer = audioQueue.shift();
bufferSource.connect(outputContext.destination);
bufferSource.onended = playNextBuffer;
bufferSource.start();
}
```
Note: This implementation requires the `alawmulaw` library for audio encoding/decoding:
```html
<script src="https://cdn.jsdelivr.net/npm/alawmulaw"></script>
```
## Handling Input Requests
When the server requests additional input data, it will send a `send_input` message over the WebSocket. You should handle this by sending the data to your input hook:
```js
ws.onmessage = (event) => {
const data = JSON.parse(event.data);
// Handle send_input messages
if (data?.type === "send_input") {
fetch('/input_hook', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
webrtc_id: websocket_id, // Use the same ID from connection
inputs: your_input_data
})
});
}
// ... existing audio handling code ...
};
```
## Receiving Additional Outputs
To receive additional outputs from the server, you can use Server-Sent Events (SSE):
```js
const eventSource = new EventSource('/outputs?webrtc_id=' + websocket_id);
eventSource.addEventListener("output", (event) => {
const eventJson = JSON.parse(event.data);
// Handle the output data here
console.log("Received output:", eventJson);
});
```
## Stopping
To stop the WebSocket connection:
```js
function stop(ws) {
if (ws) {
ws.send(JSON.stringify({
event: "stop"
}));
ws.close();
}
}
```
{% endif %}

View File

@@ -6,6 +6,7 @@ Convert an audio tuple containing sample rate and numpy array data into bytes.
Useful for sending data to external APIs from `ReplyOnPause` handler.
Parameters
```
audio : tuple[int, np.ndarray]
A tuple containing:
@@ -14,12 +15,14 @@ audio : tuple[int, np.ndarray]
```
Returns
```
bytes
The audio data encoded as bytes, suitable for transmission or storage
```
Example
```python
>>> sample_rate = 44100
>>> audio_data = np.array([0.1, -0.2, 0.3]) # Example audio samples
@@ -32,23 +35,112 @@ Example
Save an audio tuple containing sample rate and numpy array data to a file.
Parameters
```
audio : tuple[int, np.ndarray]
A tuple containing:
- sample_rate (int): The audio sample rate in Hz
- data (np.ndarray): The audio data as a numpy array
```
Returns
```
str
The path to the saved audio file
```
Example
```python
>>> sample_rate = 44100
>>> audio_data = np.array([0.1, -0.2, 0.3]) # Example audio samples
>>> audio_tuple = (sample_rate, audio_data)
>>> file_path = audio_to_file(audio_tuple)
>>> print(f"Audio saved to: {file_path}")
```
## `aggregate_bytes_to_16bit`
Aggregate bytes to 16-bit audio samples.
This function takes an iterator of chunks and aggregates them into 16-bit audio samples.
It handles incomplete samples and combines them with the next chunk.
Parameters
```
chunks_iterator : Iterator[bytes]
An iterator of byte chunks to aggregate
```
Returns
```
Iterator[NDArray[np.int16]]
An iterator of 16-bit audio samples
```
Example
```python
>>> chunks_iterator = [b'\x00\x01', b'\x02\x03', b'\x04\x05']
>>> for chunk in aggregate_bytes_to_16bit(chunks_iterator):
>>> print(chunk)
```
## `async_aggregate_bytes_to_16bit`
Aggregate bytes to 16-bit audio samples asynchronously.
Parameters
```
chunks_iterator : Iterator[bytes]
An iterator of byte chunks to aggregate
```
Returns
```
Iterator[NDArray[np.int16]]
An iterator of 16-bit audio samples
```
Example
```python
>>> chunks_iterator = [b'\x00\x01', b'\x02\x03', b'\x04\x05']
>>> async for chunk in async_aggregate_bytes_to_16bit(chunks_iterator):
>>> print(chunk)
```
## `wait_for_item`
Wait for an item from an asyncio.Queue with a timeout.
Parameters
```
queue : asyncio.Queue
The queue to wait for an item from
timeout : float
The timeout in seconds
```
Returns
```
Any
The item from the queue or None if the timeout is reached
```
Example
```python
>>> queue = asyncio.Queue()
>>> queue.put_nowait(1)
>>> item = await wait_for_item(queue)
>>> print(item)
```