# FastRTC

The Real-Time Communication Library for Python. Turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.

## Installation

```bash
pip install fastrtc
```

To use built-in pause detection (see [ReplyOnPause](userguide/audio/#reply-on-pause)), speech-to-text (see [Speech To Text](userguide/audio/#speech-to-text)), and text-to-speech (see [Text To Speech](userguide/audio/#text-to-speech)), install the `vad`, `stt`, and `tts` extras:

```bash
pip install "fastrtc[vad, stt, tts]"
```

## Quickstart

Import the [Stream](userguide/streams) class and pass in a [handler](userguide/streams/#handlers). The `Stream` has three main methods:

- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your existing production system.

Echo Audio

```python
from fastrtc import Stream, ReplyOnPause
import numpy as np

def echo(audio: tuple[int, np.ndarray]):
    # The function will be passed the audio until the user pauses
    # Implement any iterator that yields audio
    # See "LLM Voice Chat" for a more complete example
    yield audio

stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
)
```

LLM Voice Chat

```py
import os

from fastrtc import ReplyOnPause, Stream, get_stt_model, get_tts_model
from openai import OpenAI

sambanova_client = OpenAI(
    api_key=os.getenv("SAMBANOVA_API_KEY"), base_url="https://api.sambanova.ai/v1"
)
stt_model = get_stt_model()
tts_model = get_tts_model()

def echo(audio):
    prompt = stt_model.stt(audio)
    response = sambanova_client.chat.completions.create(
        model="Meta-Llama-3.2-3B-Instruct",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=200,
    )
    prompt = response.choices[0].message.content
    for audio_chunk in tts_model.stream_tts_sync(prompt):
        yield audio_chunk

stream = Stream(ReplyOnPause(echo), modality="audio", mode="send-receive")
```

Flip Video

```python
from fastrtc import Stream
import numpy as np

def flip_vertically(image):
    return np.flip(image, axis=0)

stream = Stream(
    handler=flip_vertically,
    modality="video",
    mode="send-receive",
)
```

Object Detection

```python
from fastrtc import Stream
import gradio as gr
import cv2
from huggingface_hub import hf_hub_download
from .inference import YOLOv10

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)

# git clone https://huggingface.co/spaces/fastrtc/object-detection
# for the YOLOv10 implementation
model = YOLOv10(model_file)

def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return cv2.resize(new_image, (500, 500))

stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ],
)
```

Run:

Gradio UI

```py
stream.ui.launch()
```

Telephone

```py
stream.fastphone()
```

FastAPI

```py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()
stream.mount(app)

# Optional: Add routes
@app.get("/")
async def _():
    return HTMLResponse(content=open("index.html").read())

# uvicorn app:app --host 0.0.0.0 --port 8000
```

Learn more about the [Stream](userguide/streams) in the user guide.
## Key Features

- **Automatic UI** - Use the `.ui.launch()` method to launch the WebRTC-enabled built-in Gradio UI.
- **Automatic WebRTC Support** - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend!
- **WebSocket Support** - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebSocket endpoint for your own frontend!

## Examples

See the [cookbook](/cookbook). Follow and join our [organization](https://huggingface.co/fastrtc) on Hugging Face!

# Connecting via API

The details of connecting depend on the `modality` and `mode` of your `Stream` and on whether you are connecting over WebRTC or WebSockets.

### Message Format

Over both WebRTC and WebSocket, the server can send messages of the following format:

```json
{
  "type": "send_input" | "fetch_output" | "stopword" | "error" | "warning" | "log",
  "data": string | object
}
```

- `send_input`: Send any input data for the handler to the server. See [`Additional Inputs`](#additional-inputs) for more details.
- `fetch_output`: An instance of [`AdditionalOutputs`](#additional-outputs) is available on the server. See [`Additional Outputs`](#additional-outputs) for more details.
- `stopword`: The stopword has been detected. See [`ReplyOnStopWords`](../audio/#reply-on-stopwords) for more details.
- `error`: An error occurred. The `data` will be a string containing the error message.
- `warning`: A warning occurred. The `data` will be a string containing the warning message.
- `log`: A log message. The `data` will be a string containing the log message.

The `ReplyOnPause` handler can also send the following `log` messages:

```json
{
  "type": "log",
  "data": "pause_detected" | "response_starting" | "started_talking"
}
```

Tip: When using WebRTC, the messages will be encoded as strings, so parse them as JSON before using.

### Additional Inputs

When the `send_input` message is received, update the inputs of your handler however you like by using the `set_input` method of the `Stream` object. A common pattern is to use a `POST` request to send the updated data. The first argument to the `set_input` method is the `webrtc_id` of the handler.

```python
from pydantic import BaseModel, Field

class InputData(BaseModel):
    webrtc_id: str
    conf_threshold: float = Field(ge=0, le=1)

@app.post("/input_hook")
async def _(data: InputData):
    stream.set_input(data.webrtc_id, data.conf_threshold)
```

The updated data will be passed to the handler on the **next** call.

### Additional Outputs

The `fetch_output` message is sent to the client whenever an instance of [`AdditionalOutputs`](../streams/#additional-outputs) is available. You can access the latest output data by calling the `fetch_latest_output` method of the `Stream` object. However, rather than fetching each output manually, a common pattern is to fetch the entire stream of output data by calling the `output_stream` method. Here is an example:

```python
from fastapi.responses import StreamingResponse

@app.get("/updates")
async def stream_updates(webrtc_id: str):
    async def output_stream():
        async for output in stream.output_stream(webrtc_id):
            # Output is the AdditionalOutputs instance
            # Be sure to serialize it however you would like
            yield f"data: {output.args[0]}\n\n"

    return StreamingResponse(
        output_stream(),
        media_type="text/event-stream"
    )
```
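On the client side, one way to consume this server-sent-events route for quick testing is a small Python script. This is a minimal sketch: it assumes the FastAPI app above is running on `localhost:8000` and that `webrtc_id` holds the id of an active connection (both are placeholders, not part of FastRTC itself).

```python
import requests

webrtc_id = "your-connection-id"  # placeholder: the id used when the connection was established

# Read the server-sent events emitted by the /updates route defined above.
with requests.get(
    f"http://localhost:8000/updates?webrtc_id={webrtc_id}", stream=True
) as resp:
    for line in resp.iter_lines():
        if line.startswith(b"data: "):
            print("new output:", line[len(b"data: "):].decode())
```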
### Handling Errors

When connecting via `WebRTC`, the server will respond to the `/webrtc/offer` route with a JSON response. If there are too many active connections, the server responds with a 200 status code and an error payload:

```json
{
  "status": "failed",
  "meta": {
    "error": "concurrency_limit_reached",
    "limit": 10
  }
}
```

Over `WebSocket`, the server will send the same message before closing the connection.

Tip: The server sends a 200 status code because otherwise the Gradio client would not be able to process the JSON response and display the error.

# Audio-Video Streaming

You can simultaneously stream audio and video using `AudioVideoStreamHandler` or `AsyncAudioVideoStreamHandler`. They are identical to the audio `StreamHandler`s with the addition of `video_receive` and `video_emit` methods, which take and return a `numpy` array, respectively.

Here is an example of the video handling functions for connecting with the Gemini multimodal API. In this case, we simply reflect the webcam feed back to the user, but every second we send the latest webcam frame (and an additional image component) to the Gemini server. Please see the "Gemini Audio Video Chat" example in the [cookbook](../../cookbook) for the complete code.

Async Gemini Video Handling

```python
async def video_receive(self, frame: np.ndarray):
    """Send video frames to the server"""
    if self.session:
        # send image every 1 second
        # otherwise we flood the API
        if time.time() - self.last_frame_time > 1:
            self.last_frame_time = time.time()
            await self.session.send(encode_image(frame))
            if self.latest_args[2] is not None:
                await self.session.send(encode_image(self.latest_args[2]))
    self.video_queue.put_nowait(frame)

async def video_emit(self) -> VideoEmitType:
    """Return video frames to the client"""
    return await self.video_queue.get()
```

## Reply On Pause

Typically, you want to run a Python function whenever a user has stopped speaking. You can do this by wrapping a Python generator with the `ReplyOnPause` class and passing it to the `handler` argument of the `Stream` object. The `ReplyOnPause` class handles the voice detection and turn-taking logic automatically!

```python
from fastrtc import ReplyOnPause, Stream

def response(audio: tuple[int, np.ndarray]):  # (1)
    sample_rate, audio_array = audio
    # Generate response
    for audio_chunk in generate_response(sample_rate, audio_array):
        yield (sample_rate, audio_chunk)  # (2)

stream = Stream(
    handler=ReplyOnPause(response),
    modality="audio",
    mode="send-receive"
)
```

1. The Python generator will receive the **entire** audio up until the user stopped. It will be a tuple of the form (sampling_rate, numpy array of audio). The array will have a shape of (1, num_samples). You can also pass in additional input components.
2. The generator must yield audio chunks as a tuple of (sampling_rate, numpy audio array). Each numpy audio array must have a shape of (1, num_samples).

Asynchronous: You can also use an async generator with `ReplyOnPause`, as sketched below.
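For instance, a minimal sketch of an async generator handler might look like the following. It assumes the `stt` and `tts` extras are installed and simply echoes back a spoken transcription; the reply text is just an illustration.

```python
from fastrtc import ReplyOnPause, Stream, get_stt_model, get_tts_model
import numpy as np

stt_model = get_stt_model()
tts_model = get_tts_model()

async def response(audio: tuple[int, np.ndarray]):
    # Transcribe the user's speech, then stream back a spoken reply
    # using the asynchronous TTS API.
    text = stt_model.stt(audio)
    async for audio_chunk in tts_model.stream_tts(f"You said: {text}"):
        yield audio_chunk

stream = Stream(
    handler=ReplyOnPause(response),
    modality="audio",
    mode="send-receive",
)
```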
Parameters: You can customize the voice detection parameters by passing `algo_options` and `model_options` to the `ReplyOnPause` class.

```python
from fastrtc import AlgoOptions, SileroVadOptions

stream = Stream(
    handler=ReplyOnPause(
        response,
        algo_options=AlgoOptions(
            audio_chunk_duration=0.6,
            started_talking_threshold=0.2,
            speech_threshold=0.1
        ),
        model_options=SileroVadOptions(
            threshold=0.5,
            min_speech_duration_ms=250,
            min_silence_duration_ms=100
        )
    )
)
```

### Interruptions

By default, the `ReplyOnPause` handler lets you interrupt the response at any time by speaking again. If you do not want to allow interruption, set the `can_interrupt` parameter to `False`.

```python
from fastrtc import Stream, ReplyOnPause

stream = Stream(
    handler=ReplyOnPause(
        response,
        can_interrupt=True,  # the default; set to False to disable interruptions
    )
)
```

Muting Response Audio: You can talk directly over the output audio and the interruption will still work. However, in these cases the audio transcription may be incorrect. To prevent this, it is best practice to mute the output audio before talking over it.

### Startup Function

You can pass a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating initial responses.

```python
from fastrtc import get_tts_model, Stream, ReplyOnPause
import numpy as np

tts_client = get_tts_model()

def echo(audio: tuple[int, np.ndarray]):
    # Implement any iterator that yields audio
    # See "LLM Voice Chat" for a more complete example
    yield audio

def startup():
    for chunk in tts_client.stream_tts_sync("Welcome to the echo audio demo!"):
        yield chunk

stream = Stream(
    handler=ReplyOnPause(echo, startup_fn=startup),
    modality="audio",
    mode="send-receive",
    ui_args={"title": "Echo Audio"},
)
```

## Reply On Stopwords

You can configure your AI model to run whenever a set of "stop words" are detected, like "Hey Siri" or "computer", with the `ReplyOnStopWords` class. The API is similar to `ReplyOnPause` with the addition of a `stop_words` parameter.

```py
from fastrtc import Stream, ReplyOnStopWords

def response(audio: tuple[int, np.ndarray]):
    """This function must yield audio frames"""
    ...
    for numpy_array in generated_audio:
        yield (sampling_rate, numpy_array, "mono")

stream = Stream(
    handler=ReplyOnStopWords(response,
                             input_sample_rate=16000,
                             stop_words=["computer"]),  # (1)
    modality="audio",
    mode="send-receive"
)
```

1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".

Extra Dependencies: The `ReplyOnStopWords` class requires the `stopword` extra. Run `pip install "fastrtc[stopword]"` to install it.

English Only: The `ReplyOnStopWords` class is currently only supported for English.

## Stream Handler

`ReplyOnPause` and `ReplyOnStopWords` are implementations of a `StreamHandler`. The `StreamHandler` is a low-level abstraction that gives you arbitrary control over how the input audio stream and output audio stream are created. The following example echoes back the user audio.
```py
from queue import Queue

import numpy as np
from fastrtc import Stream, StreamHandler

class EchoHandler(StreamHandler):
    def __init__(self) -> None:
        super().__init__()
        self.queue = Queue()

    def receive(self, frame: tuple[int, np.ndarray]) -> None:  # (1)
        self.queue.put(frame)

    def emit(self):  # (2)
        return self.queue.get()

    def copy(self) -> StreamHandler:
        return EchoHandler()

    def shutdown(self) -> None:  # (3)
        pass

    def start_up(self) -> None:  # (4)
        pass

stream = Stream(
    handler=EchoHandler(),
    modality="audio",
    mode="send-receive"
)
```

1. The `StreamHandler` class must implement three methods: `receive`, `emit`, and `copy`. The `receive` method is called when a new frame is received from the client, and the `emit` method returns the next frame to send to the client. The `copy` method is called at the beginning of the stream to ensure each user has a unique stream handler.
2. The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return `None`. If you need to wait for a frame, use [`wait_for_item`](../../utils#wait_for_item) from the `utils` module.
3. The `shutdown` method is called when the stream is closed. It should be used to clean up any resources.
4. The `start_up` method is called when the stream is first created. It should be used to initialize any resources. See [Talk To OpenAI](https://huggingface.co/spaces/fastrtc/talk-to-openai-gradio) or [Talk To Gemini](https://huggingface.co/spaces/fastrtc/talk-to-gemini-gradio) for an example of a `StreamHandler` that uses the `start_up` method to connect to an API.

Tip: See [Talk To Gemini](https://huggingface.co/spaces/fastrtc/talk-to-gemini-gradio) for a complete example of a more complex stream handler.

Warning: The `emit` method should not block. If you need to wait for a frame, use [`wait_for_item`](../../utils#wait_for_item) from the `utils` module.

## Async Stream Handlers

It is also possible to create asynchronous stream handlers. This is very convenient for accessing async APIs from major LLM developers, like Google and OpenAI. The main difference is that `receive`, `emit`, and `start_up` are now defined with `async def`.
Here is a simple example of using `AsyncStreamHandler`:

```py
import asyncio

import numpy as np
from fastrtc import AsyncStreamHandler, Stream, wait_for_item

class AsyncEchoHandler(AsyncStreamHandler):
    """Simple async echo handler"""

    def __init__(self) -> None:
        super().__init__(input_sample_rate=24000)
        self.queue = asyncio.Queue()

    async def receive(self, frame: tuple[int, np.ndarray]) -> None:
        await self.queue.put(frame)

    async def emit(self):
        return await wait_for_item(self.queue)

    def copy(self):
        return AsyncEchoHandler()

    async def shutdown(self):
        pass

    async def start_up(self) -> None:
        pass

stream = Stream(
    handler=AsyncEchoHandler(),
    modality="audio",
    mode="send-receive"
)
```

Tip: See [Talk To Gemini](https://huggingface.co/spaces/fastrtc/talk-to-gemini) and [Talk To OpenAI](https://huggingface.co/spaces/fastrtc/talk-to-openai) for complete examples of `AsyncStreamHandler`s.

## Text To Speech

You can use an on-device text-to-speech model if you have the `tts` extra installed. Import the `get_tts_model` function and call it with the name of the model you want to use. At the moment, the only supported model is `kokoro`. The `get_tts_model` function returns an object with three methods:

- `tts`: Synchronous text to speech.
- `stream_tts_sync`: Synchronous text-to-speech streaming.
- `stream_tts`: Asynchronous text-to-speech streaming.

```python
from fastrtc import get_tts_model

model = get_tts_model(model="kokoro")

# Inside a synchronous generator handler:
for audio in model.stream_tts_sync("Hello, world!"):
    yield audio

# Inside an asynchronous generator handler:
async for audio in model.stream_tts("Hello, world!"):
    yield audio

# Generate the full audio in one call:
audio = model.tts("Hello, world!")
```

Tip: You can customize the audio by passing an instance of `KokoroTTSOptions` to the method. See [here](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) for a list of available voices.

```python
from fastrtc import KokoroTTSOptions, get_tts_model

model = get_tts_model(model="kokoro")

options = KokoroTTSOptions(
    voice="af_heart",
    speed=1.0,
    lang="en-us"
)
audio = model.tts("Hello, world!", options=options)
```

## Speech To Text

You can use an on-device speech-to-text model if you have the `stt` or `stopword` extra installed. Import the `get_stt_model` function and call it with the name of the model you want to use. At the moment, the only supported models are `moonshine/base` and `moonshine/tiny`. The `get_stt_model` function returns an object with the following method:

- `stt`: Synchronous speech to text.

```python
import numpy as np
from fastrtc import get_stt_model

model = get_stt_model(model="moonshine/base")
audio = (16000, np.random.randint(-32768, 32768, size=(1, 16000)))
text = model.stt(audio)
```

Example: See [LLM Voice Chat](https://huggingface.co/spaces/fastrtc/llm-voice-chat) for an example of using the `stt` method in a `ReplyOnPause` handler.

English Only: The `stt` models are currently only supported for English.

## Requesting Inputs

In `ReplyOnPause` and `ReplyOnStopWords`, any additional input data is automatically passed to your generator. For `StreamHandler`s, you must manually request the input data from the client. You can do this by calling `await self.wait_for_args()` (for `AsyncStreamHandler`s) in either the `emit` or `receive` method. For a synchronous `StreamHandler`, call `self.wait_for_args_sync()`.

Once the arguments have arrived, you can access them via the `latest_args` property of the `StreamHandler`. The `latest_args` is a list storing each of the values; the 0th index is the dummy string `__webrtc_value__`.
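Here is a minimal sketch of that pattern. The handler name and the single `threshold` input are hypothetical; the point is that `emit` waits for the additional input data once and then reads it from `latest_args`.

```python
import asyncio

import numpy as np
from fastrtc import AsyncStreamHandler, wait_for_item

class InputAwareHandler(AsyncStreamHandler):
    """Hypothetical handler that requests additional inputs before streaming."""

    def __init__(self) -> None:
        super().__init__()
        self.queue = asyncio.Queue()
        self.threshold = None

    async def receive(self, frame: tuple[int, np.ndarray]) -> None:
        await self.queue.put(frame)

    async def emit(self):
        if self.threshold is None:
            # Block until the client sends the additional input data.
            await self.wait_for_args()
            # latest_args[0] is the dummy string "__webrtc_value__";
            # the remaining entries are the additional input values.
            self.threshold = self.latest_args[1]
        return await wait_for_item(self.queue)

    def copy(self):
        return InputAwareHandler()

    async def shutdown(self) -> None:
        pass

    async def start_up(self) -> None:
        pass
```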
## Considerations for Telephone Use

In order for your handler to work over the phone, you must make sure that your handler is not expecting any additional input data besides the audio. If you call `await self.wait_for_args()`, your stream will wait forever for the additional input data.

The stream handlers have a `phone_mode` property that is set to `True` if the stream is running over the phone. You can use this property to determine whether you should wait for additional input data.

```python
async def emit(self):
    if self.phone_mode:
        self.latest_args = [None]
    else:
        await self.wait_for_args()
```

### `ReplyOnPause` and telephone use

The generator you pass to `ReplyOnPause` must have default arguments for all arguments except the audio. If you yield `AdditionalOutputs`, they will be passed in as the input arguments to the generator the next time it is called.

Tip: See [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) for an example of a `ReplyOnPause` handler that is compatible with telephone usage. Notice how the input chatbot history is yielded as an `AdditionalOutput` on each invocation.
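A minimal sketch of this pattern is shown below. It assumes the `stt` and `tts` extras are installed and uses a placeholder reply in place of a real LLM call; the `chatbot` argument has a default so the handler also works over the phone, where no additional inputs are sent.

```python
import numpy as np
from fastrtc import AdditionalOutputs, ReplyOnPause, Stream, get_stt_model, get_tts_model

stt_model = get_stt_model()
tts_model = get_tts_model()

def response(audio: tuple[int, np.ndarray], chatbot: list[dict] | None = None):
    # Default argument so the handler still works over the phone,
    # where no additional input data is sent.
    chatbot = chatbot or []
    text = stt_model.stt(audio)
    chatbot.append({"role": "user", "content": text})
    reply = f"You said: {text}"  # placeholder for a real LLM call
    chatbot.append({"role": "assistant", "content": reply})
    for audio_chunk in tts_model.stream_tts_sync(reply):
        yield audio_chunk
    # The yielded AdditionalOutputs are passed back in as the input
    # arguments on the next invocation.
    yield AdditionalOutputs(chatbot)

stream = Stream(ReplyOnPause(response), modality="audio", mode="send-receive")
```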
## Telephone Integration

You can integrate a `Stream` with a SIP provider like Twilio to set up your own phone number for your application.

### Setup Process

1. **Create a Twilio Account**: Sign up for a [Twilio](https://login.twilio.com/u/signup) account and purchase a phone number with voice capabilities. With a trial account, only the phone number you used during registration will be able to connect to your `Stream`.
2. **Mount Your Stream**: Add your `Stream` to a FastAPI app using `stream.mount(app)` and run the server.
3. **Configure Twilio Webhook**: Point your Twilio phone number to your webhook URL.

### Configuring Twilio

To configure your Twilio phone number:

1. In your Twilio dashboard, navigate to `Manage` → `TwiML Apps` in the left sidebar
2. Click `Create TwiML App`
3. Set the `Voice URL` to your FastAPI app's URL with `/telephone/incoming` appended (e.g., `https://your-app-url.com/telephone/incoming`)

Local Development with Ngrok: For local development, use [ngrok](https://ngrok.com/) to expose your local server:

```bash
ngrok http 8000
```

Then set your Twilio Voice URL to `https://your-ngrok-subdomain.ngrok.io/telephone/incoming`.

### Code Example

Here's a simple example of setting up a Twilio endpoint:

```py
from fastapi import FastAPI
from fastrtc import ReplyOnPause, Stream

def echo(audio):
    yield audio

app = FastAPI()

stream = Stream(ReplyOnPause(echo), modality="audio", mode="send-receive")
stream.mount(app)

# run with `uvicorn main:app`
```

### Outbound calls with Twilio

Here's a simple example of calling someone using the twilio-python module:

```py
import os

import gradio as gr
from fastapi import FastAPI, Request, WebSocket
from fastapi.responses import HTMLResponse
from twilio.rest import Client

app = FastAPI()

@app.post("/call")
async def start_call(req: Request):
    body = await req.json()
    from_no = body.get("from")
    to_no = body.get("to")
    account_sid = os.getenv("TWILIO_ACCOUNT_SID")
    auth_token = os.getenv("TWILIO_AUTH_TOKEN")
    client = Client(account_sid, auth_token)
    # Use the public URL of your application
    # here we're using ngrok to expose an app
    # running locally
    call = client.calls.create(
        to=to_no,
        from_=from_no,
        url="https://[your_ngrok_subdomain].ngrok.app/incoming-call"
    )
    return {"sid": f"{call.sid}"}

@app.api_route("/incoming-call", methods=["GET", "POST"])
async def handle_incoming_call(req: Request):
    from twilio.twiml.voice_response import Connect, VoiceResponse
    response = VoiceResponse()
    response.say("Connecting to AI assistant")
    connect = Connect()
    connect.stream(url=f'wss://{req.url.hostname}/media-stream')
    response.append(connect)
    return HTMLResponse(content=str(response), media_type="application/xml")

@app.websocket("/media-stream")
async def handle_media_stream(websocket: WebSocket):
    # stream is a FastRTC stream defined elsewhere
    await stream.telephone_handler(websocket)

app = gr.mount_gradio_app(app, stream.ui, path="/")
```

# Gradio Component

The automatic Gradio UI is a great way to test your stream. However, you may want to customize the UI to your liking or simply build a standalone Gradio application.

## The WebRTC Component

To build a standalone Gradio application, you can use the `WebRTC` component and implement the `stream` event. Similarly to the `Stream` object, you must set the `mode` and `modality` arguments and pass in a `handler`. In the `stream` event, you pass in your handler as well as the input and output components.

```py
import gradio as gr
import numpy as np
from fastrtc import ReplyOnPause, WebRTC

def response(audio: tuple[int, np.ndarray]):
    """This function must yield audio frames"""
    ...
    yield audio

with gr.Blocks() as demo:
    gr.HTML(
        """

        <h1 style='text-align: center'>Chat (Powered by WebRTC ⚡️)</h1>

""" ) with gr.Column(): with gr.Group(): audio = WebRTC( mode="send-receive", modality="audio", ) audio.stream(fn=ReplyOnPause(response), inputs=[audio], outputs=[audio], time_limit=60) demo.launch() ``` ## Additional Outputs In order to modify other components from within the WebRTC stream, you must yield an instance of `AdditionalOutputs` and add an `on_additional_outputs` event to the `WebRTC` component. This is common for displaying a multimodal text/audio conversation in a Chatbot UI. Additional Outputs ```py from fastrtc import AdditionalOutputs, WebRTC def transcribe(audio: tuple[int, np.ndarray], transformers_convo: list[dict], gradio_convo: list[dict]): response = model.generate(**inputs, max_length=256) transformers_convo.append({"role": "assistant", "content": response}) gradio_convo.append({"role": "assistant", "content": response}) yield AdditionalOutputs(transformers_convo, gradio_convo) # (1) with gr.Blocks() as demo: gr.HTML( """

        <h1 style='text-align: center'>Talk to Qwen2Audio (Powered by WebRTC ⚡️)</h1>

""" ) transformers_convo = gr.State(value=[]) with gr.Row(): with gr.Column(): audio = WebRTC( label="Stream", mode="send", # (2) modality="audio", ) with gr.Column(): transcript = gr.Chatbot(label="transcript", type="messages") audio.stream(ReplyOnPause(transcribe), inputs=[audio, transformers_convo, transcript], outputs=[audio], time_limit=90) audio.on_additional_outputs(lambda s,a: (s,a), # (3) outputs=[transformers_convo, transcript], queue=False, show_progress="hidden") demo.launch() ``` 1. Pass your data to `AdditionalOutputs` and yield it. 1. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`. 1. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update. 1. Pass your data to `AdditionalOutputs` and yield it. 1. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`. 1. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update. ## Integrated Textbox For audio usecases, you may want to allow your users to type or speak. You can set the `variant="textbox"` argument in the WebRTC component to place a Textbox with a microphone input in the UI. See the `Integrated Textbox` demo in the cookbook or in the `demo` directory of the github repository. `py webrtc = WebRTC( modality="audio", mode="send-receive", variant="textbox", )` Stream Class To use the "textbox" variant via the `Stream` class, set it in the `UIArgs` class and pass it to the stream via the `ui_args` parameter. [](https://github.com/user-attachments/assets/35c982a1-4a58-4947-af89-7ff287070ef5) # Core Concepts The core of FastRTC is the `Stream` object. It can be used to stream audio, video, or both. Here's a simple example of creating a video stream that flips the video vertically. We'll use it to explain the core concepts of the `Stream` object. Click on the plus icons to get a link to the relevant section. ```python from fastrtc import Stream import gradio as gr import numpy as np def detection(image, slider): return np.flip(image, axis=0) stream = Stream( handler=detection, # (1) modality="video", # (2) mode="send-receive", # (3) additional_inputs=[ gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3) # (4) ], additional_outputs=None, # (5) additional_outputs_handler=None # (6) ) ``` 1. See [Handlers](#handlers) for more information. 1. See [Modalities](#modalities) for more information. 1. See [Stream Modes](#stream-modes) for more information. 1. See [Additional Inputs](#additional-inputs) for more information. 1. See [Additional Outputs](#additional-outputs) for more information. 1. See [Additional Outputs Handler](#additional-outputs) for more information. 1. Mount the `Stream` on a `FastAPI` app with `stream.mount(app)` and you can add custom routes to it. See [Custom Routes and Frontend Integration](#custom-routes-and-frontend-integration) for more information. 1. See [Built-in Routes](#built-in-routes) for more information. 
Run:

```py
stream.ui.launch()
```

```py
from fastapi import FastAPI

app = FastAPI()
stream.mount(app)

# uvicorn app:app --host 0.0.0.0 --port 8000
```

### Stream Modes

FastRTC supports three streaming modes:

- `send-receive`: Bidirectional streaming (default)
- `send`: Client-to-server only
- `receive`: Server-to-client only

### Modalities

FastRTC supports three modalities:

- `video`: Video streaming
- `audio`: Audio streaming
- `audio-video`: Combined audio and video streaming

### Handlers

The `handler` argument is the main argument of the `Stream` object. Depending on the modality and mode, the handler should be a function, a generator, or a class that inherits from `StreamHandler` or `AsyncStreamHandler`.

| Modality | send-receive | send | receive |
| --- | --- | --- | --- |
| video | Function that takes a video frame and returns a new video frame | Function that takes a video frame and returns a new frame | Generator that yields video frames |
| audio | `StreamHandler` or `AsyncStreamHandler` subclass | `StreamHandler` or `AsyncStreamHandler` subclass | Generator yielding audio frames |
| audio-video | `AudioVideoStreamHandler` or `AsyncAudioVideoStreamHandler` subclass | Not Supported Yet | Not Supported Yet |

## Methods

The `Stream` has three main methods:

- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/). You can change the UI by setting the `ui` property of the `Stream` object. Also see the [Gradio guide](../gradio.md) for building Gradio apps with FastRTC.
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your existing production system or for building a custom UI.

Warning: WebSocket support is only available for audio streams. Telephone support is only available for audio streams in `send-receive` mode.

## Additional Inputs

You can add additional inputs to your stream using the `additional_inputs` argument. These inputs will be displayed in the generated Gradio UI, and they will be passed to the handler as additional arguments.

Tip: For audio `StreamHandler`s, please read the special [note](../audio#requesting-inputs) on requesting inputs.

In the automatic Gradio UI, these inputs will be of the Python type corresponding to the Gradio component. In our case, we used a `gr.Slider` as the additional input, so it will be passed as a float. See the [Gradio documentation](https://www.gradio.app/docs/gradio) for a complete list of components and their corresponding types.

### Input Hooks

Outside of the Gradio UI, you are free to update the inputs however you like by using the `set_input` method of the `Stream` object. A common pattern is to use a `POST` request to send the updated data.

```python
from pydantic import BaseModel, Field
from fastapi import FastAPI

class InputData(BaseModel):
    webrtc_id: str
    conf_threshold: float = Field(ge=0, le=1)

app = FastAPI()
stream.mount(app)

@app.post("/input_hook")
async def _(data: InputData):
    stream.set_input(data.webrtc_id, data.conf_threshold)
```

The updated data will be passed to the handler on the **next** call.

## Additional Outputs

You can return additional output from the handler by returning an instance of `AdditionalOutputs` from the handler. Let's modify our previous example to also return the number of detections in the frame.
```python
from fastrtc import Stream, AdditionalOutputs
import gradio as gr

def detection(image, conf_threshold=0.3):
    processed_frame, n_objects = process_frame(image, conf_threshold)
    return processed_frame, AdditionalOutputs(n_objects)

stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ],
    additional_outputs=[gr.Number()],
    additional_outputs_handler=lambda component, n_objects: n_objects
)
```

We added a `gr.Number()` to the additional outputs and we provided an `additional_outputs_handler`. The `additional_outputs_handler` is **only** needed for the Gradio UI. It is a function that takes the current state of the `component` and the instance of `AdditionalOutputs` and returns the updated state of the `component`. In our case, we want to update the `gr.Number()` with the number of detections.

Tip: Since WebRTC is very low latency, you probably don't want to return an additional output on each frame.

### Output Hooks

Outside of the Gradio UI, you are free to access the output data however you like by calling the `output_stream` method of the `Stream` object. A common pattern is to use a `GET` request to get a stream of the output data.

```python
from fastapi.responses import StreamingResponse

@app.get("/updates")
async def stream_updates(webrtc_id: str):
    async def output_stream():
        async for output in stream.output_stream(webrtc_id):
            # Output is the AdditionalOutputs instance
            # Be sure to serialize it however you would like
            yield f"data: {output.args[0]}\n\n"

    return StreamingResponse(
        output_stream(),
        media_type="text/event-stream"
    )
```

## Custom Routes and Frontend Integration

You can add custom routes for serving your own frontend or handling additional functionality once you have mounted the stream on a FastAPI app.

```python
from fastapi import FastAPI
from fastapi.responses import HTMLResponse
from fastrtc import Stream

stream = Stream(...)

app = FastAPI()
stream.mount(app)

# Serve a custom frontend
@app.get("/")
async def serve_frontend():
    return HTMLResponse(content=open("index.html").read())
```

## Telephone Integration

FastRTC provides built-in telephone support through the `fastphone()` method:

```python
# Launch with a temporary phone number
stream.fastphone(
    # Optional: If None, will use the default token on your machine or read from the HF_TOKEN environment variable
    token="your_hf_token",
    host="127.0.0.1",
    port=8000
)
```

This will print out a phone number along with a temporary code you can use to connect to the stream. You are limited to **10 minutes** of calls per calendar month.

Warning: See this [section](../audio#telephone-integration) on making sure your stream handler is compatible with telephone usage.

Tip: If you don't have an HF token, you can get one [here](https://huggingface.co/settings/tokens).

## Concurrency

1. You can limit the number of concurrent connections by setting the `concurrency_limit` argument.
2. You can limit the amount of time (in seconds) a connection can stay open by setting the `time_limit` argument.

```python
stream = Stream(
    handler=handler,
    concurrency_limit=10,
    time_limit=3600
)
```

# Video Streaming

## Input/Output Streaming

We already saw this example in the [Quickstart](../../#quickstart) and the [Core Concepts](../streams) section.
Input/Output Streaming

```py
from fastrtc import Stream
import gradio as gr

def detection(image, conf_threshold=0.3):  # (1)
    processed_frame = process_frame(image, conf_threshold)
    return processed_frame  # (2)

stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",  # (3)
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ],
)
```

1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
2. The function must return a numpy array. It can take arbitrary values from other components.
3. Set `modality="video"` and `mode="send-receive"`.

## Server-to-Client Only

In this case, we stream from the server to the client, so we write a generator function that yields the next frame from the video (as a numpy array) and set `mode="receive"`.

Server-To-Client

```py
from fastrtc import Stream
import cv2

def generation():
    url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
    cap = cv2.VideoCapture(url)
    iterating = True
    while iterating:
        iterating, frame = cap.read()
        yield frame

stream = Stream(
    handler=generation,
    modality="video",
    mode="receive"
)
```

## Skipping Frames

If your event handler is not quite real-time yet, the output feed will look very laggy. To fix this, you can set the `skip_frames` parameter to `True`. This will skip the frames that arrive while the event handler is still running.

Skipping Frames

```py
import time

import numpy as np
from fastrtc import Stream, VideoStreamHandler

def process_image(image):
    # Simulating 200ms processing time per frame; input arrives faster (30 FPS).
    time.sleep(0.2)
    return np.flip(image, axis=0)

stream = Stream(
    handler=VideoStreamHandler(process_image, skip_frames=True),
    modality="video",
    mode="send-receive",
)
stream.ui.launch()
```

## Setting the Output Frame Rate

You can set the output frame rate by setting the `fps` parameter in the `VideoStreamHandler`.

Setting the Output Frame Rate

```py
import time

import cv2
import gradio as gr
from fastrtc import AdditionalOutputs, Stream, VideoStreamHandler

def generation():
    url = "https://github.com/user-attachments/assets/9636dc97-4fee-46bb-abb8-b92e69c08c71"
    cap = cv2.VideoCapture(url)
    iterating = True

    # FPS calculation variables
    frame_count = 0
    start_time = time.time()
    fps = 0

    while iterating:
        iterating, frame = cap.read()

        # Calculate and report FPS
        frame_count += 1
        elapsed_time = time.time() - start_time
        if elapsed_time >= 1.0:  # Update FPS every second
            fps = frame_count / elapsed_time
            yield frame, AdditionalOutputs(fps)
            frame_count = 0
            start_time = time.time()
        else:
            yield frame

stream = Stream(
    handler=VideoStreamHandler(generation, fps=60),
    modality="video",
    mode="receive",
    additional_outputs=[gr.Number(label="FPS")],
    additional_outputs_handler=lambda prev, cur: cur,
)
stream.ui.launch()
```