# FastRTC


The Real-Time Communication Library for Python.

Turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.

## Installation

```bash
pip install fastrtc
```

To use built-in pause detection (see [ReplyOnPause](userguide/audio/#reply-on-pause)) and text-to-speech (see [Text To Speech](userguide/audio/#text-to-speech)), install the `vad` and `tts` extras:

```bash
pip install "fastrtc[vad, tts]"
```

## Quickstart

Import the [Stream](userguide/streams) class and pass in a [handler](userguide/streams/#handlers). The `Stream` has three main methods:

- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your existing production system.

=== "Echo Audio"

    ```python
    from fastrtc import Stream, ReplyOnPause
    import numpy as np

    def echo(audio: tuple[int, np.ndarray]):
        # The function is passed the audio captured up to the user's pause.
        # Implement any iterator that yields audio.
        # See "LLM Voice Chat" for a more complete example.
        yield audio

    stream = Stream(
        handler=ReplyOnPause(echo),
        modality="audio",
        mode="send-receive",
    )
    ```

=== "LLM Voice Chat"

    ```py
    from fastrtc import (
        ReplyOnPause,
        AdditionalOutputs,
        Stream,
        audio_to_bytes,
        aggregate_bytes_to_16bit,
    )
    import gradio as gr
    import numpy as np
    from groq import Groq
    import anthropic
    from elevenlabs import ElevenLabs

    groq_client = Groq()
    claude_client = anthropic.Anthropic()
    tts_client = ElevenLabs()

    # See "Talk to Claude" in the Cookbook for an example of how to keep
    # track of the chat history.
    def response(
        audio: tuple[int, np.ndarray],
    ):
        # Transcribe the user's speech with Whisper on Groq.
        prompt = groq_client.audio.transcriptions.create(
            file=("audio-file.mp3", audio_to_bytes(audio)),
            model="whisper-large-v3-turbo",
            response_format="verbose_json",
        ).text
        # Generate a reply with Claude.
        response = claude_client.messages.create(
            model="claude-3-5-haiku-20241022",
            max_tokens=512,
            messages=[{"role": "user", "content": prompt}],
        )
        response_text = " ".join(
            block.text
            for block in response.content
            if getattr(block, "type", None) == "text"
        )
        # Stream the reply back as 24 kHz, 16-bit PCM audio.
        iterator = tts_client.text_to_speech.convert_as_stream(
            text=response_text,
            voice_id="JBFqnCBsd6RMkjVDRZzb",
            model_id="eleven_multilingual_v2",
            output_format="pcm_24000",
        )
        for chunk in aggregate_bytes_to_16bit(iterator):
            audio_array = np.frombuffer(chunk, dtype=np.int16).reshape(1, -1)
            yield (24000, audio_array)

    stream = Stream(
        modality="audio",
        mode="send-receive",
        handler=ReplyOnPause(response),
    )
    ```

=== "Webcam Stream"

    ```python
    from fastrtc import Stream
    import numpy as np

    def flip_vertically(image):
        return np.flip(image, axis=0)

    stream = Stream(
        handler=flip_vertically,
        modality="video",
        mode="send-receive",
    )
    ```

=== "Object Detection"

    ```python
    from fastrtc import Stream
    import gradio as gr
    import cv2
    from huggingface_hub import hf_hub_download
    from .inference import YOLOv10

    model_file = hf_hub_download(
        repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
    )

    # git clone https://huggingface.co/spaces/fastrtc/object-detection
    # for the YOLOv10 implementation
    model = YOLOv10(model_file)

    def detection(image, conf_threshold=0.3):
        image = cv2.resize(image, (model.input_width, model.input_height))
        new_image = model.detect_objects(image, conf_threshold)
        return cv2.resize(new_image, (500, 500))

    stream = Stream(
        handler=detection,
        modality="video",
        mode="send-receive",
        additional_inputs=[
            gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
        ],
    )
    ```
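All of the audio examples above follow the same convention: audio comes in and goes out as `(sample_rate, numpy_array)` tuples of 16-bit PCM. As a point of reference, here is a minimal, self-contained sketch of that contract; the `(1, N)` int16 shape and the chunked yielding are assumptions inferred from the LLM Voice Chat example rather than a spec.

```python
import numpy as np
from fastrtc import Stream, ReplyOnPause

def beep(audio: tuple[int, np.ndarray]):
    # Reply to every detected pause with half a second of 440 Hz tone.
    sample_rate, _ = audio
    t = np.linspace(0, 0.5, int(0.5 * sample_rate), endpoint=False)
    tone = (0.2 * 32767 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
    # Yield the audio in chunks as (sample_rate, (1, N) int16 array) tuples.
    for chunk in np.array_split(tone, 10):
        yield (sample_rate, chunk.reshape(1, -1))

stream = Stream(handler=ReplyOnPause(beep), modality="audio", mode="send-receive")
```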
Run:

=== "UI"

    ```py
    stream.ui.launch()
    ```

=== "Telephone"

    ```py
    stream.fastphone()
    ```

=== "FastAPI"

    ```py
    from fastapi import FastAPI
    from fastapi.responses import HTMLResponse

    app = FastAPI()
    stream.mount(app)

    # Optional: add routes
    @app.get("/")
    async def _():
        return HTMLResponse(content=open("index.html").read())

    # uvicorn app:app --host 0.0.0.0 --port 8000
    ```

Learn more about the [Stream](userguide/streams) in the user guide.

## Key Features

- :speaking_head:{ .lg } Automatic voice detection and turn-taking built in, so you only need to worry about the logic for responding to the user.
- :material-laptop:{ .lg } Automatic UI - use the `.ui.launch()` method to launch the WebRTC-enabled built-in Gradio UI.
- :material-lightning-bolt:{ .lg } Automatic WebRTC support - use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend!
- :simple-webstorm:{ .lg } WebSocket support - use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebSocket endpoint for your own frontend!
- :telephone:{ .lg } Automatic telephone support - use the `.fastphone()` method of the stream to launch the application and get a free temporary phone number!
- :robot:{ .lg } Completely customizable backend - a `Stream` can easily be mounted on a FastAPI app, so you can extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example of how to serve a custom JS frontend.

## Examples

See the [cookbook](/cookbook).
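One practical note on the telephone feature: `.fastphone()` needs your Hugging Face token to provision the temporary number. A minimal sketch, assuming the token is resolved the way `huggingface_hub` normally does it, via the standard `HF_TOKEN` environment variable (see the user guide for the exact mechanism):

```py
import os

# Assumption: credentials are picked up via huggingface_hub's usual resolution,
# i.e. the HF_TOKEN environment variable or a prior `huggingface-cli login`.
os.environ["HF_TOKEN"] = "hf_..."  # your Hugging Face access token
stream.fastphone()
```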