mirror of https://github.com/HumanAIGC-Engineering/gradio-webrtc.git (synced 2026-02-05 18:09:23 +08:00)
Rebrand to FastRTC (#60)

* Add code
* add code
* add code
* Rename messages
* rename
* add code
* Add demo
* docs + demos + bug fixes
* add code
* styles
* user guide
* Styles
* Add code
* misc docs updates
* print nit
* whisper + pr
* url for images
* whisper update
* Fix bugs
* remove demo files
* version number
* Fix pypi readme
* Fix
* demos
* Add llama code editor
* Update llama code editor and object detection cookbook
* Add more cookbook demos
* add code
* Fix links for PR deploys
* add code
* Fix the install
* add tts
* TTS docs
* Typo
* Pending bubbles for reply on pause
* Stream redesign (#63)
    * better error handling
    * Websocket error handling
    * add code

    Co-authored-by: Freddy Boulton <freddyboulton@hf-freddy.local>
* remove docs from dist
* Some docs typos
* more typos
* upload changes + docs
* docs
* better phone
* update docs
* add code
* Make demos better
* fix docs + websocket start_up
* remove mention of FastAPI app
* fastphone tweaks
* add code
* ReplyOnStopWord fixes
* Fix cookbook
* Fix pypi readme
* add code
* bump versions
* sambanova cookbook
* Fix tags
* Llm voice chat
* kyutai tag
* Add error message to all index.html
* STT module uses Moonshine
* Not required from typing extensions
* fix llm voice chat
* Add vpn warning
* demo fixes
* demos
* Add more ui args and gemini audio-video
* update cookbook
* version 9

Co-authored-by: Freddy Boulton <freddyboulton@hf-freddy.local>
This commit is contained in:

docs/index.md

<div style='text-align: center; margin-bottom: 1rem; display: flex; justify-content: center; align-items: center;'>
    <img src="/fastrtc_logo.png"
         onerror="this.onerror=null; this.src='https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/fastrtc_logo.png';"
         alt="FastRTC Logo"
         style="height: 40px; margin-right: 10px;">
    <h1 style='color: white; margin: 0;'>FastRTC</h1>
</div>

<div style="display: flex; flex-direction: row; justify-content: center">
|
||||
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/gradio_webrtc">
|
||||
<a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
|
||||
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/fastrtc">
|
||||
<a href="https://github.com/freddyaboulton/fastrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
|
||||
</div>
|
||||
|
||||
<h3 style='text-align: center'>
The Real-Time Communication Library for Python.
</h3>

Turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.

## Installation

```bash
pip install fastrtc
```

To use built-in pause detection (see [ReplyOnPause](userguide/audio/#reply-on-pause)) and text-to-speech (see [Text To Speech](userguide/audio/#text-to-speech)), install the `vad` and `tts` extras:

```bash
pip install "fastrtc[vad, tts]"
```

For stop word detection (see [ReplyOnStopWords](userguide/audio/#reply-on-stopwords)), install the `stopword` extra:

```bash
pip install "fastrtc[stopword]"
```

## Quickstart

Import the [Stream](userguide/streams) class and pass in a [handler](userguide/streams/#handlers).
The `Stream` has three main methods:

- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. A Hugging Face token is required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your existing production system.

=== "Echo Audio"
|
||||
|
||||
```python
|
||||
from fastrtc import Stream, ReplyOnPause
|
||||
import numpy as np
|
||||
|
||||
def echo(audio: tuple[int, np.ndarray]):
|
||||
# The function will be passed the audio until the user pauses
|
||||
# Implement any iterator that yields audio
|
||||
# See "LLM Voice Chat" for a more complete example
|
||||
yield audio
|
||||
|
||||
stream = Stream(
|
||||
handler=ReplyOnPause(detection),
|
||||
modality="audio",
|
||||
mode="send-receive",
|
||||
)
|
||||
```
|
||||
|
||||
=== "LLM Voice Chat"
|
||||
|
||||
```py
|
||||
from fastrtc import (
|
||||
ReplyOnPause, AdditionalOutputs, Stream,
|
||||
audio_to_bytes, aggregate_bytes_to_16bit
|
||||
)
|
||||
import gradio as gr
|
||||
from groq import Groq
|
||||
import anthropic
|
||||
from elevenlabs import ElevenLabs
|
||||
|
||||
groq_client = Groq()
|
||||
claude_client = anthropic.Anthropic()
|
||||
tts_client = ElevenLabs()
|
||||
|
||||
|
||||
# See "Talk to Claude" in Cookbook for an example of how to keep
|
||||
# track of the chat history.
|
||||
def response(
|
||||
audio: tuple[int, np.ndarray],
|
||||
):
|
||||
prompt = groq_client.audio.transcriptions.create(
|
||||
file=("audio-file.mp3", audio_to_bytes(audio)),
|
||||
model="whisper-large-v3-turbo",
|
||||
response_format="verbose_json",
|
||||
).text
|
||||
response = claude_client.messages.create(
|
||||
model="claude-3-5-haiku-20241022",
|
||||
max_tokens=512,
|
||||
messages=[{"role": "user", "content": prompt}],
|
||||
)
|
||||
response_text = " ".join(
|
||||
block.text
|
||||
for block in response.content
|
||||
if getattr(block, "type", None) == "text"
|
||||
)
|
||||
iterator = tts_client.text_to_speech.convert_as_stream(
|
||||
text=response_text,
|
||||
voice_id="JBFqnCBsd6RMkjVDRZzb",
|
||||
model_id="eleven_multilingual_v2",
|
||||
output_format="pcm_24000"
|
||||
|
||||
)
|
||||
for chunk in aggregate_bytes_to_16bit(iterator):
|
||||
audio_array = np.frombuffer(chunk, dtype=np.int16).reshape(1, -1)
|
||||
yield (24000, audio_array)
|
||||
|
||||
stream = Stream(
|
||||
modality="audio",
|
||||
mode="send-receive",
|
||||
handler=ReplyOnPause(response),
|
||||
)
|
||||
```
|
||||
|
||||
=== "Webcam Stream"
|
||||
|
||||
```python
|
||||
from fastrtc import Stream
|
||||
import numpy as np
|
||||
|
||||
|
||||
def flip_vertically(image):
|
||||
return np.flip(image, axis=0)
|
||||
|
||||
|
||||
stream = Stream(
|
||||
handler=flip_vertically,
|
||||
modality="video",
|
||||
mode="send-receive",
|
||||
)
|
||||
```
|
||||
|
||||
=== "Object Detection"
|
||||
|
||||
```python
|
||||
from fastrtc import Stream
|
||||
import gradio as gr
|
||||
import cv2
|
||||
from huggingface_hub import hf_hub_download
|
||||
from .inference import YOLOv10
|
||||
|
||||
model_file = hf_hub_download(
|
||||
repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
|
||||
)
|
||||
|
||||
# git clone https://huggingface.co/spaces/fastrtc/object-detection
|
||||
# for YOLOv10 implementation
|
||||
model = YOLOv10(model_file)
|
||||
|
||||
def detection(image, conf_threshold=0.3):
|
||||
image = cv2.resize(image, (model.input_width, model.input_height))
|
||||
new_image = model.detect_objects(image, conf_threshold)
|
||||
return cv2.resize(new_image, (500, 500))
|
||||
|
||||
stream = Stream(
|
||||
handler=detection,
|
||||
modality="video",
|
||||
mode="send-receive",
|
||||
additional_inputs=[
|
||||
gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
|
||||
]
|
||||
)
|
||||
```
|
||||
|
||||
Run:

=== "UI"

    ```py
    stream.ui.launch()
    ```

=== "Telephone"
|
||||
|
||||
```py
|
||||
stream.fastphone()
|
||||
```
|
||||
|
||||
=== "FastAPI"
|
||||
|
||||
```py
|
||||
app = FastAPI()
|
||||
stream.mount(app)
|
||||
|
||||
# Optional: Add routes
|
||||
@app.get("/")
|
||||
async def _():
|
||||
return HTMLResponse(content=open("index.html").read())
|
||||
|
||||
# uvicorn app:app --host 0.0.0.0 --port 8000
|
||||
```
|
||||
|
||||
Learn more about the [Stream](userguide/streams) in the user guide.

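Since `.fastphone()` needs a Hugging Face token, here is a minimal sketch of supplying one explicitly; the `token` keyword and the `HF_TOKEN` environment variable are assumptions for illustration, so check the user guide for the exact API:

```python
import os

import numpy as np
from fastrtc import ReplyOnPause, Stream


def echo(audio: tuple[int, np.ndarray]):
    # Same echo handler as in the Quickstart above
    yield audio


stream = Stream(handler=ReplyOnPause(echo), modality="audio", mode="send-receive")

# Assumption: fastphone() accepts an explicit token argument in addition
# to picking up ambient Hugging Face credentials.
stream.fastphone(token=os.environ["HF_TOKEN"])
```
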
## Key Features

:speaking_head:{ .lg } Automatic Voice Detection and Turn Taking built in - only worry about the logic for responding to the user.

:material-laptop:{ .lg } Automatic UI - Use the `.ui.launch()` method to launch the WebRTC-enabled built-in Gradio UI.

:material-lightning-bolt:{ .lg } Automatic WebRTC Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend!

:simple-webstorm:{ .lg } WebSocket Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebSocket endpoint for your own frontend!

:telephone:{ .lg } Automatic Telephone Support - Use the `fastphone()` method of the stream to launch the application and get a free temporary phone number!

:robot:{ .lg } Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app, so you can extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example of how to serve a custom JS frontend.

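The exact endpoint paths that `.mount(app)` registers for the WebRTC and WebSocket features above are not listed on this page, so a small sketch that prints whatever routes FastRTC adds, rather than guessing their names (the echo handler is borrowed from the Quickstart):

```python
from fastapi import FastAPI
from fastrtc import ReplyOnPause, Stream


def echo(audio):
    # Minimal echo handler, as in the Quickstart
    yield audio


app = FastAPI()
stream = Stream(handler=ReplyOnPause(echo), modality="audio", mode="send-receive")
stream.mount(app)

# List every route mounting added (WebRTC signalling, WebSocket, etc.)
# so your frontend can target the real paths.
for route in app.routes:
    print(route.path)
```
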
## Examples

See the [cookbook](/cookbook).