fix readme (#38)
This commit is contained in:
493 README.md
@@ -2,7 +2,8 @@
<div style="display: flex; flex-direction: row; justify-content: center">
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/gradio_webrtc">
<a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
<a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" style="display: block; padding-right: 5px; height: 20px;" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
<a href="https://freddyaboulton.github.io/gradio-webrtc/" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/Docs-ffcf40"></a>
</div>

<h3 style='text-align: center'>
@@ -22,180 +23,198 @@ pip install gradio_webrtc[vad]
```

For stop word detection (see [ReplyOnStopWords](https://freddyaboulton.github.io/gradio-webrtc//user-guide/#reply-on-stopwords)), install the `stopword` extra:

```bash
pip install gradio_webrtc[stopword]
```

## Examples:
1. [Object Detection from Webcam with YOLOv10](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n) 📷
2. [Streaming Object Detection from Video with RT-DETR](https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc) 🎥
3. [Text-to-Speech](https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc) 🗣️
4. [Conversational AI](https://huggingface.co/spaces/freddyaboulton/omni-mini-webrtc) 🤖🗣️

## Docs

https://freddyaboulton.github.io/gradio-webrtc/

## Examples

<table>
<tr>
<td width="50%">
<h3>🗣️ Audio Input/Output with mini-omni2</h3>
<p>Build a GPT-4o like experience with mini-omni2, an audio-native LLM.</p>
<video width="100%" src="https://github.com/user-attachments/assets/58c06523-fc38-4f5f-a4ba-a02a28e7fa9e" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Claude</h3>
<p>Use the Anthropic and Play.Ht APIs to have an audio conversation with Claude.</p>
<video width="100%" src="https://github.com/user-attachments/assets/650bc492-798e-4995-8cef-159e1cfc2185" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-claude">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-claude/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🗣️ Kyutai Moshi</h3>
<p>Kyutai's moshi is a novel speech-to-speech model for modeling human conversations.</p>
<video width="100%" src="https://github.com/user-attachments/assets/becc7a13-9e89-4a19-9df2-5fb1467a0137" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Hello Llama: Stop Word Detection</h3>
<p>A code editor built with Llama 3.3 70b that is triggered by the phrase "Hello Llama". Build a Siri-like coding assistant in 100 lines of code!</p>
<video width="100%" src="https://github.com/user-attachments/assets/3e10cb15-ff1b-4b17-b141-ff0ad852e613" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🤖 Llama Code Editor</h3>
<p>Create and edit HTML pages with just your voice! Powered by SambaNova systems.</p>
<video width="100%" src="https://github.com/user-attachments/assets/a09647f1-33e1-4154-a5a3-ffefda8a736a" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Ultravox</h3>
<p>Talk to Fixie.AI's audio-native Ultravox LLM with the transformers library.</p>
<video width="100%" src="https://github.com/user-attachments/assets/e6e62482-518c-4021-9047-9da14cd82be1" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🗣️ Talk to Llama 3.2 3b</h3>
<p>Use the Lepton API to make Llama 3.2 talk back to you!</p>
<video width="100%" src="https://github.com/user-attachments/assets/3ee37a6b-0892-45f5-b801-73188fdfad9a" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🤖 Talk to Qwen2-Audio</h3>
<p>Qwen2-Audio is a SOTA audio-to-text LLM developed by Alibaba.</p>
<video width="100%" src="https://github.com/user-attachments/assets/c821ad86-44cc-4d0c-8dc4-8c02ad1e5dc8" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>📷 Yolov10 Object Detection</h3>
<p>Run the Yolov10 model on a user webcam stream in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/c90d8c9d-d2d5-462e-9e9b-af969f2ea73c" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>📷 Video Object Detection with RT-DETR</h3>
<p>Upload a video and stream out frames with detected objects (powered by the RT-DETR model).</p>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🔊 Text-to-Speech with Parler</h3>
<p>Stream out audio generated by Parler TTS!</p>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
</td>
</tr>
</table>

## Usage

The WebRTC component supports the following four use cases:

1. [Streaming video from the user webcam to the server and back](#h-streaming-video-from-the-user-webcam-to-the-server-and-back)
2. [Streaming Video from the server to the client](#h-streaming-video-from-the-server-to-the-client)
3. [Streaming Audio from the server to the client](#h-streaming-audio-from-the-server-to-the-client)
4. [Streaming Audio from the client to the server and back (conversational AI)](#h-conversational-ai)

This is a shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).

To get started with WebRTC streams, all that's needed is to import the `WebRTC` component from this package and implement its `stream` event.
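As a rough sketch of that pattern (the `process_frame` function is a placeholder, not part of the library):

```python
import gradio as gr
from gradio_webrtc import WebRTC


def process_frame(frame):
    # Placeholder: inspect or modify the incoming numpy frame and return it.
    return frame


with gr.Blocks() as demo:
    stream = WebRTC(label="Stream", mode="send-receive", modality="video")
    # The WebRTC component is both the input and the output of its own stream event.
    stream.stream(fn=process_frame, inputs=[stream], outputs=[stream], time_limit=10)

demo.launch()
```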

### Reply on Pause

Typically, you want to run an AI model that generates audio when the user has stopped speaking. This can be done by wrapping a python generator with the `ReplyOnPause` class and passing it to the `stream` event of the `WebRTC` component. The generator looks like this (the notes for the `(1)` and `(2)` markers appear in the Conversational AI section below):

```py
import gradio as gr
from gradio_webrtc import WebRTC, ReplyOnPause


def response(audio: tuple[int, np.ndarray]):  # (1)
    """This function must yield audio frames"""
    ...
    for numpy_array in generated_audio:
        yield (sampling_rate, numpy_array, "mono")  # (2)
```

## Streaming Video from the User Webcam to the Server and Back

```python
import gradio as gr
from gradio_webrtc import WebRTC


def detection(image, conf_threshold=0.3):
    ... your detection code here ...


with gr.Blocks() as demo:
    image = WebRTC(label="Stream", mode="send-receive", modality="video")
    conf_threshold = gr.Slider(
        label="Confidence Threshold",
        minimum=0.0,
        maximum=1.0,
        step=0.05,
        value=0.30,
    )
    gr.HTML(
        """
        <h1 style='text-align: center'>
        Chat (Powered by WebRTC ⚡️)
        </h1>
        """
    )
    image.stream(
        fn=detection,
        inputs=[image, conf_threshold],
        outputs=[image], time_limit=10
    )

if __name__ == "__main__":
    demo.launch()
```

* Set the `mode` parameter to `send-receive` and `modality` to "video".
* The `stream` event's `fn` parameter is a function that receives the next frame from the webcam as a **numpy array** and returns the processed frame also as a **numpy array**.
* Numpy arrays are in (height, width, 3) format where the color channels are in RGB format.
* The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.
* The `time_limit` parameter is the maximum time in seconds the video stream will run. If the time limit is reached, the video stream will stop.

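To make that contract concrete, here is a minimal, hypothetical `detection` body; the horizontal flip merely stands in for a real model:

```python
import numpy as np


def detection(image: np.ndarray, conf_threshold: float = 0.3) -> np.ndarray:
    # `image` arrives as a (height, width, 3) RGB numpy array; return the same layout.
    # A real app would run an object detector here; the flip is only a placeholder.
    return np.fliplr(image)
```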
## Streaming Video from the server to the client

```python
import gradio as gr
from gradio_webrtc import WebRTC
import cv2


def generation():
    url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
    cap = cv2.VideoCapture(url)
    iterating = True
    while iterating:
        iterating, frame = cap.read()
        yield frame


with gr.Blocks() as demo:
    output_video = WebRTC(label="Video Stream", mode="receive", modality="video")
    button = gr.Button("Start", variant="primary")
    output_video.stream(
        fn=generation, inputs=None, outputs=[output_video],
        trigger=button.click
    )

if __name__ == "__main__":
    demo.launch()
```

* Set the "mode" parameter to "receive" and "modality" to "video".
* The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
* The only output allowed is the WebRTC component.
* The `trigger` parameter is the gradio event that will trigger the webrtc connection. In this case, the button click event.

## Streaming Audio from the Server to the Client

```python
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC
from pydub import AudioSegment


def generation(num_steps):
    for _ in range(num_steps):
        segment = AudioSegment.from_file("/Users/freddy/sources/gradio/demo/audio_debugger/cantina.wav")
        yield (segment.frame_rate, np.array(segment.get_array_of_samples()).reshape(1, -1))


with gr.Blocks() as demo:
    audio = WebRTC(label="Stream", mode="receive", modality="audio")
    num_steps = gr.Slider(
        label="Number of Steps",
        minimum=1,
        maximum=10,
        step=1,
        value=5,
    )
    button = gr.Button("Generate")

    audio.stream(
        fn=generation, inputs=[num_steps], outputs=[audio],
        trigger=button.click
    )
```

* Set the "mode" parameter to "receive" and "modality" to "audio".
* The `stream` event's `fn` parameter is a generator function that yields the next audio segment as a tuple of (frame_rate, audio_samples).
* The numpy array should be of shape (1, num_samples).
* The `outputs` parameter should be a list with the WebRTC component as the only element.

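If the source file might be stereo, here is a small sketch of coercing it into that (1, num_samples) layout; `set_channels` is standard pydub, and "audio_file.wav" is just a placeholder path:

```python
import numpy as np
from pydub import AudioSegment

# Force mono so get_array_of_samples() maps cleanly onto a (1, num_samples) array.
segment = AudioSegment.from_file("audio_file.wav").set_channels(1)
samples = np.array(segment.get_array_of_samples()).reshape(1, -1)
```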
## Conversational AI

An easy way to get started with Conversational AI is to use the `ReplyOnPause` stream handler. This will automatically run your function when the speaker has stopped speaking. In order to use `ReplyOnPause`, the `[vad]` extra dependencies must be installed. Wrap the `response` generator from the Reply on Pause snippet above and pass it to the `stream` event:

```python
import gradio as gr
from gradio_webrtc import WebRTC, ReplyOnPause


with gr.Blocks() as demo:
    with gr.Column():
        with gr.Group():
            audio = WebRTC(
                label="Stream",
                rtc_configuration=None,
                mode="send-receive",  # (3)
                modality="audio",
            )
        audio.stream(fn=ReplyOnPause(response),
                     inputs=[audio], outputs=[audio],  # (4)
                     time_limit=60)  # (5)


if __name__ == "__main__":
    demo.launch()
```

1. The python generator will receive the **entire** audio up until the user stopped. It will be a tuple of the form (sampling_rate, numpy array of audio). The array will have a shape of (1, num_samples). You can also pass in additional input components.
2. The generator must yield audio chunks as a tuple of (sampling_rate, numpy audio array). Each numpy audio array must have a shape of (1, num_samples).
3. The `mode` and `modality` arguments must be set to `"send-receive"` and `"audio"`.
4. The `WebRTC` component must be the first input and output component.
5. Set a `time_limit` to control how long a conversation will last. If the `concurrency_count` is 1 (default), only one conversation will be handled at a time.
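For illustration, here is a hypothetical `response` generator that streams a synthesized tone in chunks; the 440 Hz tone, the 24 kHz rate, and the int16 dtype are all assumptions standing in for real model output:

```python
import numpy as np


def response(audio: tuple[int, np.ndarray]):
    # `audio` is the user's full turn: (sampling_rate, array of shape (1, num_samples)).
    sampling_rate = 24_000
    t = np.linspace(0, 1, sampling_rate, endpoint=False)
    tone = (0.2 * 32767 * np.sin(2 * np.pi * 440 * t)).astype(np.int16)
    for chunk in np.array_split(tone, 10):
        # Yield (sampling_rate, (1, n) array, layout) as chunks become available.
        yield (sampling_rate, chunk.reshape(1, -1), "mono")
```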

For lower-level control, instead of passing a function to the `stream` event's `fn` parameter, you can pass a `StreamHandler` implementation. The `StreamHandler` below simply echoes the audio back to the client:

```python
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC, StreamHandler
from queue import Queue


class EchoHandler(StreamHandler):
    def __init__(self) -> None:
        super().__init__()
        self.queue = Queue()

    def receive(self, frame: tuple[int, np.ndarray] | np.ndarray) -> None:
        self.queue.put(frame)

    def emit(self) -> None:
        return self.queue.get()

    def copy(self) -> StreamHandler:
        return EchoHandler()


with gr.Blocks() as demo:
    audio = WebRTC(label="Stream", mode="send-receive", modality="audio")
    audio.stream(fn=EchoHandler(), inputs=[audio], outputs=[audio], time_limit=15)


if __name__ == "__main__":
    demo.launch()
```

* The `StreamHandler` class has three methods: `receive`, `emit`, and `copy`. The `receive` method is called when a new frame is received from the client, the `emit` method returns the next frame to send to the client, and the `copy` method is called at the beginning of the stream to ensure each user has a unique stream handler.
* An audio frame is represented as a tuple of (frame_rate, audio_samples) where `audio_samples` is a numpy array of shape (num_channels, num_samples).
* You can also specify the audio layout ("mono" or "stereo") in the emit method by returning it as the third element of the tuple. If not specified, the default is "mono".
* The `time_limit` parameter is the maximum time in seconds the conversation will run. If the time limit is reached, the audio stream will stop.
* The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return `None`.

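As a sketch of that non-blocking rule (the `NonBlockingEcho` name and the `get_nowait` approach are illustrative, not part of the library API):

```python
from queue import Empty, Queue

import numpy as np
from gradio_webrtc import StreamHandler


class NonBlockingEcho(StreamHandler):
    def __init__(self) -> None:
        super().__init__()
        self.queue = Queue()

    def receive(self, frame: tuple[int, np.ndarray]) -> None:
        self.queue.put(frame)

    def emit(self):
        try:
            # Never block: hand back a frame only if one is already waiting.
            return self.queue.get_nowait()
        except Empty:
            return None  # nothing ready yet

    def copy(self) -> StreamHandler:
        return NonBlockingEcho()
```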
### Reply On Stopwords

You can configure your AI model to run whenever a set of "stop words" are detected, like "Hey Siri" or "computer", with the `ReplyOnStopWords` class.

The API is similar to `ReplyOnPause` with the addition of a `stop_words` parameter.

```py
import gradio as gr
from gradio_webrtc import WebRTC, ReplyOnPause, ReplyOnStopWords

# @@ -217,17 +236,181 @@ — intervening lines omitted in this diff

with gr.Blocks() as demo:
    with gr.Column():
        with gr.Group():
            webrtc = WebRTC(
                label="Stream",
                rtc_configuration=rtc_configuration,
                mode="send",
                modality="audio",
            )
        webrtc.stream(ReplyOnStopWords(generate,
                                       input_sample_rate=16000,
                                       stop_words=["computer"]),  # (1)
                      inputs=[webrtc, history, code],
                      outputs=[webrtc], time_limit=90,
                      concurrency_limit=10)


demo.launch()
```

1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".

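Following that advice, a hypothetical handler configuration might cover the wake phrase plus its likely mis-transcriptions (the word list and the `generate` callback are placeholders):

```python
from gradio_webrtc import ReplyOnStopWords

handler = ReplyOnStopWords(
    generate,  # your reply generator, defined elsewhere
    input_sample_rate=16000,
    # Illustrative only: include the phrase and common misspellings/variants.
    stop_words=["hello llama", "hello lamma", "hello lama"],
)
```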
### Audio Server-To-Client

To stream only from the server to the client, implement a python generator and pass it to the component's `stream` event. The stream event must also specify a `trigger` corresponding to a UI interaction that starts the stream. In this case, it's a button click.

```py
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC
from pydub import AudioSegment


def generation(num_steps):
    for _ in range(num_steps):
        segment = AudioSegment.from_file("audio_file.wav")
        array = np.array(segment.get_array_of_samples()).reshape(1, -1)
        yield (segment.frame_rate, array)


with gr.Blocks() as demo:
    audio = WebRTC(label="Stream", mode="receive",  # (1)
                   modality="audio")
    num_steps = gr.Slider(label="Number of Steps", minimum=1,
                          maximum=10, step=1, value=5)
    button = gr.Button("Generate")

    audio.stream(
        fn=generation, inputs=[num_steps], outputs=[audio],
        trigger=button.click  # (2)
    )
```

1. Set `mode="receive"` to only receive audio from the server.
2. The `stream` event must take a `trigger` that corresponds to the gradio event that starts the stream. In this case, it's the button click.

### Video Input/Output Streaming

Set up a video Input/Output stream to continuously receive webcam frames from the user and run an arbitrary python function to return a modified frame.

```py
import gradio as gr
from gradio_webrtc import WebRTC


def detection(image, conf_threshold=0.3):  # (1)
    ... your detection code here ...
    return modified_frame  # (2)


with gr.Blocks() as demo:
    image = WebRTC(label="Stream", mode="send-receive", modality="video")  # (3)
    conf_threshold = gr.Slider(
        label="Confidence Threshold",
        minimum=0.0,
        maximum=1.0,
        step=0.05,
        value=0.30,
    )
    image.stream(
        fn=detection,
        inputs=[image, conf_threshold],  # (4)
        outputs=[image], time_limit=10
    )

if __name__ == "__main__":
    demo.launch()
```

1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
2. The function must return a numpy array. It can take arbitrary values from other components.
3. Set `modality="video"` and `mode="send-receive"`.
4. The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.

### Server-to-Client Only

Set up a server-to-client stream to stream video from an arbitrary user interaction.

```py
import gradio as gr
from gradio_webrtc import WebRTC
import cv2


def generation():
    url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
    cap = cv2.VideoCapture(url)
    iterating = True
    while iterating:
        iterating, frame = cap.read()
        yield frame  # (1)


with gr.Blocks() as demo:
    output_video = WebRTC(label="Video Stream", mode="receive",  # (2)
                          modality="video")
    button = gr.Button("Start", variant="primary")
    output_video.stream(
        fn=generation, inputs=None, outputs=[output_video],
        trigger=button.click  # (3)
    )

demo.launch()
```

1. The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
2. Set `mode="receive"` to only receive video from the server.
3. The `trigger` parameter is the gradio event that will trigger the stream. In this case, the button click event.

### Additional Outputs

In order to modify other components from within the WebRTC stream, you must yield an instance of `AdditionalOutputs` and add an `on_additional_outputs` event to the `WebRTC` component.

This is common for displaying a multimodal text/audio conversation in a Chatbot UI.

``` py title="Additional Outputs"
import gradio as gr
import numpy as np
from gradio_webrtc import AdditionalOutputs, ReplyOnPause, WebRTC


def transcribe(audio: tuple[int, np.ndarray],
               transformers_convo: list[dict],
               gradio_convo: list[dict]):
    response = model.generate(**inputs, max_length=256)
    transformers_convo.append({"role": "assistant", "content": response})
    gradio_convo.append({"role": "assistant", "content": response})
    yield AdditionalOutputs(transformers_convo, gradio_convo)  # (1)


with gr.Blocks() as demo:
    gr.HTML(
        """
        <h1 style='text-align: center'>
        Talk to Qwen2Audio (Powered by WebRTC ⚡️)
        </h1>
        """
    )
    transformers_convo = gr.State(value=[])
    with gr.Row():
        with gr.Column():
            audio = WebRTC(
                label="Stream",
                mode="send",  # (2)
                modality="audio",
            )
        with gr.Column():
            transcript = gr.Chatbot(label="transcript", type="messages")

    audio.stream(ReplyOnPause(transcribe),
                 inputs=[audio, transformers_convo, transcript],
                 outputs=[audio], time_limit=90)
    audio.on_additional_outputs(lambda s, a: (s, a),  # (3)
                                outputs=[transformers_convo, transcript],
                                queue=False, show_progress="hidden")

demo.launch()
```

1. Pass your data to `AdditionalOutputs` and yield it.
2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.
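The lambda above can equally be a named callback; this sketch merely restates the contract (it receives the values passed to `AdditionalOutputs`, in order, and returns one value per component in `outputs`):

```python
def sync_outputs(transformers_convo, gradio_convo):
    # One return value per component listed in `outputs`.
    return transformers_convo, gradio_convo


audio.on_additional_outputs(sync_outputs,
                            outputs=[transformers_convo, transcript],
                            queue=False, show_progress="hidden")
```
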
## Deployment

@@ -295,11 +295,10 @@ This is common for displaying a multimodal text/audio conversation in a Chatbot

```py
def transcribe(audio: tuple[int, np.ndarray],
               transformers_convo: list[dict],
               gradio_convo: list[dict]):
    ... generate text response ...
    response = model.generate(**inputs, max_length=256)
    transformers_convo.append({"role": "assistant", "content": response})
    gradio_convo.append({"role": "assistant", "content": response})
    yield AdditionalOutputs(transformers_convo, gradio_convo)  # (1)


with gr.Blocks() as demo:
```

@@ -8,7 +8,7 @@ build-backend = "hatchling.build"

```toml
[project]
name = "gradio_webrtc"
version = "0.0.21"
version = "0.0.22rc1"
description = "Stream images in realtime with webrtc"
readme = "README.md"
license = "apache-2.0"
```
