mirror of https://github.com/HumanAIGC-Engineering/gradio-webrtc.git
160 docs/advanced-configuration.md Normal file
@@ -0,0 +1,160 @@
## Track Constraints

You can specify the `track_constraints` parameter to control how the data is streamed to the server. The full documentation on track constraints is [here](https://developer.mozilla.org/en-US/docs/Web/API/MediaTrackConstraints#constraints).

For example, you can control the size of the frames captured from the webcam like so:

```python
track_constraints = {
    "width": {"exact": 500},
    "height": {"exact": 500},
    "frameRate": {"ideal": 30},
}
webrtc = WebRTC(track_constraints=track_constraints,
                modality="video",
                mode="send-receive")
```

!!! warning

    WebRTC may not enforce your constraints. For example, it may rescale your video
    (while keeping the same aspect ratio) in order to maintain the desired (or reach a better) frame rate. If you
    really want to enforce the height and width constraints, use the `rtp_params` parameter and set `"degradationPreference": "maintain-resolution"`.

    ```python
    image = WebRTC(
        label="Stream",
        mode="send",
        track_constraints=track_constraints,
        rtp_params={"degradationPreference": "maintain-resolution"}
    )
    ```

## The RTC Configuration

You can configure how the connection is created on the client by passing an `rtc_configuration` parameter to the `WebRTC` component constructor.
See the list of available arguments [here](https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/RTCPeerConnection#configuration).

When deploying on a remote server, an `rtc_configuration` parameter must be passed in. See [Deployment](/deployment).
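
For illustration, here is a minimal configuration that only lists a public STUN server. The STUN URL is a placeholder choice; for remote deployments you will typically need the TURN credentials described in [Deployment](/deployment). The dictionary follows the standard `RTCPeerConnection` configuration shape:

```python
# Minimal illustrative rtc_configuration. The STUN server below is a public
# Google endpoint used purely as an example; production deployments usually
# need a TURN server (see the Deployment guide).
rtc_configuration = {
    "iceServers": [{"urls": "stun:stun.l.google.com:19302"}],
}

webrtc = WebRTC(rtc_configuration=rtc_configuration,
                modality="video",
                mode="send-receive")
```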
## Reply on Pause Voice Activity Detection

The `ReplyOnPause` class runs a Voice Activity Detection (VAD) algorithm to determine when a user has stopped speaking.

1. First, the algorithm determines when the user has started speaking.
2. Then it groups the audio into chunks.
3. On each chunk, we determine the length of human speech in the chunk.
4. If the length of human speech is below a threshold, a pause is detected.

The following parameters control this algorithm:

```python
import gradio as gr
from gradio_webrtc import AlgoOptions, ReplyOnPause, WebRTC

options = AlgoOptions(audio_chunk_duration=0.6, # (1)
                      started_talking_threshold=0.2, # (2)
                      speech_threshold=0.1, # (3)
                      )

with gr.Blocks() as demo:
    audio = WebRTC(...)
    audio.stream(ReplyOnPause(..., algo_options=options))

demo.launch()
```

1. This is the length (in seconds) of audio chunks.

2. If the chunk has more than 0.2 seconds of speech, the user started talking.

3. If, after the user started speaking, there is a chunk with less than 0.1 seconds of speech, the user stopped speaking.

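To make the interplay between these three parameters concrete, here is a simplified, hypothetical sketch of the pause-detection loop described above. It is not the library's actual implementation, and `speech_duration_of` is a stand-in for whatever VAD model measures the amount of speech in a chunk:

```python
from typing import Callable, Iterable

import numpy as np


def detect_pause(chunks: Iterable[np.ndarray],
                 speech_duration_of: Callable[[np.ndarray], float],
                 started_talking_threshold: float = 0.2,
                 speech_threshold: float = 0.1) -> bool:
    """Illustrative only: walk over fixed-length audio chunks and report a
    pause once speech has started and then drops below the threshold."""
    started_talking = False
    for chunk in chunks:
        speech_seconds = speech_duration_of(chunk)
        if not started_talking:
            # Wait until a chunk contains enough speech to count as "started talking".
            started_talking = speech_seconds > started_talking_threshold
        elif speech_seconds < speech_threshold:
            # Speech dropped below the threshold -> pause detected.
            return True
    return False
```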
## Stream Handler Input Audio

You can configure the sampling rate of the audio passed to the `ReplyOnPause` or `StreamHandler` instance with the `input_sampling_rate` parameter. The current default is `48000`.

```python
import gradio as gr
from gradio_webrtc import ReplyOnPause, WebRTC

with gr.Blocks() as demo:
    audio = WebRTC(...)
    audio.stream(ReplyOnPause(..., input_sampling_rate=24000))

demo.launch()
```

## Stream Handler Output Audio

You can configure the output audio chunk size of `ReplyOnPause` (and any `StreamHandler`)
with the `output_sample_rate` and `output_frame_size` parameters.

The following code (which uses the default values of these parameters) states that each output chunk will be a frame of 960 samples at a sample rate of 24,000 Hz, so each chunk corresponds to 0.04 seconds of audio.

```python
import gradio as gr
from gradio_webrtc import ReplyOnPause, WebRTC

with gr.Blocks() as demo:
    audio = WebRTC(...)
    audio.stream(ReplyOnPause(..., output_sample_rate=24000, output_frame_size=960))

demo.launch()
```

!!! tip

    In general it is best to leave these settings untouched. In some cases,
    lowering the `output_frame_size` can yield smoother audio playback.

## Audio Icon

You can display an icon of your choice instead of the default wave animation for audio streaming.
Pass any local path or URL to an image (SVG, PNG, JPEG) to the component's `icon` parameter. This will display the icon as a circular button. When audio is sent or received (depending on the `mode` parameter), a pulse animation will emanate from the button.

You can control the button color and pulse color with the `icon_button_color` and `pulse_color` parameters. They can take any valid CSS color.

=== "Code"
    ``` python
    audio = WebRTC(
        label="Stream",
        rtc_configuration=rtc_configuration,
        mode="receive",
        modality="audio",
        icon="phone-solid.svg",
    )
    ```
    <img src="https://github.com/user-attachments/assets/fd2e70a3-1698-4805-a8cb-9b7b3bcf2198">

=== "Code Custom colors"
    ``` python
    audio = WebRTC(
        label="Stream",
        rtc_configuration=rtc_configuration,
        mode="receive",
        modality="audio",
        icon="phone-solid.svg",
        icon_button_color="black",
        pulse_color="black",
    )
    ```
    <img src="https://github.com/user-attachments/assets/39e9bb0b-53fb-448e-be44-d37f6785b4b6">

## Changing the Button Text

You can supply a `button_labels` dictionary to change the text displayed in the `Start`, `Stop` and `Waiting` buttons that are displayed in the UI.
The keys must be `"start"`, `"stop"`, and `"waiting"`.

``` python
webrtc = WebRTC(
    label="Video Chat",
    modality="audio-video",
    mode="send-receive",
    button_labels={"start": "Start Talking to Gemini"}
)
```

<img src="https://github.com/user-attachments/assets/04be0b95-189c-4b4b-b8cc-1eb598529dd3" />
1 docs/bolt.svg Normal file
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" height="24px" viewBox="0 -960 960 960" width="24px" fill="#e8eaed"><path d="m422-232 207-248H469l29-227-185 267h139l-30 208ZM320-80l40-280H160l360-520h80l-40 320h240L400-80h-80Zm151-390Z"/></svg>
After Width: | Height: | Size: 235 B
172 docs/cookbook.md Normal file
@@ -0,0 +1,172 @@
<div class="grid cards" markdown>

- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } __Gemini Audio Video Chat__

    ---

    Stream BOTH your webcam video and audio feeds to Google Gemini. You can also upload images to augment your conversation!

    <video width=98% src="https://github.com/user-attachments/assets/9636dc97-4fee-46bb-abb8-b92e69c08c71" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat/blob/main/app.py)

- :speaking_head:{ .lg .middle } __Google Gemini Real Time Voice API__

    ---

    Talk to Gemini in real time using Google's voice API.

    <video width=98% src="https://github.com/user-attachments/assets/da8c8a2a-5d99-4ac7-8927-0f7812e4146f" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/gemini-voice)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/gemini-voice/blob/main/app.py)

- :speaking_head:{ .lg .middle } __OpenAI Real Time Voice API__

    ---

    Talk to ChatGPT in real time using OpenAI's voice API.

    <video width=98% src="https://github.com/user-attachments/assets/41a63376-43ec-496a-9b31-4f067d3903d6" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/openai-realtime-voice)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/openai-realtime-voice/blob/main/app.py)

- :speaking_head:{ .lg .middle } __Hello Llama: Stop Word Detection__

    ---

    A code editor built with Llama 3.3 70b that is triggered by the phrase "Hello Llama".
    Build a Siri-like coding assistant in 100 lines of code!

    <video width=98% src="https://github.com/user-attachments/assets/3e10cb15-ff1b-4b17-b141-ff0ad852e613" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor/blob/main/app.py)

- :robot:{ .lg .middle } __Llama Code Editor__

    ---

    Create and edit HTML pages with just your voice! Powered by SambaNova systems.

    <video width=98% src="https://github.com/user-attachments/assets/a09647f1-33e1-4154-a5a3-ffefda8a736a" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/llama-code-editor)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/llama-code-editor/blob/main/app.py)

- :speaking_head:{ .lg .middle } __Audio Input/Output with mini-omni2__

    ---

    Build a GPT-4o like experience with mini-omni2, an audio-native LLM.

    <video width=98% src="https://github.com/user-attachments/assets/58c06523-fc38-4f5f-a4ba-a02a28e7fa9e" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc/blob/main/app.py)

- :speaking_head:{ .lg .middle } __Talk to Claude__

    ---

    Use the Anthropic and Play.Ht APIs to have an audio conversation with Claude.

    <video width=98% src="https://github.com/user-attachments/assets/650bc492-798e-4995-8cef-159e1cfc2185" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/talk-to-claude)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/talk-to-claude/blob/main/app.py)

- :speaking_head:{ .lg .middle } __Kyutai Moshi__

    ---

    Kyutai's Moshi is a novel speech-to-speech model for modeling human conversations.

    <video width=98% src="https://github.com/user-attachments/assets/becc7a13-9e89-4a19-9df2-5fb1467a0137" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/talk-to-moshi)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/talk-to-moshi/blob/main/app.py)

- :speaking_head:{ .lg .middle } __Talk to Ultravox__

    ---

    Talk to Fixie.AI's audio-native Ultravox LLM with the transformers library.

    <video width=98% src="https://github.com/user-attachments/assets/e6e62482-518c-4021-9047-9da14cd82be1" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox/blob/main/app.py)

- :speaking_head:{ .lg .middle } __Talk to Llama 3.2 3b__

    ---

    Use the Lepton API to make Llama 3.2 talk back to you!

    <video width=98% src="https://github.com/user-attachments/assets/3ee37a6b-0892-45f5-b801-73188fdfad9a" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc/blob/main/app.py)

- :robot:{ .lg .middle } __Talk to Qwen2-Audio__

    ---

    Qwen2-Audio is a SOTA audio-to-text LLM developed by Alibaba.

    <video width=98% src="https://github.com/user-attachments/assets/c821ad86-44cc-4d0c-8dc4-8c02ad1e5dc8" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc/blob/main/app.py)

- :camera:{ .lg .middle } __Yolov10 Object Detection__

    ---

    Run the Yolov10 model on a user webcam stream in real time!

    <video width=98% src="https://github.com/user-attachments/assets/c90d8c9d-d2d5-462e-9e9b-af969f2ea73c" controls style="text-align: center"></video>

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/app.py)

- :camera:{ .lg .middle } __Video Object Detection with RT-DETR__

    ---

    Upload a video and stream out frames with detected objects (powered by the RT-DETR model).

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc/blob/main/app.py)

- :speaker:{ .lg .middle } __Text-to-Speech with Parler__

    ---

    Stream out audio generated by Parler TTS!

    [:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc)

    [:octicons-code-16: Code](https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc/blob/main/app.py)

</div>
165 docs/deployment.md Normal file
@@ -0,0 +1,165 @@
When deploying in a cloud environment (like Hugging Face Spaces, EC2, etc.), you need to set up a TURN server to relay the WebRTC traffic.

## Community Server

Hugging Face graciously provides a TURN server for the community.
In order to use it, you first need to create a Hugging Face account at [huggingface.co](https://huggingface.co/).

Then navigate to this [space](https://huggingface.co/spaces/freddyaboulton/turn-server-login) and follow the instructions on the page. You just have to click the "Log in" button and then the "Sign Up" button.



Then you can use the `get_hf_turn_credentials` helper to get your credentials:

```python
import gradio as gr
from gradio_webrtc import get_hf_turn_credentials, WebRTC

# Pass a valid access token for your Hugging Face account
# or set the HF_TOKEN environment variable
credentials = get_hf_turn_credentials(token=None)

with gr.Blocks() as demo:
    webrtc = WebRTC(rtc_configuration=credentials)
    ...

demo.launch()
```

!!! warning

    This is a shared resource so we make no latency/availability guarantees.
    For more robust options, see the Twilio and self-hosting options below.

## Twilio API

The easiest way to do this is to use a service like Twilio.

Create a **free** [account](https://login.twilio.com/u/signup) and then install the `twilio` package with pip (`pip install twilio`). You can then connect from the WebRTC component like so:

```python
import os

import gradio as gr
from twilio.rest import Client
from gradio_webrtc import WebRTC

account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")

client = Client(account_sid, auth_token)

token = client.tokens.create()

rtc_configuration = {
    "iceServers": token.ice_servers,
    "iceTransportPolicy": "relay",
}

with gr.Blocks() as demo:
    ...
    rtc = WebRTC(rtc_configuration=rtc_configuration, ...)
    ...
```

!!! tip "Automatic Login"

    You can log in automatically with the `get_twilio_turn_credentials` helper:

    ```python
    from gradio_webrtc import get_twilio_turn_credentials

    # Will automatically read the TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN
    # env variables but you can also pass in the tokens as parameters
    rtc_configuration = get_twilio_turn_credentials()
    ```

## Self Hosting

We have developed a script that can automatically deploy a TURN server to Amazon Web Services (AWS). You can follow the instructions [here](https://github.com/freddyaboulton/turn-server-deploy) or follow this guide.

### Prerequisites

Clone the following [repository](https://github.com/freddyaboulton/turn-server-deploy) and install the `aws` CLI if you have not done so already (`pip install awscli`).

Log into your AWS account and create an IAM user with the following permissions:

- [AWSCloudFormationFullAccess](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAWSCloudFormationFullAccess)
- [AmazonEC2FullAccess](https://us-east-1.console.aws.amazon.com/iam/home?region=us-east-1#/policies/details/arn%3Aaws%3Aiam%3A%3Aaws%3Apolicy%2FAmazonEC2FullAccess)

Create a key pair for this user and write down the "access key" and "secret access key". Then log into the AWS CLI with these credentials (`aws configure`).

Finally, create an EC2 key pair (replace `your-key-name` with the name you want to give it):

```bash
aws ec2 create-key-pair --key-name your-key-name --query 'KeyMaterial' --output text > your-key-name.pem
```

### Running the script

Open the `parameters.json` file and fill in the correct values for all the parameters (a sketch of the expected file follows this list):

- `KeyName`: The name of the key pair we just created, e.g. `your-key-name` (omit `.pem`).
- `TurnUserName`: The username needed to connect to the server.
- `TurnPassword`: The password needed to connect to the server.
- `InstanceType`: One of the following values: `t3.micro`, `t3.small`, `t3.medium`, `c4.large`, `c5.large`.
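
If you are unsure what the file should look like, the following sketch writes a `parameters.json` in the standard CloudFormation CLI parameter format (the one expected by `--parameters file://parameters.json`). The values shown are placeholders, and the repository's own sample file is authoritative:

```python
import json

# Placeholder values -- replace them with your own. This assumes the standard
# CloudFormation CLI parameter format (ParameterKey / ParameterValue pairs).
parameters = [
    {"ParameterKey": "KeyName", "ParameterValue": "your-key-name"},
    {"ParameterKey": "TurnUserName", "ParameterValue": "my-turn-user"},
    {"ParameterKey": "TurnPassword", "ParameterValue": "a-strong-password"},
    {"ParameterKey": "InstanceType", "ParameterValue": "t3.micro"},
]

with open("parameters.json", "w") as f:
    json.dump(parameters, f, indent=2)
```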
Then run the deployment script:

```bash
aws cloudformation create-stack \
  --stack-name turn-server \
  --template-body file://deployment.yml \
  --parameters file://parameters.json \
  --capabilities CAPABILITY_IAM
```

You can then wait for the stack to come up with:

```bash
aws cloudformation wait stack-create-complete \
  --stack-name turn-server
```

Next, grab your EC2 server's public IP with:

```bash
aws cloudformation describe-stacks \
  --stack-name turn-server \
  --query 'Stacks[0].Outputs' > server-info.json
```

The `server-info.json` file will have the server's public IP and public DNS:

```json
[
    {
        "OutputKey": "PublicIP",
        "OutputValue": "35.173.254.80",
        "Description": "Public IP address of the TURN server"
    },
    {
        "OutputKey": "PublicDNS",
        "OutputValue": "ec2-35-173-254-80.compute-1.amazonaws.com",
        "Description": "Public DNS name of the TURN server"
    }
]
```

Finally, you can connect to your EC2 server from the gradio WebRTC component via the `rtc_configuration` argument:

```python
import gradio as gr
from gradio_webrtc import WebRTC

rtc_configuration = {
    "iceServers": [
        {
            "urls": "turn:35.173.254.80:80",
            "username": "<my-username>",
            "credential": "<my-password>"
        },
    ]
}

with gr.Blocks() as demo:
    webrtc = WebRTC(rtc_configuration=rtc_configuration)
```
67 docs/faq.md Normal file
@@ -0,0 +1,67 @@
## Demo does not work when deploying to the cloud

Make sure you are using a TURN server. See [deployment](/deployment).

## Recorded input audio sounds muffled during output audio playback

By default, the microphone is [configured](https://github.com/freddyaboulton/gradio-webrtc/blob/903f1f70bd586f638ad3b5a3940c7a8ec70ad1f5/backend/gradio_webrtc/webrtc.py#L575) to do echo cancellation (`echoCancellation`).
This is what's causing the recorded audio to sound muffled when the streamed audio starts playing.
You can disable this via the `track_constraints` (see [advanced configuration](./advanced-configuration)) with the following code:

```python
audio = WebRTC(
    label="Stream",
    track_constraints={
        "echoCancellation": False,
        "noiseSuppression": {"exact": True},
        "autoGainControl": {"exact": True},
        "sampleRate": {"ideal": 24000},
        "sampleSize": {"ideal": 16},
        "channelCount": {"exact": 1},
    },
    rtc_configuration=None,
    mode="send-receive",
    modality="audio",
)
```

## How to raise errors in the UI

You can raise `WebRTCError` in order for an error message to show up on the user's screen. This is similar to how `gr.Error` works.

Here is a simple example:

```python
import time

import gradio as gr
import numpy as np
from pydub import AudioSegment

from gradio_webrtc import WebRTC, WebRTCError


def generation(num_steps):
    for _ in range(num_steps):
        segment = AudioSegment.from_file(
            "/Users/freddy/sources/gradio/demo/audio_debugger/cantina.wav"
        )
        yield (
            segment.frame_rate,
            np.array(segment.get_array_of_samples()).reshape(1, -1),
        )
        time.sleep(3.5)
    raise WebRTCError("This is a test error")


with gr.Blocks() as demo:
    audio = WebRTC(
        label="Stream",
        mode="receive",
        modality="audio",
    )
    num_steps = gr.Slider(
        label="Number of Steps",
        minimum=1,
        maximum=10,
        step=1,
        value=5,
    )
    button = gr.Button("Generate")

    audio.stream(
        fn=generation, inputs=[num_steps], outputs=[audio], trigger=button.click
    )

demo.launch()
```
30 docs/index.md Normal file
@@ -0,0 +1,30 @@
<h1 style='text-align: center; margin-bottom: 1rem; color: white;'> Gradio WebRTC ⚡️ </h1>

<div style="display: flex; flex-direction: row; justify-content: center">
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/gradio_webrtc">
<a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
</div>

<h3 style='text-align: center'>
Stream video and audio in real time with Gradio using WebRTC.
</h3>

## Installation

```bash
pip install gradio_webrtc
```

To use built-in pause detection (see [ReplyOnPause](/user-guide/#reply-on-pause)), install the `vad` extra:

```bash
pip install gradio_webrtc[vad]
```

For stop word detection (see [ReplyOnStopWords](/user-guide/#reply-on-stopwords)), install the `stopword` extra:

```bash
pip install gradio_webrtc[stopword]
```

## Examples

See the [cookbook](/cookbook).
505 docs/user-guide.md Normal file
@@ -0,0 +1,505 @@
# User Guide

To get started with WebRTC streams, all that's needed is to import the `WebRTC` component from this package and implement its `stream` event.

This page will show how to do so with simple code examples.
For complete implementations of common tasks, see the [cookbook](/cookbook).

## Audio Streaming

### Reply on Pause

Typically, you want to run an AI model that generates audio when the user has stopped speaking. This can be done by wrapping a python generator with the `ReplyOnPause` class
and passing it to the `stream` event of the `WebRTC` component.

=== "Code"
    ``` py title="ReplyOnPause"
    import gradio as gr
    import numpy as np
    from gradio_webrtc import WebRTC, ReplyOnPause

    def response(audio: tuple[int, np.ndarray]): # (1)
        """This function must yield audio frames"""
        ...
        for numpy_array in generated_audio:
            yield (sampling_rate, numpy_array, "mono") # (2)


    with gr.Blocks() as demo:
        gr.HTML(
        """
        <h1 style='text-align: center'>
        Chat (Powered by WebRTC ⚡️)
        </h1>
        """
        )
        with gr.Column():
            with gr.Group():
                audio = WebRTC(
                    mode="send-receive", # (3)
                    modality="audio",
                )
            audio.stream(fn=ReplyOnPause(response),
                         inputs=[audio], outputs=[audio], # (4)
                         time_limit=60) # (5)

    demo.launch()
    ```

    1. The python generator will receive the **entire** audio up until the user stopped. It will be a tuple of the form (sampling_rate, numpy array of audio). The array will have a shape of (1, num_samples). You can also pass in additional input components.

    2. The generator must yield audio chunks as a tuple of (sampling_rate, numpy audio array). Each numpy audio array must have a shape of (1, num_samples).

    3. The `mode` and `modality` arguments must be set to `"send-receive"` and `"audio"`.

    4. The `WebRTC` component must be the first input and output component.

    5. Set a `time_limit` to control how long a conversation will last. If the `concurrency_count` is 1 (default), only one conversation will be handled at a time.

=== "Notes"
    1. The python generator will receive the **entire** audio up until the user stopped. It will be a tuple of the form (sampling_rate, numpy array of audio). The array will have a shape of (1, num_samples). You can also pass in additional input components.

    2. The generator must yield audio chunks as a tuple of (sampling_rate, numpy audio arrays). Each numpy audio array must have a shape of (1, num_samples).

    3. The `mode` and `modality` arguments must be set to `"send-receive"` and `"audio"`.

    4. The `WebRTC` component must be the first input and output component.

    5. Set a `time_limit` to control how long a conversation will last. If the `concurrency_count` is 1 (default), only one conversation will be handled at a time.

### Reply On Stopwords

You can configure your AI model to run whenever a set of "stop words" are detected, like "Hey Siri" or "computer", with the `ReplyOnStopWords` class.

The API is similar to `ReplyOnPause` with the addition of a `stop_words` parameter.

=== "Code"
    ``` py title="ReplyOnStopWords"
    import gradio as gr
    import numpy as np
    from gradio_webrtc import WebRTC, ReplyOnStopWords

    def response(audio: tuple[int, np.ndarray]):
        """This function must yield audio frames"""
        ...
        for numpy_array in generated_audio:
            yield (sampling_rate, numpy_array, "mono")


    with gr.Blocks() as demo:
        gr.HTML(
        """
        <h1 style='text-align: center'>
        Chat (Powered by WebRTC ⚡️)
        </h1>
        """
        )
        with gr.Column():
            with gr.Group():
                webrtc = WebRTC(
                    mode="send",
                    modality="audio",
                )
            webrtc.stream(ReplyOnStopWords(response,
                                           input_sample_rate=16000,
                                           stop_words=["computer"]), # (1)
                          # history and code are other gradio components defined elsewhere (not shown)
                          inputs=[webrtc, history, code],
                          outputs=[webrtc], time_limit=90,
                          concurrency_limit=10)

    demo.launch()
    ```

    1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".

=== "Notes"
    1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".

### Stream Handler

`ReplyOnPause` is an implementation of a `StreamHandler`. The `StreamHandler` is a low-level
abstraction that gives you arbitrary control over how the input audio stream and output audio stream are created. The following example echoes back the user audio.

=== "Code"
    ``` py title="Stream Handler"
    import gradio as gr
    import numpy as np
    from gradio_webrtc import WebRTC, StreamHandler
    from queue import Queue

    class EchoHandler(StreamHandler):
        def __init__(self) -> None:
            super().__init__()
            self.queue = Queue()

        def receive(self, frame: tuple[int, np.ndarray]) -> None: # (1)
            self.queue.put(frame)

        def emit(self): # (2)
            return self.queue.get()

        def copy(self) -> StreamHandler:
            return EchoHandler()


    with gr.Blocks() as demo:
        with gr.Column():
            with gr.Group():
                audio = WebRTC(
                    mode="send-receive",
                    modality="audio",
                )

            audio.stream(fn=EchoHandler(),
                         inputs=[audio], outputs=[audio],
                         time_limit=15)

    demo.launch()
    ```

    1. The `StreamHandler` class implements three methods: `receive`, `emit` and `copy`. The `receive` method is called when a new frame is received from the client, and the `emit` method returns the next frame to send to the client. The `copy` method is called at the beginning of the stream to ensure each user has a unique stream handler.
    2. The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return `None`.

=== "Notes"
    1. The `StreamHandler` class implements three methods: `receive`, `emit` and `copy`. The `receive` method is called when a new frame is received from the client, and the `emit` method returns the next frame to send to the client. The `copy` method is called at the beginning of the stream to ensure each user has a unique stream handler.
    2. The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return `None`.

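Note (2) above says that `emit` should not block; the `EchoHandler` uses a blocking `queue.get()` for brevity. A minimal non-blocking variant (an illustrative sketch, not part of the library) could look like this:

``` python
from queue import Empty, Queue

import numpy as np

from gradio_webrtc import StreamHandler


class NonBlockingEchoHandler(StreamHandler):
    def __init__(self) -> None:
        super().__init__()
        self.queue = Queue()

    def receive(self, frame: tuple[int, np.ndarray]) -> None:
        self.queue.put(frame)

    def emit(self):
        # Return None when no frame is ready instead of blocking.
        try:
            return self.queue.get_nowait()
        except Empty:
            return None

    def copy(self) -> StreamHandler:
        return NonBlockingEchoHandler()
```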
### Async Stream Handlers

It is also possible to create asynchronous stream handlers. This is very convenient for accessing async APIs from major LLM developers, like Google and OpenAI. The main difference is that `receive` and `emit` are now defined with `async def`.

Here is a complete example of using `AsyncStreamHandler` with the Google Gemini real time API:

=== "Code"
    ``` py title="AsyncStreamHandler"
    import asyncio
    import base64
    import logging
    import os

    import gradio as gr
    import numpy as np
    from google import genai
    from gradio_webrtc import (
        AsyncStreamHandler,
        WebRTC,
        async_aggregate_bytes_to_16bit,
        get_twilio_turn_credentials,
    )


    class GeminiHandler(AsyncStreamHandler):
        def __init__(
            self, expected_layout="mono", output_sample_rate=24000, output_frame_size=480
        ) -> None:
            super().__init__(
                expected_layout,
                output_sample_rate,
                output_frame_size,
                input_sample_rate=16000,
            )
            self.client: genai.Client | None = None
            self.input_queue = asyncio.Queue()
            self.output_queue = asyncio.Queue()
            self.quit = asyncio.Event()

        def copy(self) -> "GeminiHandler":
            return GeminiHandler(
                expected_layout=self.expected_layout,
                output_sample_rate=self.output_sample_rate,
                output_frame_size=self.output_frame_size,
            )

        async def stream(self):
            while not self.quit.is_set():
                audio = await self.input_queue.get()
                yield audio

        async def connect(self, api_key: str):
            client = genai.Client(api_key=api_key, http_options={"api_version": "v1alpha"})
            config = {"response_modalities": ["AUDIO"]}
            async with client.aio.live.connect(
                model="gemini-2.0-flash-exp", config=config
            ) as session:
                async for audio in session.start_stream(
                    stream=self.stream(), mime_type="audio/pcm"
                ):
                    if audio.data:
                        yield audio.data

        async def receive(self, frame: tuple[int, np.ndarray]) -> None:
            _, array = frame
            array = array.squeeze()
            audio_message = base64.b64encode(array.tobytes()).decode("UTF-8")
            self.input_queue.put_nowait(audio_message)

        async def generator(self):
            async for audio_response in async_aggregate_bytes_to_16bit(
                self.connect(api_key=self.latest_args[1])
            ):
                self.output_queue.put_nowait(audio_response)

        async def emit(self):
            if not self.args_set.is_set():
                await self.wait_for_args()
                asyncio.create_task(self.generator())

            array = await self.output_queue.get()
            return (self.output_sample_rate, array)

        def shutdown(self) -> None:
            self.quit.set()


    with gr.Blocks() as demo:
        gr.HTML(
            """
            <div style='text-align: center'>
                <h1>Gen AI SDK Voice Chat</h1>
                <p>Speak with Gemini using real-time audio streaming</p>
                <p>Get an API Key <a href="https://support.google.com/googleapi/answer/6158862?hl=en">here</a></p>
            </div>
            """
        )
        with gr.Row() as api_key_row:
            api_key = gr.Textbox(
                label="API Key",
                placeholder="Enter your API Key",
                value=os.getenv("GOOGLE_API_KEY", ""),
                type="password",
            )
        with gr.Row(visible=False) as row:
            webrtc = WebRTC(
                label="Audio",
                modality="audio",
                mode="send-receive",
                rtc_configuration=get_twilio_turn_credentials(),
                pulse_color="rgb(35, 157, 225)",
                icon_button_color="rgb(35, 157, 225)",
                icon="https://www.gstatic.com/lamda/images/gemini_favicon_f069958c85030456e93de685481c559f160ea06b.png",
            )

        webrtc.stream(
            GeminiHandler(),
            inputs=[webrtc, api_key],
            outputs=[webrtc],
            time_limit=90,
            concurrency_limit=2,
        )
        api_key.submit(
            lambda: (gr.update(visible=False), gr.update(visible=True)),
            None,
            [api_key_row, row],
        )

    demo.launch()
    ```

### Accessing Other Component Values from a StreamHandler

In the Gemini demo above, you'll notice that we have the user input their Google API key. This is stored in a `gr.Textbox` component.
We can access the value of this component via the `latest_args` property of the `StreamHandler`. The `latest_args` is a list storing the values of each component in the WebRTC `stream` event's `inputs` parameter. The value of the `WebRTC` component is at index 0 and it's always the dummy string `__webrtc_value__`.

In order to fetch the latest value from the user, however, we `await self.wait_for_args()`. In a synchronous `StreamHandler`, we would call `self.wait_for_args_sync()` instead.
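
Condensed from the Gemini handler above, the relevant part of an async handler looks roughly like this (a sketch of the pattern, not a complete handler):

``` python
async def emit(self):
    # Wait (once) until the values of the `stream` event's `inputs` have arrived.
    if not self.args_set.is_set():
        await self.wait_for_args()
    # latest_args[0] is always the dummy string "__webrtc_value__";
    # latest_args[1] is the first additional input (the api_key Textbox above).
    api_key = self.latest_args[1]
    ...
```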
### Server-To-Client Only

To stream only from the server to the client, implement a python generator and pass it to the component's `stream` event. The stream event must also specify a `trigger` corresponding to a UI interaction that starts the stream. In this case, it's a button click.

=== "Code"

    ``` py title="Server-To-Client"
    import gradio as gr
    import numpy as np
    from gradio_webrtc import WebRTC
    from pydub import AudioSegment

    def generation(num_steps):
        for _ in range(num_steps):
            segment = AudioSegment.from_file("audio_file.wav")
            array = np.array(segment.get_array_of_samples()).reshape(1, -1)
            yield (segment.frame_rate, array)

    with gr.Blocks() as demo:
        audio = WebRTC(label="Stream", mode="receive", # (1)
                       modality="audio")
        num_steps = gr.Slider(label="Number of Steps", minimum=1,
                              maximum=10, step=1, value=5)
        button = gr.Button("Generate")

        audio.stream(
            fn=generation, inputs=[num_steps], outputs=[audio],
            trigger=button.click # (2)
        )
    ```

    1. Set `mode="receive"` to only receive audio from the server.
    2. The `stream` event must take a `trigger` that corresponds to the gradio event that starts the stream. In this case, it's the button click.

=== "Notes"
    1. Set `mode="receive"` to only receive audio from the server.
    2. The `stream` event must take a `trigger` that corresponds to the gradio event that starts the stream. In this case, it's the button click.

## Video Streaming

### Input/Output Streaming

Set up a video Input/Output stream to continuously receive webcam frames from the user and run an arbitrary python function to return a modified frame.

=== "Code"

    ``` py title="Input/Output Streaming"
    import gradio as gr
    from gradio_webrtc import WebRTC


    def detection(image, conf_threshold=0.3): # (1)
        # ... your detection code here ...
        return modified_frame # (2)


    with gr.Blocks() as demo:
        image = WebRTC(label="Stream", mode="send-receive", modality="video") # (3)
        conf_threshold = gr.Slider(
            label="Confidence Threshold",
            minimum=0.0,
            maximum=1.0,
            step=0.05,
            value=0.30,
        )
        image.stream(
            fn=detection,
            inputs=[image, conf_threshold], # (4)
            outputs=[image], time_limit=10
        )

    if __name__ == "__main__":
        demo.launch()
    ```

    1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
    2. The function must return a numpy array. It can take arbitrary values from other components.
    3. Set the `modality="video"` and `mode="send-receive"`.
    4. The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.

=== "Notes"
    1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
    2. The function must return a numpy array. It can take arbitrary values from other components.
    3. Set the `modality="video"` and `mode="send-receive"`.
    4. The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.

### Server-to-Client Only

Set up a server-to-client stream to stream video from an arbitrary user interaction.

=== "Code"
    ``` py title="Server-To-Client"
    import gradio as gr
    from gradio_webrtc import WebRTC
    import cv2

    def generation():
        url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
        cap = cv2.VideoCapture(url)
        iterating = True
        while iterating:
            iterating, frame = cap.read()
            yield frame # (1)

    with gr.Blocks() as demo:
        output_video = WebRTC(label="Video Stream", mode="receive", # (2)
                              modality="video")
        button = gr.Button("Start", variant="primary")
        output_video.stream(
            fn=generation, inputs=None, outputs=[output_video],
            trigger=button.click # (3)
        )
    demo.launch()
    ```

    1. The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
    2. Set `mode="receive"` to only receive video from the server.
    3. The `trigger` parameter is the gradio event that will trigger the stream. In this case, the button click event.

=== "Notes"
    1. The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
    2. Set `mode="receive"` to only receive video from the server.
    3. The `trigger` parameter is the gradio event that will trigger the stream. In this case, the button click event.

## Audio-Video Streaming

You can stream audio and video simultaneously to/from a server using `AudioVideoStreamHandler` or `AsyncAudioVideoStreamHandler`.
They are identical to the audio `StreamHandler`s with the addition of `video_receive` and `video_emit` methods, which take and return a `numpy` array, respectively.

Here is an example of the video handling functions for connecting with the Gemini multimodal API. In this case, we simply reflect the webcam feed back to the user, but every second we'll send the latest webcam frame (and an additional image component) to the Gemini server.

Please see the "Gemini Audio Video Chat" example in the [cookbook](/cookbook) for the complete code.

``` python title="Async Gemini Video Handling"

async def video_receive(self, frame: np.ndarray):
    """Send video frames to the server"""
    if self.session:
        # send image every 1 second
        # otherwise we flood the API
        if time.time() - self.last_frame_time > 1:
            self.last_frame_time = time.time()
            await self.session.send(encode_image(frame))
            if self.latest_args[2] is not None:
                await self.session.send(encode_image(self.latest_args[2]))
    self.video_queue.put_nowait(frame)

async def video_emit(self) -> VideoEmitType:
    """Return video frames to the client"""
    return await self.video_queue.get()
```

## Additional Outputs

In order to modify other components from within the WebRTC stream, you must yield an instance of `AdditionalOutputs` and add an `on_additional_outputs` event to the `WebRTC` component.

This is common for displaying a multimodal text/audio conversation in a Chatbot UI.

=== "Code"

    ``` py title="Additional Outputs"
    import gradio as gr
    import numpy as np
    from gradio_webrtc import AdditionalOutputs, ReplyOnPause, WebRTC

    def transcribe(audio: tuple[int, np.ndarray],
                   transformers_convo: list[dict],
                   gradio_convo: list[dict]):
        # `model` and `inputs` come from your own speech-to-text pipeline (not shown)
        response = model.generate(**inputs, max_length=256)
        transformers_convo.append({"role": "assistant", "content": response})
        gradio_convo.append({"role": "assistant", "content": response})
        yield AdditionalOutputs(transformers_convo, gradio_convo) # (1)


    with gr.Blocks() as demo:
        gr.HTML(
        """
        <h1 style='text-align: center'>
        Talk to Qwen2Audio (Powered by WebRTC ⚡️)
        </h1>
        """
        )
        transformers_convo = gr.State(value=[])
        with gr.Row():
            with gr.Column():
                audio = WebRTC(
                    label="Stream",
                    mode="send", # (2)
                    modality="audio",
                )
            with gr.Column():
                transcript = gr.Chatbot(label="transcript", type="messages")

        audio.stream(ReplyOnPause(transcribe),
                     inputs=[audio, transformers_convo, transcript],
                     outputs=[audio], time_limit=90)
        audio.on_additional_outputs(lambda s, a: (s, a), # (3)
                                    outputs=[transformers_convo, transcript],
                                    queue=False, show_progress="hidden")
    demo.launch()
    ```

    1. Pass your data to `AdditionalOutputs` and yield it.
    2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
    3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.

=== "Notes"
    1. Pass your data to `AdditionalOutputs` and yield it.
    2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
    3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.

54 docs/utils.md Normal file
@@ -0,0 +1,54 @@
# Utils

## `audio_to_bytes`

Convert an audio tuple containing sample rate and numpy array data into bytes.
Useful for sending data to external APIs from a `ReplyOnPause` handler.

Parameters
```
audio : tuple[int, np.ndarray]
    A tuple containing:
    - sample_rate (int): The audio sample rate in Hz
    - data (np.ndarray): The audio data as a numpy array
```

Returns
```
bytes
    The audio data encoded as bytes, suitable for transmission or storage
```

Example
```python
>>> sample_rate = 44100
>>> audio_data = np.array([0.1, -0.2, 0.3])  # Example audio samples
>>> audio_tuple = (sample_rate, audio_data)
>>> audio_bytes = audio_to_bytes(audio_tuple)
```

## `audio_to_file`

Save an audio tuple containing sample rate and numpy array data to a file.

Parameters
```
audio : tuple[int, np.ndarray]
    A tuple containing:
    - sample_rate (int): The audio sample rate in Hz
    - data (np.ndarray): The audio data as a numpy array
```

Returns
```
str
    The path to the saved audio file
```

Example
```python
>>> sample_rate = 44100
>>> audio_data = np.array([0.1, -0.2, 0.3])  # Example audio samples
>>> audio_tuple = (sample_rate, audio_data)
>>> file_path = audio_to_file(audio_tuple)
>>> print(f"Audio saved to: {file_path}")
```