Video chat temp (#1)

* Visual update

---------

Co-authored-by: 杍超 <huangbinchao.hbc@alibaba-inc.com>
Co-authored-by: 款冬 <neil.xh@alibaba-inc.com>
Author: bingochaos
Date: 2025-02-14 15:53:54 +08:00
Committed by: GitHub
Parent: d567721603
Commit: 2b19ee493e
23 changed files with 1844 additions and 770 deletions

3
.gitignore vendored

@@ -1,5 +1,6 @@
.eggs/
dist/
dist/*.gz
*.pyc
__pycache__/
*.py[cod]

468
README.md

@@ -7,25 +7,22 @@
</div>
<h3 style='text-align: center'>
Stream video and audio in real time with Gradio using WebRTC.
![picture-in-picture](docs/image.png)
![side-by-side](docs/image2.png)
</h3>
## Installation
```bash
pip install gradio_webrtc
gradio cc install
gradio cc build --no-generate-docs
```
To use built-in pause detection (see [ReplyOnPause](https://freddyaboulton.github.io/gradio-webrtc//user-guide/#reply-on-pause)), install the `vad` extra:
```bash
pip install gradio_webrtc[vad]
```
For stop word detection (see [ReplyOnStopWords](https://freddyaboulton.github.io/gradio-webrtc//user-guide/#reply-on-stopwords)), install the `stopword` extra:
```bash
pip install gradio_webrtc[stopword]
pip install dist/gradio_webrtc-0.0.30.dev0-py3-none-any.whl
```
## Docs
@@ -34,385 +31,113 @@ https://freddyaboulton.github.io/gradio-webrtc/
## Examples
<table>
<tr>
<td width="50%">
<h3>🗣️ Audio Input/Output with mini-omni2</h3>
<p>Build a GPT-4o like experience with mini-omni2, an audio-native LLM.</p>
<video width="100%" src="https://github.com/user-attachments/assets/58c06523-fc38-4f5f-a4ba-a02a28e7fa9e" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Claude</h3>
<p>Use the Anthropic and Play.Ht APIs to have an audio conversation with Claude.</p>
<video width="100%" src="https://github.com/user-attachments/assets/650bc492-798e-4995-8cef-159e1cfc2185" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-claude">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-claude/blob/main/app.py">Code</a>
</p>
</td>
</tr>
```python
import asyncio
import base64
from io import BytesIO
<tr>
<td width="50%">
<h3>🗣️ Kyutai Moshi</h3>
<p>Kyutai's moshi is a novel speech-to-speech model for modeling human conversations.</p>
<video width="100%" src="https://github.com/user-attachments/assets/becc7a13-9e89-4a19-9df2-5fb1467a0137" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Hello Llama: Stop Word Detection</h3>
<p>A code editor built with Llama 3.3 70b that is triggered by the phrase "Hello Llama". Build a Siri-like coding assistant in 100 lines of code!</p>
<video width="100%" src="https://github.com/user-attachments/assets/3e10cb15-ff1b-4b17-b141-ff0ad852e613" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
</tr>
<tr>
<td width="50%">
<h3>🤖 Llama Code Editor</h3>
<p>Create and edit HTML pages with just your voice! Powered by SambaNova systems.</p>
<video width="100%" src="https://github.com/user-attachments/assets/a09647f1-33e1-4154-a5a3-ffefda8a736a" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Ultravox</h3>
<p>Talk to Fixie.AI's audio-native Ultravox LLM with the transformers library.</p>
<video width="100%" src="https://github.com/user-attachments/assets/e6e62482-518c-4021-9047-9da14cd82be1" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox/blob/main/app.py">Code</a>
</p>
</td>
</tr>
<tr>
<td width="50%">
<h3>🗣️ Talk to Llama 3.2 3b</h3>
<p>Use the Lepton API to make Llama 3.2 talk back to you!</p>
<video width="100%" src="https://github.com/user-attachments/assets/3ee37a6b-0892-45f5-b801-73188fdfad9a" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🤖 Talk to Qwen2-Audio</h3>
<p>Qwen2-Audio is a SOTA audio-to-text LLM developed by Alibaba.</p>
<video width="100%" src="https://github.com/user-attachments/assets/c821ad86-44cc-4d0c-8dc4-8c02ad1e5dc8" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc/blob/main/app.py">Code</a>
</p>
</td>
</tr>
<tr>
<td width="50%">
<h3>📷 Yolov10 Object Detection</h3>
<p>Run the Yolov10 model on a user webcam stream in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/c90d8c9d-d2d5-462e-9e9b-af969f2ea73c" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>📷 Video Object Detection with RT-DETR</h3>
<p>Upload a video and stream out frames with detected objects (powered by the RT-DETR model).</p>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc/blob/main/app.py">Code</a>
</p>
</td>
</tr>
<tr>
<td width="50%">
<h3>🔊 Text-to-Speech with Parler</h3>
<p>Stream out audio generated by Parler TTS!</p>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
</td>
</tr>
</table>
## Usage
This is a shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).
To get started with WebRTC streams, all that's needed is to import the `WebRTC` component from this package and implement its `stream` event.
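For orientation, here is a minimal sketch (not taken from the official guide) of that wiring: a video stream that simply flips webcam frames and sends them back.
```py
import gradio as gr
from gradio_webrtc import WebRTC

def flip(frame):
    # frame is a (height, width, 3) RGB numpy array; return an array of the same shape
    return frame[::-1, :, :]

with gr.Blocks() as demo:
    stream = WebRTC(mode="send-receive", modality="video")
    stream.stream(fn=flip, inputs=[stream], outputs=[stream], time_limit=30)

demo.launch()
```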
### Reply on Pause
Typically, you want to run an AI model that generates audio when the user has stopped speaking. This can be done by wrapping a python generator with the `ReplyOnPause` class
and passing it to the `stream` event of the `WebRTC` component.
```py
import gradio as gr
from gradio_webrtc import WebRTC, ReplyOnPause
def response(audio: tuple[int, np.ndarray]): # (1)
"""This function must yield audio frames"""
...
for numpy_array in generated_audio:
yield (sampling_rate, numpy_array, "mono") # (2)
import numpy as np
from gradio_webrtc import (
AsyncAudioVideoStreamHandler,
WebRTC,
VideoEmitType,
AudioEmitType,
)
from PIL import Image
with gr.Blocks() as demo:
gr.HTML(
"""
<h1 style='text-align: center'>
Chat (Powered by WebRTC ⚡️)
</h1>
"""
)
with gr.Column():
with gr.Group():
audio = WebRTC(
mode="send-receive", # (3)
modality="audio",
)
audio.stream(fn=ReplyOnPause(response),
inputs=[audio], outputs=[audio], # (4)
time_limit=60) # (5)
demo.launch()
```
1. The python generator will receive the **entire** audio up until the user stopped speaking. It will be a tuple of the form (sampling_rate, numpy array of audio). The array will have a shape of (1, num_samples). You can also pass in additional input components.
2. The generator must yield audio chunks as a tuple of (sampling_rate, numpy audio array). Each numpy audio array must have a shape of (1, num_samples). A minimal echo sketch is shown below.
3. The `mode` and `modality` arguments must be set to `"send-receive"` and `"audio"`.
4. The `WebRTC` component must be the first input and output component.
5. Set a `time_limit` to control how long a conversation will last. If the `concurrency_limit` is 1 (default), only one conversation will be handled at a time.
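For illustration only, here is a hypothetical `response` generator (no model involved) that echoes the user's utterance back in roughly one-second chunks, following the input/output contract above:
```py
import numpy as np

def response(audio: tuple[int, np.ndarray]):
    sampling_rate, array = audio  # array has shape (1, num_samples)
    chunk = sampling_rate         # about one second of samples per chunk
    for start in range(0, array.shape[1], chunk):
        yield (sampling_rate, array[:, start:start + chunk], "mono")
```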
def encode_audio(data: np.ndarray) -> dict:
"""Encode Audio data to send to the server"""
return {"mime_type": "audio/pcm", "data": base64.b64encode(data.tobytes()).decode("UTF-8")}
### Reply On Stopwords
You can configure your AI model to run whenever a set of "stop words" are detected, like "Hey Siri" or "computer", with the `ReplyOnStopWords` class.
The API is similar to `ReplyOnPause` with the addition of a `stop_words` parameter.
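As a minimal, hedged sketch of that wiring (assuming a `generate` generator that yields audio chunks like `response` above):
```py
import gradio as gr
from gradio_webrtc import WebRTC, ReplyOnStopWords

def generate(audio):
    """Placeholder: must yield (sampling_rate, numpy_array) audio chunks, as in `response` above."""
    ...

with gr.Blocks() as demo:
    webrtc = WebRTC(mode="send-receive", modality="audio")
    webrtc.stream(
        ReplyOnStopWords(generate, input_sample_rate=16000, stop_words=["computer"]),
        inputs=[webrtc],
        outputs=[webrtc],
        time_limit=90,
    )

demo.launch()
```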
def encode_image(data: np.ndarray) -> dict:
with BytesIO() as output_bytes:
pil_image = Image.fromarray(data)
pil_image.save(output_bytes, "JPEG")
bytes_data = output_bytes.getvalue()
base64_str = str(base64.b64encode(bytes_data), "utf-8")
return {"mime_type": "image/jpeg", "data": base64_str}
```py
import gradio as gr
from gradio_webrtc import WebRTC, ReplyOnPause
class VideoChatHandler(AsyncAudioVideoStreamHandler):
def __init__(
self, expected_layout="mono", output_sample_rate=24000, output_frame_size=480
) -> None:
super().__init__(
expected_layout,
output_sample_rate,
output_frame_size,
input_sample_rate=24000,
)
self.audio_queue = asyncio.Queue()
self.video_queue = asyncio.Queue()
self.quit = asyncio.Event()
self.session = None
self.last_frame_time = 0
def response(audio: tuple[int, np.ndarray]):
"""This function must yield audio frames"""
...
for numpy_array in generated_audio:
yield (sampling_rate, numpy_array, "mono")
def copy(self) -> "VideoChatHandler":
return VideoChatHandler(
expected_layout=self.expected_layout,
output_sample_rate=self.output_sample_rate,
output_frame_size=self.output_frame_size,
)
async def video_receive(self, frame: np.ndarray):
newFrame = np.array(frame)
newFrame[0:, :, 0] = 255 - newFrame[0:, :, 0]
self.video_queue.put_nowait(newFrame)
with gr.Blocks() as demo:
gr.HTML(
"""
<h1 style='text-align: center'>
Chat (Powered by WebRTC ⚡️)
</h1>
"""
)
with gr.Column():
with gr.Group():
audio = WebRTC(
mode="send",
modality="audio",
)
webrtc.stream(ReplyOnStopWords(generate,
input_sample_rate=16000,
stop_words=["computer"]), # (1)
inputs=[webrtc, history, code],
outputs=[webrtc], time_limit=90,
concurrency_limit=10)
async def video_emit(self) -> VideoEmitType:
return await self.video_queue.get()
demo.launch()
```
async def receive(self, frame: tuple[int, np.ndarray]) -> None:
frame_size, array = frame
self.audio_queue.put_nowait(array)
1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".
async def emit(self) -> AudioEmitType:
if not self.args_set.is_set():
await self.wait_for_args()
array = await self.audio_queue.get()
return (self.output_sample_rate, array)
### Audio Server-to-Client
To stream only from the server to the client, implement a python generator and pass it to the component's `stream` event. The stream event must also specify a `trigger` corresponding to a UI interaction that starts the stream. In this case, it's a button click.
def shutdown(self) -> None:
self.quit.set()
self.connection = None
self.args_set.clear()
self.quit.clear()
```py
import gradio as gr
from gradio_webrtc import WebRTC
from pydub import AudioSegment
css = """
footer {
display: none !important;
}
"""
def generation(num_steps):
for _ in range(num_steps):
segment = AudioSegment.from_file("audio_file.wav")
array = np.array(segment.get_array_of_samples()).reshape(1, -1)
yield (segment.frame_rate, array)
with gr.Blocks(css=css) as demo:
webrtc = WebRTC(
label="Video Chat",
modality="audio-video",
mode="send-receive",
video_chat=True,
elem_id="video-source",
)
webrtc.stream(
VideoChatHandler(),
inputs=[webrtc],
outputs=[webrtc],
time_limit=150,
concurrency_limit=2,
)
with gr.Blocks() as demo:
audio = WebRTC(label="Stream", mode="receive", # (1)
modality="audio")
num_steps = gr.Slider(label="Number of Steps", minimum=1,
maximum=10, step=1, value=5)
button = gr.Button("Generate")
audio.stream(
fn=generation, inputs=[num_steps], outputs=[audio],
trigger=button.click # (2)
)
```
1. Set `mode="receive"` to only receive audio from the server.
2. The `stream` event must take a `trigger` that corresponds to the gradio event that starts the stream. In this case, it's the button click.
### Video Input/Output Streaming
Set up a video Input/Output stream to continuously receive webcam frames from the user and run an arbitrary python function to return a modified frame.
```py
import gradio as gr
from gradio_webrtc import WebRTC
def detection(image, conf_threshold=0.3): # (1)
... your detection code here ...
return modified_frame # (2)
with gr.Blocks() as demo:
image = WebRTC(label="Stream", mode="send-receive", modality="video") # (3)
conf_threshold = gr.Slider(
label="Confidence Threshold",
minimum=0.0,
maximum=1.0,
step=0.05,
value=0.30,
)
image.stream(
fn=detection,
inputs=[image, conf_threshold], # (4)
outputs=[image], time_limit=10
)
if __name__ == "__main__":
demo.launch()
```
1. The webcam frame will be represented as a numpy array of shape (height, width, 3) with the color channels in RGB order.
2. The function must return a numpy array of the same format. It can take arbitrary values from other components. A minimal stand-in is sketched below.
3. Set `modality="video"` and `mode="send-receive"`
4. The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.
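For illustration, a hypothetical stand-in for `detection` that satisfies this contract without any real model (it only dims the frame, scaled by the slider value):
```py
import numpy as np

def detection(image: np.ndarray, conf_threshold: float = 0.3) -> np.ndarray:
    # image is (height, width, 3) in RGB; return an array of the same shape
    dimmed = image.astype(np.float32) * (1.0 - conf_threshold)
    return dimmed.astype(np.uint8)
```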
### Server-to-Client Only
Set up a server-to-client stream to stream video, triggered by an arbitrary user interaction.
```py
import gradio as gr
from gradio_webrtc import WebRTC
import cv2
def generation():
url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
cap = cv2.VideoCapture(url)
iterating = True
while iterating:
iterating, frame = cap.read()
yield frame # (1)
with gr.Blocks() as demo:
output_video = WebRTC(label="Video Stream", mode="receive", # (2)
modality="video")
button = gr.Button("Start", variant="primary")
output_video.stream(
fn=generation, inputs=None, outputs=[output_video],
trigger=button.click # (3)
)
demo.launch()
```
1. The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
2. Set `mode="receive"` to only receive audio from the server.
3. The `trigger` parameter is the gradio event that will trigger the stream. In this case, the button click event.
### Additional Outputs
In order to modify other components from within the WebRTC stream, you must yield an instance of `AdditionalOutputs` and add an `on_additional_outputs` event to the `WebRTC` component.
This is common for displaying a multimodal text/audio conversation in a Chatbot UI.
``` py title="Additional Outputs"
from gradio_webrtc import AdditionalOutputs, WebRTC
def transcribe(audio: tuple[int, np.ndarray],
transformers_convo: list[dict],
gradio_convo: list[dict]):
response = model.generate(**inputs, max_length=256)
transformers_convo.append({"role": "assistant", "content": response})
gradio_convo.append({"role": "assistant", "content": response})
yield AdditionalOutputs(transformers_convo, gradio_convo) # (1)
with gr.Blocks() as demo:
gr.HTML(
"""
<h1 style='text-align: center'>
Talk to Qwen2Audio (Powered by WebRTC ⚡️)
</h1>
"""
)
transformers_convo = gr.State(value=[])
with gr.Row():
with gr.Column():
audio = WebRTC(
label="Stream",
mode="send", # (2)
modality="audio",
)
with gr.Column():
transcript = gr.Chatbot(label="transcript", type="messages")
audio.stream(ReplyOnPause(transcribe),
inputs=[audio, transformers_convo, transcript],
outputs=[audio], time_limit=90)
audio.on_additional_outputs(lambda s,a: (s,a), # (3)
outputs=[transformers_convo, transcript],
queue=False, show_progress="hidden")
demo.launch()
```
1. Pass your data to `AdditionalOutputs` and yield it.
2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.
=== "Notes"
1. Pass your data to `AdditionalOutputs` and yield it.
2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.
## Deployment
When deploying in a cloud environment (like Hugging Face Spaces, EC2, etc), you need to set up a TURN server to relay the WebRTC traffic.
@@ -438,4 +163,13 @@ with gr.Blocks() as demo:
...
rtc = WebRTC(rtc_configuration=rtc_configuration, ...)
...
```
```
## Contributors
[csxh47](https://github.com/xhup)
[bingochaos](https://github.com/bingochaos)
[sudowind](https://github.com/sudowind)
[emililykimura](https://github.com/emililykimura)
[Tony](https://github.com/raidios)
[Cheng Gang](https://github.com/lovepope)


@@ -704,6 +704,7 @@ class WebRTC(Component):
mode: Literal["send-receive", "receive", "send"] = "send-receive",
modality: Literal["video", "audio", "audio-video"] = "video",
show_local_video: Literal['picture-in-picture', 'left-right', None] = None,
video_chat: bool = True,
rtp_params: dict[str, Any] | None = None,
icon: str | None = None,
icon_button_color: str | None = None,
@@ -751,6 +752,7 @@ class WebRTC(Component):
self.mode = mode
self.modality = modality
self.show_local_video = show_local_video
self.video_chat = video_chat
self.icon_button_color = icon_button_color
self.pulse_color = pulse_color
self.rtp_params = rtp_params or {}
@@ -760,6 +762,10 @@ class WebRTC(Component):
"waiting": "",
**(button_labels or {}),
}
if video_chat is True:
# ensure modality and mode when video_chat is True
self.modality = "audio-video"
self.mode = "send-receive"
if track_constraints is None and modality == "audio":
track_constraints = {
"echoCancellation": True,


@@ -1,367 +1,127 @@
import os
import asyncio
import base64
from io import BytesIO
import gradio as gr
_docs = {
"WebRTC": {
"description": "Stream audio/video with WebRTC",
"members": {
"__init__": {
"rtc_configuration": {
"type": "dict[str, Any] | None",
"default": "None",
"description": "The configration dictionary to pass to the RTCPeerConnection constructor. If None, the default configuration is used.",
},
"height": {
"type": "int | str | None",
"default": "None",
"description": "The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.",
},
"width": {
"type": "int | str | None",
"default": "None",
"description": "The width of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.",
},
"label": {
"type": "str | None",
"default": "None",
"description": "the label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.",
},
"show_label": {
"type": "bool | None",
"default": "None",
"description": "if True, will display label.",
},
"container": {
"type": "bool",
"default": "True",
"description": "if True, will place the component in a container - providing some extra padding around the border.",
},
"scale": {
"type": "int | None",
"default": "None",
"description": "relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.",
},
"min_width": {
"type": "int",
"default": "160",
"description": "minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.",
},
"interactive": {
"type": "bool | None",
"default": "None",
"description": "if True, will allow users to upload a video; if False, can only be used to display videos. If not provided, this is inferred based on whether the component is used as an input or output.",
},
"visible": {
"type": "bool",
"default": "True",
"description": "if False, component will be hidden.",
},
"elem_id": {
"type": "str | None",
"default": "None",
"description": "an optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.",
},
"elem_classes": {
"type": "list[str] | str | None",
"default": "None",
"description": "an optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.",
},
"render": {
"type": "bool",
"default": "True",
"description": "if False, component will not render be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.",
},
"key": {
"type": "int | str | None",
"default": "None",
"description": "if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.",
},
"mirror_webcam": {
"type": "bool",
"default": "True",
"description": "if True webcam will be mirrored. Default is True.",
},
},
"events": {"tick": {"type": None, "default": None, "description": ""}},
},
"__meta__": {"additional_interfaces": {}, "user_fn_refs": {"WebRTC": []}},
}
}
abs_path = os.path.join(os.path.dirname(__file__), "css.css")
with gr.Blocks(
css_paths=abs_path,
theme=gr.themes.Default(
font_mono=[
gr.themes.GoogleFont("Inconsolata"),
"monospace",
],
),
) as demo:
gr.Markdown(
"""
<h1 style='text-align: center; margin-bottom: 1rem'> Gradio WebRTC ⚡️ </h1>
<div style="display: flex; flex-direction: row; justify-content: center">
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.6%20-%20orange">
<a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
</div>
""",
elem_classes=["md-custom"],
header_links=True,
)
gr.Markdown(
"""
## Installation
```bash
pip install gradio_webrtc
```
## Examples:
1. [Object Detection from Webcam with YOLOv10](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n) 📷
2. [Streaming Object Detection from Video with RT-DETR](https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc) 🎥
3. [Text-to-Speech](https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc) 🗣️
4. [Conversational AI](https://huggingface.co/spaces/freddyaboulton/omni-mini-webrtc) 🤖🗣️
## Usage
The WebRTC component supports the following four use cases:
1. [Streaming video from the user webcam to the server and back](#h-streaming-video-from-the-user-webcam-to-the-server-and-back)
2. [Streaming Video from the server to the client](#h-streaming-video-from-the-server-to-the-client)
3. [Streaming Audio from the server to the client](#h-streaming-audio-from-the-server-to-the-client)
4. [Streaming Audio from the client to the server and back (conversational AI)](#h-conversational-ai)
## Streaming Video from the User Webcam to the Server and Back
```python
import gradio as gr
from gradio_webrtc import WebRTC
def detection(image, conf_threshold=0.3):
... your detection code here ...
with gr.Blocks() as demo:
image = WebRTC(label="Stream", mode="send-receive", modality="video")
conf_threshold = gr.Slider(
label="Confidence Threshold",
minimum=0.0,
maximum=1.0,
step=0.05,
value=0.30,
)
image.stream(
fn=detection,
inputs=[image, conf_threshold],
outputs=[image], time_limit=10
)
if __name__ == "__main__":
demo.launch()
```
* Set the `mode` parameter to `send-receive` and `modality` to "video".
* The `stream` event's `fn` parameter is a function that receives the next frame from the webcam
as a **numpy array** and returns the processed frame also as a **numpy array**.
* Numpy arrays are in (height, width, 3) format where the color channels are in RGB format.
* The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.
* The `time_limit` parameter is the maximum time in seconds the video stream will run. If the time limit is reached, the video stream will stop.
## Streaming Video from the server to the client
```python
import gradio as gr
from gradio_webrtc import WebRTC
import cv2
def generation():
url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
cap = cv2.VideoCapture(url)
iterating = True
while iterating:
iterating, frame = cap.read()
yield frame
with gr.Blocks() as demo:
output_video = WebRTC(label="Video Stream", mode="receive", modality="video")
button = gr.Button("Start", variant="primary")
output_video.stream(
fn=generation, inputs=None, outputs=[output_video],
trigger=button.click
)
if __name__ == "__main__":
demo.launch()
```
* Set the "mode" parameter to "receive" and "modality" to "video".
* The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
* The only output allowed is the WebRTC component.
* The `trigger` parameter is the gradio event that will trigger the WebRTC connection. In this case, the button click event.
## Streaming Audio from the Server to the Client
```python
import gradio as gr
from pydub import AudioSegment
def generation(num_steps):
for _ in range(num_steps):
segment = AudioSegment.from_file("/Users/freddy/sources/gradio/demo/audio_debugger/cantina.wav")
yield (segment.frame_rate, np.array(segment.get_array_of_samples()).reshape(1, -1))
with gr.Blocks() as demo:
audio = WebRTC(label="Stream", mode="receive", modality="audio")
num_steps = gr.Slider(
label="Number of Steps",
minimum=1,
maximum=10,
step=1,
value=5,
)
button = gr.Button("Generate")
audio.stream(
fn=generation, inputs=[num_steps], outputs=[audio],
trigger=button.click
)
```
* Set the "mode" parameter to "receive" and "modality" to "audio".
* The `stream` event's `fn` parameter is a generator function that yields the next audio segment as a tuple of (frame_rate, audio_samples).
* The numpy array should be of shape (1, num_samples).
* The `outputs` parameter should be a list with the WebRTC component as the only element.
## Conversational AI
```python
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC, StreamHandler
from queue import Queue
import time
from gradio_webrtc import (
AsyncAudioVideoStreamHandler,
WebRTC,
VideoEmitType,
AudioEmitType,
)
from PIL import Image
class EchoHandler(StreamHandler):
def __init__(self) -> None:
super().__init__()
self.queue = Queue()
def receive(self, frame: tuple[int, np.ndarray] | np.ndarray) -> None:
self.queue.put(frame)
def emit(self) -> None:
return self.queue.get()
def encode_audio(data: np.ndarray) -> dict:
"""Encode Audio data to send to the server"""
return {"mime_type": "audio/pcm", "data": base64.b64encode(data.tobytes()).decode("UTF-8")}
with gr.Blocks() as demo:
with gr.Column():
with gr.Group():
audio = WebRTC(
label="Stream",
rtc_configuration=None,
mode="send-receive",
modality="audio",
)
def encode_image(data: np.ndarray) -> dict:
with BytesIO() as output_bytes:
pil_image = Image.fromarray(data)
pil_image.save(output_bytes, "JPEG")
bytes_data = output_bytes.getvalue()
base64_str = str(base64.b64encode(bytes_data), "utf-8")
return {"mime_type": "image/jpeg", "data": base64_str}
audio.stream(fn=EchoHandler(), inputs=[audio], outputs=[audio], time_limit=15)
class VideoChatHandler(AsyncAudioVideoStreamHandler):
def __init__(
self, expected_layout="mono", output_sample_rate=24000, output_frame_size=480
) -> None:
super().__init__(
expected_layout,
output_sample_rate,
output_frame_size,
input_sample_rate=24000,
)
self.audio_queue = asyncio.Queue()
self.video_queue = asyncio.Queue()
self.quit = asyncio.Event()
self.session = None
self.last_frame_time = 0
def copy(self) -> "VideoChatHandler":
return VideoChatHandler(
expected_layout=self.expected_layout,
output_sample_rate=self.output_sample_rate,
output_frame_size=self.output_frame_size,
)
async def video_receive(self, frame: np.ndarray):
# if self.session:
# # send image every 1 second
# if time.time() - self.last_frame_time > 1:
# self.last_frame_time = time.time()
# await self.session.send(encode_image(frame))
# if self.latest_args[2] is not None:
# await self.session.send(encode_image(self.latest_args[2]))
# print(frame.shape)
newFrame = np.array(frame)
newFrame[0:, :, 0] = 255 - newFrame[0:, :, 0]
self.video_queue.put_nowait(newFrame)
async def video_emit(self) -> VideoEmitType:
return await self.video_queue.get()
async def receive(self, frame: tuple[int, np.ndarray]) -> None:
frame_size, array = frame
self.audio_queue.put_nowait(array)
async def emit(self) -> AudioEmitType:
if not self.args_set.is_set():
await self.wait_for_args()
array = await self.audio_queue.get()
return (self.output_sample_rate, array)
def shutdown(self) -> None:
self.quit.set()
self.connection = None
self.args_set.clear()
self.quit.clear()
css = """
footer {
display: none !important;
}
"""
with gr.Blocks(css=css) as demo:
webrtc = WebRTC(
label="Video Chat",
modality="audio-video",
mode="send-receive",
video_chat=True,
elem_id="video-source",
track_constraints={
"video": {
"facingMode": "user",
"width": {"ideal": 500},
"height": {"ideal": 500},
"frameRate": {"ideal": 30},
},
"audio": {
"echoCancellation": True,
"noiseSuppression": {"exact": True},
"autoGainControl": {"exact": False},
"sampleRate": {"ideal": 24000},
"sampleSize": {"ideal": 16},
"channelCount": {"exact": 1},
},
}
)
webrtc.stream(
VideoChatHandler(),
inputs=[webrtc],
outputs=[webrtc],
time_limit=150,
concurrency_limit=2,
)
if __name__ == "__main__":
demo.launch()
```
* Instead of passing a function to the `stream` event's `fn` parameter, pass a `StreamHandler` implementation. The `StreamHandler` above simply echoes the audio back to the client.
* The `StreamHandler` class has two methods: `receive` and `emit`. The `receive` method is called when a new frame is received from the client, and the `emit` method returns the next frame to send to the client.
* An audio frame is represented as a tuple of (frame_rate, audio_samples) where `audio_samples` is a numpy array of shape (num_channels, num_samples).
* You can also specify the audio layout ("mono" or "stereo") in the emit method by returning it as the third element of the tuple. If not specified, the default is "mono".
* The `time_limit` parameter is the maximum time in seconds the conversation will run. If the time limit is reached, the audio stream will stop.
* The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return None (see the sketch below).
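As a non-blocking sketch (assuming the same `StreamHandler` base as above), `emit` can simply return `None` whenever nothing is queued:
```python
from queue import Queue, Empty

import numpy as np
from gradio_webrtc import StreamHandler

class NonBlockingEcho(StreamHandler):
    def __init__(self) -> None:
        super().__init__()
        self.queue = Queue()

    def receive(self, frame: tuple[int, np.ndarray]) -> None:
        self.queue.put(frame)

    def emit(self):
        try:
            sample_rate, array = self.queue.get_nowait()
        except Empty:
            return None  # nothing ready yet; do not block
        # the optional third element selects the layout ("mono" or "stereo")
        return (sample_rate, array, "mono")
```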
## Deployment
When deploying in a cloud environment (like Hugging Face Spaces, EC2, etc), you need to set up a TURN server to relay the WebRTC traffic.
The easiest way to do this is to use a service like Twilio.
```python
from twilio.rest import Client
import os
account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")
client = Client(account_sid, auth_token)
token = client.tokens.create()
rtc_configuration = {
"iceServers": token.ice_servers,
"iceTransportPolicy": "relay",
}
with gr.Blocks() as demo:
...
rtc = WebRTC(rtc_configuration=rtc_configuration, ...)
...
```
""",
elem_classes=["md-custom"],
header_links=True,
)
gr.Markdown(
"""
##
""",
elem_classes=["md-custom"],
header_links=True,
)
gr.ParamViewer(value=_docs["WebRTC"]["members"]["__init__"], linkify=[])
demo.load(
None,
js=r"""function() {
const refs = {};
const user_fn_refs = {
WebRTC: [], };
requestAnimationFrame(() => {
Object.entries(user_fn_refs).forEach(([key, refs]) => {
if (refs.length > 0) {
const el = document.querySelector(`.${key}-user-fn`);
if (!el) return;
refs.forEach(ref => {
el.innerHTML = el.innerHTML.replace(
new RegExp("\\b"+ref+"\\b", "g"),
`<a href="#h-${ref.toLowerCase()}">${ref}</a>`
);
})
}
})
Object.entries(refs).forEach(([key, refs]) => {
if (refs.length > 0) {
const el = document.querySelector(`.${key}`);
if (!el) return;
refs.forEach(ref => {
el.innerHTML = el.innerHTML.replace(
new RegExp("\\b"+ref+"\\b", "g"),
`<a href="#h-${ref.toLowerCase()}">${ref}</a>`
);
})
}
})
})
}
""",
)
demo.launch()

367
demo/app_back.py Normal file

@@ -0,0 +1,367 @@
import os
import gradio as gr
_docs = {
"WebRTC": {
"description": "Stream audio/video with WebRTC",
"members": {
"__init__": {
"rtc_configuration": {
"type": "dict[str, Any] | None",
"default": "None",
"description": "The configration dictionary to pass to the RTCPeerConnection constructor. If None, the default configuration is used.",
},
"height": {
"type": "int | str | None",
"default": "None",
"description": "The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.",
},
"width": {
"type": "int | str | None",
"default": "None",
"description": "The width of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.",
},
"label": {
"type": "str | None",
"default": "None",
"description": "the label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.",
},
"show_label": {
"type": "bool | None",
"default": "None",
"description": "if True, will display label.",
},
"container": {
"type": "bool",
"default": "True",
"description": "if True, will place the component in a container - providing some extra padding around the border.",
},
"scale": {
"type": "int | None",
"default": "None",
"description": "relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.",
},
"min_width": {
"type": "int",
"default": "160",
"description": "minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.",
},
"interactive": {
"type": "bool | None",
"default": "None",
"description": "if True, will allow users to upload a video; if False, can only be used to display videos. If not provided, this is inferred based on whether the component is used as an input or output.",
},
"visible": {
"type": "bool",
"default": "True",
"description": "if False, component will be hidden.",
},
"elem_id": {
"type": "str | None",
"default": "None",
"description": "an optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.",
},
"elem_classes": {
"type": "list[str] | str | None",
"default": "None",
"description": "an optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.",
},
"render": {
"type": "bool",
"default": "True",
"description": "if False, component will not render be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.",
},
"key": {
"type": "int | str | None",
"default": "None",
"description": "if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.",
},
"mirror_webcam": {
"type": "bool",
"default": "True",
"description": "if True webcam will be mirrored. Default is True.",
},
},
"events": {"tick": {"type": None, "default": None, "description": ""}},
},
"__meta__": {"additional_interfaces": {}, "user_fn_refs": {"WebRTC": []}},
}
}
abs_path = os.path.join(os.path.dirname(__file__), "css.css")
with gr.Blocks(
css_paths=abs_path,
theme=gr.themes.Default(
font_mono=[
gr.themes.GoogleFont("Inconsolata"),
"monospace",
],
),
) as demo:
gr.Markdown(
"""
<h1 style='text-align: center; margin-bottom: 1rem'> Gradio WebRTC ⚡️ </h1>
<div style="display: flex; flex-direction: row; justify-content: center">
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.6%20-%20orange">
<a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
</div>
""",
elem_classes=["md-custom"],
header_links=True,
)
gr.Markdown(
"""
## Installation
```bash
pip install gradio_webrtc
```
## Examples:
1. [Object Detection from Webcam with YOLOv10](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n) 📷
2. [Streaming Object Detection from Video with RT-DETR](https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc) 🎥
3. [Text-to-Speech](https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc) 🗣️
4. [Conversational AI](https://huggingface.co/spaces/freddyaboulton/omni-mini-webrtc) 🤖🗣️
## Usage
The WebRTC component supports the following four use cases:
1. [Streaming video from the user webcam to the server and back](#h-streaming-video-from-the-user-webcam-to-the-server-and-back)
2. [Streaming Video from the server to the client](#h-streaming-video-from-the-server-to-the-client)
3. [Streaming Audio from the server to the client](#h-streaming-audio-from-the-server-to-the-client)
4. [Streaming Audio from the client to the server and back (conversational AI)](#h-conversational-ai)
## Streaming Video from the User Webcam to the Server and Back
```python
import gradio as gr
from gradio_webrtc import WebRTC
def detection(image, conf_threshold=0.3):
... your detection code here ...
with gr.Blocks() as demo:
image = WebRTC(label="Stream", mode="send-receive", modality="video")
conf_threshold = gr.Slider(
label="Confidence Threshold",
minimum=0.0,
maximum=1.0,
step=0.05,
value=0.30,
)
image.stream(
fn=detection,
inputs=[image, conf_threshold],
outputs=[image], time_limit=10
)
if __name__ == "__main__":
demo.launch()
```
* Set the `mode` parameter to `send-receive` and `modality` to "video".
* The `stream` event's `fn` parameter is a function that receives the next frame from the webcam
as a **numpy array** and returns the processed frame also as a **numpy array**.
* Numpy arrays are in (height, width, 3) format where the color channels are in RGB format.
* The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.
* The `time_limit` parameter is the maximum time in seconds the video stream will run. If the time limit is reached, the video stream will stop.
## Streaming Video from the server to the client
```python
import gradio as gr
from gradio_webrtc import WebRTC
import cv2
def generation():
url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
cap = cv2.VideoCapture(url)
iterating = True
while iterating:
iterating, frame = cap.read()
yield frame
with gr.Blocks() as demo:
output_video = WebRTC(label="Video Stream", mode="receive", modality="video")
button = gr.Button("Start", variant="primary")
output_video.stream(
fn=generation, inputs=None, outputs=[output_video],
trigger=button.click
)
if __name__ == "__main__":
demo.launch()
```
* Set the "mode" parameter to "receive" and "modality" to "video".
* The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
* The only output allowed is the WebRTC component.
* The `trigger` parameter is the gradio event that will trigger the WebRTC connection. In this case, the button click event.
## Streaming Audio from the Server to the Client
```python
import gradio as gr
from pydub import AudioSegment
def generation(num_steps):
for _ in range(num_steps):
segment = AudioSegment.from_file("/Users/freddy/sources/gradio/demo/audio_debugger/cantina.wav")
yield (segment.frame_rate, np.array(segment.get_array_of_samples()).reshape(1, -1))
with gr.Blocks() as demo:
audio = WebRTC(label="Stream", mode="receive", modality="audio")
num_steps = gr.Slider(
label="Number of Steps",
minimum=1,
maximum=10,
step=1,
value=5,
)
button = gr.Button("Generate")
audio.stream(
fn=generation, inputs=[num_steps], outputs=[audio],
trigger=button.click
)
```
* Set the "mode" parameter to "receive" and "modality" to "audio".
* The `stream` event's `fn` parameter is a generator function that yields the next audio segment as a tuple of (frame_rate, audio_samples).
* The numpy array should be of shape (1, num_samples).
* The `outputs` parameter should be a list with the WebRTC component as the only element.
## Conversational AI
```python
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC, StreamHandler
from queue import Queue
import time
class EchoHandler(StreamHandler):
def __init__(self) -> None:
super().__init__()
self.queue = Queue()
def receive(self, frame: tuple[int, np.ndarray] | np.ndarray) -> None:
self.queue.put(frame)
def emit(self) -> None:
return self.queue.get()
with gr.Blocks() as demo:
with gr.Column():
with gr.Group():
audio = WebRTC(
label="Stream",
rtc_configuration=None,
mode="send-receive",
modality="audio",
)
audio.stream(fn=EchoHandler(), inputs=[audio], outputs=[audio], time_limit=15)
if __name__ == "__main__":
demo.launch()
```
* Instead of passing a function to the `stream` event's `fn` parameter, pass a `StreamHandler` implementation. The `StreamHandler` above simply echoes the audio back to the client.
* The `StreamHandler` class has two methods: `receive` and `emit`. The `receive` method is called when a new frame is received from the client, and the `emit` method returns the next frame to send to the client.
* An audio frame is represented as a tuple of (frame_rate, audio_samples) where `audio_samples` is a numpy array of shape (num_channels, num_samples).
* You can also specify the audio layout ("mono" or "stereo") in the emit method by returning it as the third element of the tuple. If not specified, the default is "mono".
* The `time_limit` parameter is the maximum time in seconds the conversation will run. If the time limit is reached, the audio stream will stop.
* The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return None.
## Deployment
When deploying in a cloud environment (like Hugging Face Spaces, EC2, etc), you need to set up a TURN server to relay the WebRTC traffic.
The easiest way to do this is to use a service like Twilio.
```python
from twilio.rest import Client
import os
account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")
client = Client(account_sid, auth_token)
token = client.tokens.create()
rtc_configuration = {
"iceServers": token.ice_servers,
"iceTransportPolicy": "relay",
}
with gr.Blocks() as demo:
...
rtc = WebRTC(rtc_configuration=rtc_configuration, ...)
...
```
""",
elem_classes=["md-custom"],
header_links=True,
)
gr.Markdown(
"""
##
""",
elem_classes=["md-custom"],
header_links=True,
)
gr.ParamViewer(value=_docs["WebRTC"]["members"]["__init__"], linkify=[])
demo.load(
None,
js=r"""function() {
const refs = {};
const user_fn_refs = {
WebRTC: [], };
requestAnimationFrame(() => {
Object.entries(user_fn_refs).forEach(([key, refs]) => {
if (refs.length > 0) {
const el = document.querySelector(`.${key}-user-fn`);
if (!el) return;
refs.forEach(ref => {
el.innerHTML = el.innerHTML.replace(
new RegExp("\\b"+ref+"\\b", "g"),
`<a href="#h-${ref.toLowerCase()}">${ref}</a>`
);
})
}
})
Object.entries(refs).forEach(([key, refs]) => {
if (refs.length > 0) {
const el = document.querySelector(`.${key}`);
if (!el) return;
refs.forEach(ref => {
el.innerHTML = el.innerHTML.replace(
new RegExp("\\b"+ref+"\\b", "g"),
`<a href="#h-${ref.toLowerCase()}">${ref}</a>`
);
})
}
})
})
}
""",
)
demo.launch()


@@ -107,7 +107,7 @@ with gr.Blocks(css=css) as demo:
GeminiHandler(),
inputs=[webrtc],
outputs=[webrtc],
time_limit=150,
time_limit=90,
concurrency_limit=2,
)


@@ -48,10 +48,10 @@ else:
rtc_configuration = None
def detection(image, conf_threshold=0.3):
image = cv2.resize(image, (model.input_width, model.input_height))
new_image = model.detect_objects(image, conf_threshold)
return cv2.resize(new_image, (500, 500))
def detection(frame, conf_threshold=0.3):
print("frame.shape", frame.shape)
frame = cv2.flip(frame, 0)
return AdditionalOutputs(1)
css = """.my-group {max-width: 600px !important; max-height: 600 !important;}
@@ -75,16 +75,13 @@ with gr.Blocks(css=css) as demo:
with gr.Column(elem_classes=["my-column"]):
with gr.Group(elem_classes=["my-group"]):
image = WebRTC(
label="Stream",
rtc_configuration=rtc_configuration,
mode="send-receive",
modality="video",
track_constraints={
"width": {"exact": 800},
"height": {"exact": 600},
"aspectRatio": {"exact": 1.33333},
},
rtp_params={"degradationPreference": "maintain-resolution"},
label="Stream", rtc_configuration=rtc_configuration,
mode="send",
track_constraints={"width": {"exact": 800},
"height": {"exact": 600},
"aspectRatio": {"exact": 1.33333}
},
rtp_params={"degradationPreference": "maintain-resolution"}
)
conf_threshold = gr.Slider(
label="Confidence Threshold",
@@ -96,7 +93,8 @@ with gr.Blocks(css=css) as demo:
number = gr.Number()
image.stream(
fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=90
fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=10
)
image.on_additional_outputs(lambda n: n, outputs=number)
demo.launch()

Binary file not shown.

BIN
docs/image.png (new binary file, 367 KiB)

BIN
docs/image2.png (new binary file, 938 KiB)


@@ -13,7 +13,8 @@ Typically, you want to run an AI model that generates audio when the user has st
and passing it to the `stream` event of the `WebRTC` component.
=== "Code"
``` py title="ReplyonPause"
````py title="ReplyonPause"
import gradio as gr
from gradio_webrtc import WebRTC, ReplyOnPause
@@ -195,7 +196,6 @@ Here is a complete example of using `AsyncStreamHandler` for using the Google Ge
self.input_queue = asyncio.Queue()
self.output_queue = asyncio.Queue()
self.quit = asyncio.Event()
self.connected = asyncio.Event()
def copy(self) -> "GeminiHandler":
return GeminiHandler(
@@ -215,7 +215,6 @@ Here is a complete example of using `AsyncStreamHandler` for using the Google Ge
async with client.aio.live.connect(
model="gemini-2.0-flash-exp", config=config
) as session:
self.connected.set()
async for audio in session.start_stream(
stream=self.stream(), mime_type="audio/pcm"
):
@@ -237,10 +236,7 @@ Here is a complete example of using `AsyncStreamHandler` for using the Google Ge
async def emit(self):
if not self.args_set.is_set():
await self.wait_for_args()
if not self.connected.is_set():
asyncio.create_task(self.generator())
await self.connected.wait()
asyncio.create_task(self.generator())
array = await self.output_queue.get()
return (self.output_sample_rate, array)
@@ -439,7 +435,7 @@ async def video_receive(self, frame: np.ndarray):
async def video_emit(self) -> VideoEmitType:
"""Return video frames to the client"""
return await self.video_queue.get()
```
````
## Additional Outputs


@@ -8,7 +8,7 @@
import StaticVideo from "./shared/StaticVideo.svelte";
import StaticAudio from "./shared/StaticAudio.svelte";
import InteractiveAudio from "./shared/InteractiveAudio.svelte";
import VideoChat from './shared/VideoChat.svelte'
export let elem_id = "";
export let elem_classes: string[] = [];
export let visible = true;
@@ -34,6 +34,7 @@
export let modality: "video" | "audio" | "audio-video" = "video";
export let mode: "send-receive" | "receive" | "send" = "send-receive";
export let show_local_video: string | undefined = undefined;
export let video_chat: boolean = false;
export let rtp_params: RTCRtpParameters = {} as RTCRtpParameters;
export let track_constraints: MediaTrackConstraints = {};
export let icon: string | undefined = undefined;
@@ -53,7 +54,41 @@
$: console.log("value", value);
</script>
<Block
{#if video_chat}
<Block
{visible}
variant={"solid"}
border_mode={dragging ? "focus" : "base"}
padding={false}
{elem_id}
{elem_classes}
{height}
{width}
{container}
{scale}
{min_width}
allow_overflow={false}>
<VideoChat {server} bind:webrtc_id={value}
on:clear={() => gradio.dispatch("clear")}
on:play={() => gradio.dispatch("play")}
on:pause={() => gradio.dispatch("pause")}
on:upload={() => gradio.dispatch("upload")}
on:stop={() => gradio.dispatch("stop")}
on:end={() => gradio.dispatch("end")}
on:start_recording={() => gradio.dispatch("start_recording")}
on:stop_recording={() => gradio.dispatch("stop_recording")}
on:tick={() => gradio.dispatch("tick")}
on:error={({ detail }) => gradio.dispatch("error", detail)}
i18n={gradio.i18n}
stream_handler={(...args) => gradio.client.stream(...args)}
{track_constraints}
{height}
{on_change_cb} {rtc_configuration}
on:tick={() => gradio.dispatch("tick")}
on:error={({ detail }) => gradio.dispatch("error", detail)}></VideoChat>
</Block>
{:else}<Block
{visible}
variant={"solid"}
border_mode={dragging ? "focus" : "base"}
@@ -158,3 +193,4 @@
/>
{/if}
</Block>
{/if}


@@ -1,5 +1,23 @@
// export default {
// plugins: [],
// svelte: {
// preprocess: [],
// },
// build: {
// target: "modules",
// },
// };
import commonjs from "vite-plugin-commonjs";
export default {
plugins: [],
plugins: [
commonjs({
filter(id) {
return id.includes("node_modules/deepmerge");
},
}),
],
svelte: {
preprocess: [],
},


@@ -25,7 +25,9 @@
},
"devDependencies": {
"@gradio/preview": "0.12.0",
"prettier": "3.3.3"
"less": "^4.2.2",
"prettier": "3.3.3",
"vite-plugin-commonjs": "^0.10.4"
},
"peerDependencies": {
"svelte": "^4.0.0"
@@ -982,6 +984,44 @@
"node": ">=18"
}
},
"node_modules/@nodelib/fs.scandir": {
"version": "2.1.5",
"resolved": "https://registry.anpm.alibaba-inc.com/@nodelib/fs.scandir/-/fs.scandir-2.1.5.tgz",
"integrity": "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g==",
"dev": true,
"license": "MIT",
"dependencies": {
"@nodelib/fs.stat": "2.0.5",
"run-parallel": "^1.1.9"
},
"engines": {
"node": ">= 8"
}
},
"node_modules/@nodelib/fs.stat": {
"version": "2.0.5",
"resolved": "https://registry.anpm.alibaba-inc.com/@nodelib/fs.stat/-/fs.stat-2.0.5.tgz",
"integrity": "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 8"
}
},
"node_modules/@nodelib/fs.walk": {
"version": "1.2.8",
"resolved": "https://registry.anpm.alibaba-inc.com/@nodelib/fs.walk/-/fs.walk-1.2.8.tgz",
"integrity": "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@nodelib/fs.scandir": "2.1.5",
"fastq": "^1.6.0"
},
"engines": {
"node": ">= 8"
}
},
"node_modules/@open-draft/deferred-promise": {
"version": "2.2.0",
"resolved": "https://registry.npmjs.org/@open-draft/deferred-promise/-/deferred-promise-2.2.0.tgz",
@@ -2071,6 +2111,19 @@
"node": ">= 0.6"
}
},
"node_modules/copy-anything": {
"version": "2.0.6",
"resolved": "https://registry.anpm.alibaba-inc.com/copy-anything/-/copy-anything-2.0.6.tgz",
"integrity": "sha512-1j20GZTsvKNkc4BY3NpMOM8tt///wY3FpIzozTOFO2ffuZcV61nojHXVKIy3WM+7ADCy5FVhdZYHYDdgTU0yJw==",
"dev": true,
"license": "MIT",
"dependencies": {
"is-what": "^3.14.1"
},
"funding": {
"url": "https://github.com/sponsors/mesqueeb"
}
},
"node_modules/cropperjs": {
"version": "1.6.2",
"resolved": "https://registry.npmjs.org/cropperjs/-/cropperjs-1.6.2.tgz",
@@ -2357,6 +2410,20 @@
"url": "https://github.com/fb55/entities?sponsor=1"
}
},
"node_modules/errno": {
"version": "0.1.8",
"resolved": "https://registry.anpm.alibaba-inc.com/errno/-/errno-0.1.8.tgz",
"integrity": "sha512-dJ6oBr5SQ1VSd9qkk7ByRgb/1SH4JZjCHSW/mr63/QcXO9zLVxvJ6Oy13nio03rxpSnVDDjFor75SjVeZWPW/A==",
"dev": true,
"license": "MIT",
"optional": true,
"dependencies": {
"prr": "~1.0.1"
},
"bin": {
"errno": "cli.js"
}
},
"node_modules/es-define-property": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.0.tgz",
@@ -2378,6 +2445,13 @@
"node": ">= 0.4"
}
},
"node_modules/es-module-lexer": {
"version": "1.6.0",
"resolved": "https://registry.anpm.alibaba-inc.com/es-module-lexer/-/es-module-lexer-1.6.0.tgz",
"integrity": "sha512-qqnD1yMU6tk/jnaMosogGySTZP8YtUgAffA9nMN+E/rjxcfRQ6IEk7IiozUjgxKoFHBGjTLnrHB/YC45r/59EQ==",
"dev": true,
"license": "MIT"
},
"node_modules/es5-ext": {
"version": "0.10.64",
"resolved": "https://registry.npmjs.org/es5-ext/-/es5-ext-0.10.64.tgz",
@@ -2851,6 +2925,33 @@
"type": "^2.7.2"
}
},
"node_modules/fast-glob": {
"version": "3.3.3",
"resolved": "https://registry.anpm.alibaba-inc.com/fast-glob/-/fast-glob-3.3.3.tgz",
"integrity": "sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg==",
"dev": true,
"license": "MIT",
"dependencies": {
"@nodelib/fs.stat": "^2.0.2",
"@nodelib/fs.walk": "^1.2.3",
"glob-parent": "^5.1.2",
"merge2": "^1.3.0",
"micromatch": "^4.0.8"
},
"engines": {
"node": ">=8.6.0"
}
},
"node_modules/fastq": {
"version": "1.18.0",
"resolved": "https://registry.anpm.alibaba-inc.com/fastq/-/fastq-1.18.0.tgz",
"integrity": "sha512-QKHXPW0hD8g4UET03SdOdunzSouc9N4AuHdsX8XNcTsuz+yYFILVNIX4l9yHABMhiEI9Db0JTTIpu0wB+Y1QQw==",
"dev": true,
"license": "ISC",
"dependencies": {
"reusify": "^1.0.4"
}
},
"node_modules/fetch-event-stream": {
"version": "0.1.5",
"resolved": "https://registry.npmjs.org/fetch-event-stream/-/fetch-event-stream-0.1.5.tgz",
@@ -2979,6 +3080,19 @@
"url": "https://github.com/sponsors/isaacs"
}
},
"node_modules/glob-parent": {
"version": "5.1.2",
"resolved": "https://registry.anpm.alibaba-inc.com/glob-parent/-/glob-parent-5.1.2.tgz",
"integrity": "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow==",
"dev": true,
"license": "ISC",
"dependencies": {
"is-glob": "^4.0.1"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/globalyzer": {
"version": "0.1.0",
"resolved": "https://registry.npmjs.org/globalyzer/-/globalyzer-0.1.0.tgz",
@@ -3160,6 +3274,20 @@
"node": ">=0.10.0"
}
},
"node_modules/image-size": {
"version": "0.5.5",
"resolved": "https://registry.anpm.alibaba-inc.com/image-size/-/image-size-0.5.5.tgz",
"integrity": "sha512-6TDAlDPZxUFCv+fuOkIoXT/V/f3Qbq8e37p+YOiYrUv3v9cc3/6x78VdfPgFVaB9dZYeLUfKgHRebpkm/oP2VQ==",
"dev": true,
"license": "MIT",
"optional": true,
"bin": {
"image-size": "bin/image-size.js"
},
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/immutable": {
"version": "4.3.7",
"resolved": "https://registry.npmjs.org/immutable/-/immutable-4.3.7.tgz",
@@ -3304,6 +3432,13 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/is-what": {
"version": "3.14.1",
"resolved": "https://registry.anpm.alibaba-inc.com/is-what/-/is-what-3.14.1.tgz",
"integrity": "sha512-sNxgpk9793nzSs7bA6JQJGeIuRBQhAaNGG77kzYQgMkrID+lS6SlK07K5LaptscDlSaIgH+GPFzf+d75FVxozA==",
"dev": true,
"license": "MIT"
},
"node_modules/isexe": {
"version": "3.1.1",
"resolved": "https://registry.npmjs.org/isexe/-/isexe-3.1.1.tgz",
@@ -3425,6 +3560,44 @@
"resolved": "https://registry.npmjs.org/lazy-brush/-/lazy-brush-1.0.1.tgz",
"integrity": "sha512-xT/iSClTVi7vLoF8dCWTBhCuOWqsLXCMPa6ucVmVAk6hyNCM5JeS1NLhXqIrJktUg+caEYKlqSOUU4u3cpXzKg=="
},
"node_modules/less": {
"version": "4.2.2",
"resolved": "https://registry.anpm.alibaba-inc.com/less/-/less-4.2.2.tgz",
"integrity": "sha512-tkuLHQlvWUTeQ3doAqnHbNn8T6WX1KA8yvbKG9x4VtKtIjHsVKQZCH11zRgAfbDAXC2UNIg/K9BYAAcEzUIrNg==",
"dev": true,
"license": "Apache-2.0",
"dependencies": {
"copy-anything": "^2.0.1",
"parse-node-version": "^1.0.1",
"tslib": "^2.3.0"
},
"bin": {
"lessc": "bin/lessc"
},
"engines": {
"node": ">=6"
},
"optionalDependencies": {
"errno": "^0.1.1",
"graceful-fs": "^4.1.2",
"image-size": "~0.5.0",
"make-dir": "^2.1.0",
"mime": "^1.4.1",
"needle": "^3.1.0",
"source-map": "~0.6.0"
}
},
"node_modules/less/node_modules/source-map": {
"version": "0.6.1",
"resolved": "https://registry.anpm.alibaba-inc.com/source-map/-/source-map-0.6.1.tgz",
"integrity": "sha512-UjgapumWlbMhkBgzT7Ykc5YXUT46F0iKu8SGXq0bcwP5dz/h0Plj6enJqjz1Zbq2l5WaqYnrVbwWOWMyF3F47g==",
"dev": true,
"license": "BSD-3-Clause",
"optional": true,
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/lightningcss": {
"version": "1.27.0",
"resolved": "https://registry.npmjs.org/lightningcss/-/lightningcss-1.27.0.tgz",
@@ -3686,6 +3859,32 @@
"@jridgewell/sourcemap-codec": "^1.5.0"
}
},
"node_modules/make-dir": {
"version": "2.1.0",
"resolved": "https://registry.anpm.alibaba-inc.com/make-dir/-/make-dir-2.1.0.tgz",
"integrity": "sha512-LS9X+dc8KLxXCb8dni79fLIIUA5VyZoyjSMCwTluaXA0o27cCK0bhXkpgw+sTXVpPy/lSO57ilRixqk0vDmtRA==",
"dev": true,
"license": "MIT",
"optional": true,
"dependencies": {
"pify": "^4.0.1",
"semver": "^5.6.0"
},
"engines": {
"node": ">=6"
}
},
"node_modules/make-dir/node_modules/semver": {
"version": "5.7.2",
"resolved": "https://registry.anpm.alibaba-inc.com/semver/-/semver-5.7.2.tgz",
"integrity": "sha512-cBznnQ9KjJqU67B52RMC65CMarK2600WFnbkcaiwWq3xy/5haFJlshgnpjovMVJ+Hff49d8GEn0b87C5pDQ10g==",
"dev": true,
"license": "ISC",
"optional": true,
"bin": {
"semver": "bin/semver"
}
},
"node_modules/marked": {
"version": "12.0.2",
"resolved": "https://registry.npmjs.org/marked/-/marked-12.0.2.tgz",
@@ -3739,6 +3938,16 @@
"node": ">=0.12"
}
},
"node_modules/merge2": {
"version": "1.4.1",
"resolved": "https://registry.anpm.alibaba-inc.com/merge2/-/merge2-1.4.1.tgz",
"integrity": "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 8"
}
},
"node_modules/micromatch": {
"version": "4.0.8",
"resolved": "https://registry.npmjs.org/micromatch/-/micromatch-4.0.8.tgz",
@@ -3752,6 +3961,20 @@
"node": ">=8.6"
}
},
"node_modules/mime": {
"version": "1.6.0",
"resolved": "https://registry.anpm.alibaba-inc.com/mime/-/mime-1.6.0.tgz",
"integrity": "sha512-x0Vn8spI+wuJ1O6S7gnbaQg8Pxh4NNHb7KSINmEWKiPE4RKOplvijn+NkmYmmRgP68mc70j2EbeTFRsrswaQeg==",
"dev": true,
"license": "MIT",
"optional": true,
"bin": {
"mime": "cli.js"
},
"engines": {
"node": ">=4"
}
},
"node_modules/mime-db": {
"version": "1.52.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
@@ -3921,6 +4144,24 @@
"node": "^10 || ^12 || ^13.7 || ^14 || >=15.0.1"
}
},
"node_modules/needle": {
"version": "3.3.1",
"resolved": "https://registry.anpm.alibaba-inc.com/needle/-/needle-3.3.1.tgz",
"integrity": "sha512-6k0YULvhpw+RoLNiQCRKOl09Rv1dPLr8hHnVjHqdolKwDrdNyk+Hmrthi4lIGPPz3r39dLx0hsF5s40sZ3Us4Q==",
"dev": true,
"license": "MIT",
"optional": true,
"dependencies": {
"iconv-lite": "^0.6.3",
"sax": "^1.2.4"
},
"bin": {
"needle": "bin/needle"
},
"engines": {
"node": ">= 4.4.x"
}
},
"node_modules/next-tick": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/next-tick/-/next-tick-1.1.0.tgz",
@@ -3976,6 +4217,16 @@
"integrity": "sha512-UEZIS3/by4OC8vL3P2dTXRETpebLI2NiI5vIrjaD/5UtrkFX/tNbwjTSRAGC/+7CAo2pIcBaRgWmcBBHcsaCIw==",
"dev": true
},
"node_modules/parse-node-version": {
"version": "1.0.1",
"resolved": "https://registry.anpm.alibaba-inc.com/parse-node-version/-/parse-node-version-1.0.1.tgz",
"integrity": "sha512-3YHlOa/JgH6Mnpr05jP9eDG254US9ek25LyIxZlDItp2iJtwyaXQb57lBYLdT3MowkUFYEV2XXNAYIPlESvJlA==",
"dev": true,
"license": "MIT",
"engines": {
"node": ">= 0.10"
}
},
"node_modules/parse-srcset": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/parse-srcset/-/parse-srcset-1.0.2.tgz",
@@ -4077,6 +4328,17 @@
"url": "https://github.com/sponsors/jonschlinkert"
}
},
"node_modules/pify": {
"version": "4.0.1",
"resolved": "https://registry.anpm.alibaba-inc.com/pify/-/pify-4.0.1.tgz",
"integrity": "sha512-uB80kBFb/tfd68bVleG9T5GGsGPjJrLAUpR5PZIrhBnIaRTQRjqdJSsIKkOP6OAIFbj7GOrcudc5pNjZ+geV2g==",
"dev": true,
"license": "MIT",
"optional": true,
"engines": {
"node": ">=6"
}
},
"node_modules/pirates": {
"version": "4.0.6",
"resolved": "https://registry.npmjs.org/pirates/-/pirates-4.0.6.tgz",
@@ -4145,6 +4407,14 @@
"asap": "~2.0.3"
}
},
"node_modules/prr": {
"version": "1.0.1",
"resolved": "https://registry.anpm.alibaba-inc.com/prr/-/prr-1.0.1.tgz",
"integrity": "sha512-yPw4Sng1gWghHQWj0B3ZggWUm4qVbPwPFcRG8KyxiU7J2OHFSoEHKS+EZ3fv5l1t9CyCiop6l/ZYeWbrgoQejw==",
"dev": true,
"license": "MIT",
"optional": true
},
"node_modules/psl": {
"version": "1.9.0",
"resolved": "https://registry.npmjs.org/psl/-/psl-1.9.0.tgz",
@@ -4298,6 +4568,27 @@
"resolved": "https://registry.npmjs.org/querystringify/-/querystringify-2.2.0.tgz",
"integrity": "sha512-FIqgj2EUvTa7R50u0rGsyTftzjYmv/a3hO345bZNrqabNqjtgiDMgmo4mkUjd+nzU5oF3dClKqFIPUKybUyqoQ=="
},
"node_modules/queue-microtask": {
"version": "1.2.3",
"resolved": "https://registry.anpm.alibaba-inc.com/queue-microtask/-/queue-microtask-1.2.3.tgz",
"integrity": "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A==",
"dev": true,
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
],
"license": "MIT"
},
"node_modules/readdirp": {
"version": "4.0.2",
"resolved": "https://registry.npmjs.org/readdirp/-/readdirp-4.0.2.tgz",
@@ -4346,6 +4637,17 @@
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/reusify": {
"version": "1.0.4",
"resolved": "https://registry.anpm.alibaba-inc.com/reusify/-/reusify-1.0.4.tgz",
"integrity": "sha512-U9nH88a3fc/ekCF1l0/UP1IosiuIjyTh7hBvXVMHYgVcfGvt897Xguj2UOLDeI5BG2m7/uwyaLVT6fbtCwTyzw==",
"dev": true,
"license": "MIT",
"engines": {
"iojs": ">=1.0.0",
"node": ">=0.10.0"
}
},
"node_modules/rimraf": {
"version": "2.7.1",
"resolved": "https://registry.npmjs.org/rimraf/-/rimraf-2.7.1.tgz",
@@ -4399,6 +4701,30 @@
"resolved": "https://registry.npmjs.org/rrweb-cssom/-/rrweb-cssom-0.7.1.tgz",
"integrity": "sha512-TrEMa7JGdVm0UThDJSx7ddw5nVm3UJS9o9CCIZ72B1vSyEZoziDqBYP3XIoi/12lKrJR8rE3jeFHMok2F/Mnsg=="
},
"node_modules/run-parallel": {
"version": "1.2.0",
"resolved": "https://registry.anpm.alibaba-inc.com/run-parallel/-/run-parallel-1.2.0.tgz",
"integrity": "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA==",
"dev": true,
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
],
"license": "MIT",
"dependencies": {
"queue-microtask": "^1.2.2"
}
},
"node_modules/sade": {
"version": "1.8.1",
"resolved": "https://registry.npmjs.org/sade/-/sade-1.8.1.tgz",
@@ -4463,8 +4789,7 @@
"resolved": "https://registry.npmjs.org/sax/-/sax-1.2.4.tgz",
"integrity": "sha512-NqVDv9TpANUjFm0N8uM5GxL36UgKi9/atZw+x7YFnQ8ckwFGKrl4xX4yWtrey3UJm5nP1kUbnYgLopqWNSRhWw==",
"dev": true,
"optional": true,
"peer": true
"optional": true
},
"node_modules/saxes": {
"version": "6.0.0",
@@ -5267,6 +5592,57 @@
}
}
},
"node_modules/vite-plugin-commonjs": {
"version": "0.10.4",
"resolved": "https://registry.anpm.alibaba-inc.com/vite-plugin-commonjs/-/vite-plugin-commonjs-0.10.4.tgz",
"integrity": "sha512-eWQuvQKCcx0QYB5e5xfxBNjQKyrjEWZIR9UOkOV6JAgxVhtbZvCOF+FNC2ZijBJ3U3Px04ZMMyyMyFBVWIJ5+g==",
"dev": true,
"license": "MIT",
"dependencies": {
"acorn": "^8.12.1",
"magic-string": "^0.30.11",
"vite-plugin-dynamic-import": "^1.6.0"
}
},
"node_modules/vite-plugin-commonjs/node_modules/acorn": {
"version": "8.14.0",
"resolved": "https://registry.anpm.alibaba-inc.com/acorn/-/acorn-8.14.0.tgz",
"integrity": "sha512-cl669nCJTZBsL97OF4kUQm5g5hC2uihk0NxY3WENAC0TYdILVkAyHymAntgxGkl7K+t0cXIrH5siy5S4XkFycA==",
"dev": true,
"license": "MIT",
"bin": {
"acorn": "bin/acorn"
},
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/vite-plugin-dynamic-import": {
"version": "1.6.0",
"resolved": "https://registry.anpm.alibaba-inc.com/vite-plugin-dynamic-import/-/vite-plugin-dynamic-import-1.6.0.tgz",
"integrity": "sha512-TM0sz70wfzTIo9YCxVFwS8OA9lNREsh+0vMHGSkWDTZ7bgd1Yjs5RV8EgB634l/91IsXJReg0xtmuQqP0mf+rg==",
"dev": true,
"license": "MIT",
"dependencies": {
"acorn": "^8.12.1",
"es-module-lexer": "^1.5.4",
"fast-glob": "^3.3.2",
"magic-string": "^0.30.11"
}
},
"node_modules/vite-plugin-dynamic-import/node_modules/acorn": {
"version": "8.14.0",
"resolved": "https://registry.anpm.alibaba-inc.com/acorn/-/acorn-8.14.0.tgz",
"integrity": "sha512-cl669nCJTZBsL97OF4kUQm5g5hC2uihk0NxY3WENAC0TYdILVkAyHymAntgxGkl7K+t0cXIrH5siy5S4XkFycA==",
"dev": true,
"license": "MIT",
"bin": {
"acorn": "bin/acorn"
},
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/vite/node_modules/@esbuild/aix-ppc64": {
"version": "0.21.5",
"resolved": "https://registry.npmjs.org/@esbuild/aix-ppc64/-/aix-ppc64-0.21.5.tgz",

View File

@@ -23,7 +23,9 @@
},
"devDependencies": {
"@gradio/preview": "0.12.0",
"prettier": "3.3.3"
"less": "^4.2.2",
"prettier": "3.3.3",
"vite-plugin-commonjs": "^0.10.4"
},
"exports": {
"./package.json": "./package.json",

View File

@@ -10,6 +10,7 @@
export let icon: string | undefined | ComponentType = undefined;
export let icon_button_color: string = "var(--color-accent)";
export let pulse_color: string = "var(--color-accent)";
export let wave_color: string = "var(--color-accent)";
let audioContext: AudioContext;
let analyser: AnalyserNode;
@@ -19,7 +20,7 @@
$: containerWidth = icon
? "128px"
: `calc((var(--boxSize) + var(--gutter)) * ${numBars})`;
: `calc((var(--boxSize) + var(--gutter)) * ${numBars} + 80px)`;
$: if(stream_state === "open") setupAudioContext();
@@ -52,12 +53,23 @@
// Update bars
const bars = document.querySelectorAll('.gradio-webrtc-waveContainer .gradio-webrtc-box');
for (let i = 0; i < bars.length; i++) {
const barHeight = (dataArray[i] / 255) * 2;
const barHeight = (dataArray[transformIndex(i)] / 255);
bars[i].style.transform = `scaleY(${Math.max(0.1, barHeight)})`;
bars[i].style.background = wave_color;
bars[i].style.opacity = 0.5;
}
animationId = requestAnimationFrame(updateVisualization);
}
// Fold the wave-bar heights from both sides in toward the center
function transformIndex(index: number): number {
const mapping = [0, 2, 4, 6, 8, 10, 12, 14, 15, 13, 11, 9, 7, 5, 3, 1];
if (index < 0 || index >= mapping.length) {
throw new Error("Index must be between 0 and 15");
}
return mapping[index];
}
</script>
<div class="gradio-webrtc-waveContainer">
@@ -78,7 +90,11 @@
</div>
{:else}
<div class="gradio-webrtc-boxContainer" style:width={containerWidth}>
{#each Array(numBars) as _}
{#each Array(numBars/2) as _}
<div class="gradio-webrtc-box"></div>
{/each}
<div class="split-container"></div>
{#each Array(numBars/2) as _}
<div class="gradio-webrtc-box"></div>
{/each}
</div>
@@ -99,10 +115,13 @@
display: flex;
justify-content: space-between;
height: 64px;
--boxSize: 8px;
--boxSize: 4px;
--gutter: 4px;
}
}
.split-container {
width: 80px;
}
.gradio-webrtc-box {
height: 100%;
width: var(--boxSize);
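
The new `transformIndex` above hardcodes the mapping for 16 bars. As a sketch (not part of this commit), the same center-folding order can be computed for any even bar count; `transformIndexGeneric` and `barCount` are hypothetical names used only for illustration:

```typescript
// Sketch: generalizes the hardcoded table [0, 2, 4, ..., 14, 15, 13, ..., 1].
// Even data indices fill the displayed bars left-to-right, odd indices fill them
// right-to-left, so bins 0 and 1 sit at the outer edges and the bar heights
// taper toward the center gap.
function transformIndexGeneric(index: number, barCount: number): number {
  if (index < 0 || index >= barCount) {
    throw new Error(`Index must be between 0 and ${barCount - 1}`);
  }
  return index < barCount / 2
    ? index * 2                       // left half:  0, 2, 4, ...
    : (barCount - 1 - index) * 2 + 1; // right half: ..., 5, 3, 1
}

// For barCount = 16 this reproduces the table used in the component:
console.log(Array.from({ length: 16 }, (_, i) => transformIndexGeneric(i, 16)));
```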

View File

@@ -7,6 +7,7 @@
export let icon: string | ComponentType = undefined;
export let icon_button_color: string = "var(--color-accent)";
export let pulse_color: string = "var(--color-accent)";
let audioContext: AudioContext;
let analyser: AnalyserNode;

File diff suppressed because one or more lines are too long

View File

@@ -193,7 +193,6 @@
case "connected":
stream_state = "open";
_time_limit = time_limit;
dispatch("tick");
break;
case "disconnected":
stream_state = "closed";

Binary file not shown (new image added; 155 KiB).

View File

@@ -19,18 +19,30 @@ export async function get_video_stream(
include_audio: boolean | { deviceId: { exact: string } },
video_source: HTMLVideoElement,
device_id?: string,
track_constraints?: MediaTrackConstraints
track_constraints?:
| MediaTrackConstraints
| { video: MediaTrackConstraints; audio: MediaTrackConstraints }
): Promise<MediaStream> {
const fallback_constraints = track_constraints || {
width: { ideal: 500 },
height: { ideal: 500 },
};
console.log(track_constraints);
const video_fallback_constraints = (track_constraints as any)?.video ||
track_constraints || {
width: { ideal: 500 },
height: { ideal: 500 },
};
const audio_fallback_constraints = (track_constraints as any)?.audio ||
track_constraints || {
echoCancellation: true,
noiseSuppression: true,
autoGainControl: true,
};
const constraints = {
video: device_id
? { deviceId: { exact: device_id }, ...fallback_constraints }
: fallback_constraints,
audio: include_audio,
? { deviceId: { exact: device_id }, ...video_fallback_constraints }
: video_fallback_constraints,
audio:
typeof include_audio === "object"
? { ...include_audio, ...audio_fallback_constraints }
: include_audio,
};
return navigator.mediaDevices
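
With the updated `get_video_stream` signature, `track_constraints` may still be a single `MediaTrackConstraints` object (applied to video, as before) or an object carrying separate video and audio constraints. Below is a hedged sketch of what a caller might now pass; `split_constraints`, `video_el`, and `mic_device_id` are illustrative placeholders, not names from this repository:

```typescript
// Illustrative only: the split shape accepted by the new track_constraints parameter.
const split_constraints: {
  video: MediaTrackConstraints;
  audio: MediaTrackConstraints;
} = {
  video: { width: { ideal: 1280 }, height: { ideal: 720 }, frameRate: { ideal: 30 } },
  audio: { echoCancellation: true, noiseSuppression: true, autoGainControl: true },
};

// Hypothetical call (placeholders, not actual project code):
// const stream = await get_video_stream(
//   { deviceId: { exact: mic_device_id } }, // include_audio, selecting a microphone
//   video_el,                               // video_source element
//   undefined,                              // device_id for the camera
//   split_constraints                       // track_constraints, now split per kind
// );
```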

View File

@@ -16,7 +16,7 @@ requires-python = ">=3.10"
authors = [{ name = "Freddy Boulton", email = "YOUREMAIL@domain.com" }]
keywords = ["gradio-custom-component", "gradio-template-Video", "streaming", "webrtc", "realtime"]
# Add dependencies here
dependencies = ["gradio>=4.0,<6.0", "aiortc"]
dependencies = ["gradio>=4.0,<6.0", "aiortc", "numpy<2.2,>=1.0"]
classifiers = [
'Development Status :: 3 - Alpha',
'Operating System :: OS Independent',

70
update_version.py Normal file
View File

@@ -0,0 +1,70 @@
import subprocess
import os
import glob
import shutil
import argparse
def check_node_installed():
    try:
        # Try running the `node -v` command
        subprocess.check_output(['node', '-v'])
        print("Node.js is installed.")
        return True
    except subprocess.CalledProcessError:
        print("Node.js is not installed or not found in PATH.")
        return False
    except FileNotFoundError:
        print("Node.js is not installed or not found in PATH.")
        return False


def install_wheel(whl_file, force_reinstall=True):
    if force_reinstall:
        subprocess.run(['pip', 'install', '--force-reinstall', whl_file])
        print(f"Force reinstallation of {whl_file} successful.")
    else:
        subprocess.run(['pip', 'install', whl_file])
        print(f"Installation of {whl_file} successful.")


def main():
    # Get the directory containing this script
    script_dir = os.path.dirname(os.path.abspath(__file__))
    # Change into that directory
    os.chdir(script_dir)
    # Look for .whl files in the dist directory
    whl_files = glob.glob('dist/*.whl')
    # If no wheel exists yet, rebuild it with gradio cc install / gradio cc build (requires Node.js)
    if whl_files is None or len(whl_files) == 0:
        if not check_node_installed():
            print("Need to build the .whl, but Node.js is not installed.")
            return
        # Remove the installed gradio_webrtc package
        subprocess.run(['pip', 'uninstall', '-y', 'gradio_webrtc'])
        # Delete any existing dist directory
        if os.path.exists('dist'):
            shutil.rmtree('dist')
        # Run gradio cc install
        subprocess.run(['gradio', 'cc', 'install'])
        # Run gradio cc build --no-generate-docs
        subprocess.run(['gradio', 'cc', 'build', '--no-generate-docs'])
        # Look for the freshly built .whl file in dist
        whl_files = glob.glob('dist/*.whl')

    if whl_files:
        whl_file = whl_files[0]
        # Install the .whl file
        install_wheel(whl_file, True)
    else:
        print("No .whl file found.")


if __name__ == "__main__":
    # Parse command-line arguments
    parser = argparse.ArgumentParser(description="Update and install gradio-webrtc package.")
    args = parser.parse_args()
    main()