mirror of https://github.com/HumanAIGC-Engineering/gradio-webrtc.git (synced 2026-02-04 09:29:23 +08:00)

Merge branch 'freddyaboulton:main' into audio-track-constraints
26	.github/workflows/docs.yml (vendored)
@@ -3,8 +3,16 @@ on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

+permissions:
+  contents: write
+  pull-requests: write
+  deployments: write
+  pages: write

jobs:
  deploy:
    runs-on: ubuntu-latest
@@ -24,5 +32,19 @@ jobs:
          path: .cache
          restore-keys: |
            mkdocs-material-
-      - run: pip install mkdocs-material
-      - run: mkdocs gh-deploy --force
+      - run: pip install mkdocs-material
+      - name: Build docs
+        run: mkdocs build
+
+      - name: Deploy to GH Pages (main)
+        if: github.event_name == 'push'
+        run: mkdocs gh-deploy --force
+
+      - name: Deploy PR Preview
+        if: github.event_name == 'pull_request'
+        uses: rossjrw/pr-preview-action@v1
+        with:
+          source-dir: ./site
+          preview-branch: gh-pages
+          umbrella-dir: pr-preview
+          action: auto
5	.gitignore (vendored)
@@ -9,9 +9,12 @@ __tmp/*
.mypycache
.ruff_cache
node_modules
backend/**/templates/
demo/MobileNetSSD_deploy.caffemodel
demo/MobileNetSSD_deploy.prototxt.txt
demo/scratch
.gradio
.vscode
.DS_Store
test/
.venv*
.env
612	README.md
@@ -1,57 +1,130 @@
-<h1 style='text-align: center; margin-bottom: 1rem'> Gradio WebRTC ⚡️ </h1>
+<div style='text-align: center; margin-bottom: 1rem; display: flex; justify-content: center; align-items: center;'>
+    <h1 style='color: white; margin: 0;'>FastRTC</h1>
+    <img src='https://huggingface.co/datasets/freddyaboulton/bucket/resolve/main/fastrtc_logo_small.png'
+         alt="FastRTC Logo"
+         style="margin-right: 10px;">
+</div>

<div style="display: flex; flex-direction: row; justify-content: center">
-    <img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/gradio_webrtc">
-    <a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" style="display: block; padding-right: 5px; height: 20px;" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
-    <a href="https://freddyaboulton.github.io/gradio-webrtc/" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/Docs-ffcf40"></a>
+    <img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/fastrtc">
+    <a href="https://github.com/freddyaboulton/fastrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
</div>

<h3 style='text-align: center'>
-    Stream video and audio in real time with Gradio using WebRTC.
+    The Real-Time Communication Library for Python.
</h3>

+Turn any python function into a real-time audio and video stream over WebRTC or WebSockets.

## Installation

```bash
-pip install gradio_webrtc
+pip install fastrtc
```

-to use built-in pause detection (see [ReplyOnPause](https://freddyaboulton.github.io/gradio-webrtc//user-guide/#reply-on-pause)), install the `vad` extra:
+To use built-in pause detection (see [ReplyOnPause](https://fastrtc.org/userguide/audio/#reply-on-pause)) and text-to-speech (see [Text To Speech](https://fastrtc.org/userguide/audio/#text-to-speech)), install the `vad` and `tts` extras:

```bash
-pip install gradio_webrtc[vad]
+pip install "fastrtc[vad, tts]"
```

-For stop word detection (see [ReplyOnStopWords](https://freddyaboulton.github.io/gradio-webrtc//user-guide/#reply-on-stopwords)), install the `stopword` extra:
-
-```bash
-pip install gradio_webrtc[stopword]
-```
+## Key Features
+
+- 🗣️ Automatic Voice Detection and Turn Taking built in; you only need to worry about the logic for responding to the user.
+- 💻 Automatic UI - Use the `.ui.launch()` method to launch the WebRTC-enabled built-in Gradio UI.
+- 🔌 Automatic WebRTC Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend!
+- ⚡️ Websocket Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a websocket endpoint for your own frontend!
+- 📞 Automatic Telephone Support - Use the `fastphone()` method of the stream to launch the application and get a free temporary phone number!
+- 🤖 Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app, so you can extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example of how to serve a custom JS frontend.

## Docs

-https://freddyaboulton.github.io/gradio-webrtc/
+[https://fastrtc.org](https://fastrtc.org)

## Examples

+See the [Cookbook](https://fastrtc.org/cookbook/) for examples of how to use the library.
<table>
<tr>
<td width="50%">
-<h3>🗣️ Audio Input/Output with mini-omni2</h3>
-<p>Build a GPT-4o like experience with mini-omni2, an audio-native LLM.</p>
-<video width="100%" src="https://github.com/user-attachments/assets/58c06523-fc38-4f5f-a4ba-a02a28e7fa9e" controls></video>
+<h3>🗣️👀 Gemini Audio Video Chat</h3>
+<p>Stream BOTH your webcam video and audio feeds to Google Gemini. You can also upload images to augment your conversation!</p>
+<video width="100%" src="https://github.com/user-attachments/assets/9636dc97-4fee-46bb-abb8-b92e69c08c71" controls></video>
<p>
-<a href="https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc">Demo</a> |
-<a href="https://huggingface.co/spaces/freddyaboulton/mini-omni2-webrtc/blob/main/app.py">Code</a>
+<a href="https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat">Demo</a> |
+<a href="https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Google Gemini Real Time Voice API</h3>
<p>Talk to Gemini in real time using Google's voice API.</p>
<video width="100%" src="https://github.com/user-attachments/assets/ea6d18cb-8589-422b-9bba-56332d9f61de" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-gemini">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-gemini/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🗣️ OpenAI Real Time Voice API</h3>
<p>Talk to ChatGPT in real time using OpenAI's voice API.</p>
<video width="100%" src="https://github.com/user-attachments/assets/178bdadc-f17b-461a-8d26-e915c632ff80" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-openai">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-openai/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🤖 Hello Computer</h3>
<p>Say computer before asking your question!</p>
<video width="100%" src="https://github.com/user-attachments/assets/afb2a3ef-c1ab-4cfb-872d-578f895a10d5" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/hello-computer">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/hello-computer/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🤖 Llama Code Editor</h3>
<p>Create and edit HTML pages with just your voice! Powered by SambaNova systems.</p>
<video width="100%" src="https://github.com/user-attachments/assets/98523cf3-dac8-4127-9649-d91a997e3ef5" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Claude</h3>
<p>Use the Anthropic and Play.Ht APIs to have an audio conversation with Claude.</p>
-<video width="100%" src="https://github.com/user-attachments/assets/650bc492-798e-4995-8cef-159e1cfc2185" controls></video>
+<video width="100%" src="https://github.com/user-attachments/assets/fb6ef07f-3ccd-444a-997b-9bc9bdc035d3" controls></video>
<p>
-<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-claude">Demo</a> |
-<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-claude/blob/main/app.py">Code</a>
+<a href="https://huggingface.co/spaces/fastrtc/talk-to-claude">Demo</a> |
+<a href="https://huggingface.co/spaces/fastrtc/talk-to-claude/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🎵 Whisper Transcription</h3>
<p>Have whisper transcribe your speech in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/87603053-acdc-4c8a-810f-f618c49caafb" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/whisper-realtime">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/whisper-realtime/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>📷 Yolov10 Object Detection</h3>
<p>Run the Yolov10 model on a user webcam stream in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/f82feb74-a071-4e81-9110-a01989447ceb" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/object-detection">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/object-detection/blob/main/app.py">Code</a>
</p>
</td>
</tr>
@@ -76,366 +149,169 @@ https://freddyaboulton.github.io/gradio-webrtc/
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🤖 Llama Code Editor</h3>
<p>Create and edit HTML pages with just your voice! Powered by SambaNova systems.</p>
<video width="100%" src="https://github.com/user-attachments/assets/a09647f1-33e1-4154-a5a3-ffefda8a736a" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Ultravox</h3>
<p>Talk to Fixie.AI's audio-native Ultravox LLM with the transformers library.</p>
<video width="100%" src="https://github.com/user-attachments/assets/e6e62482-518c-4021-9047-9da14cd82be1" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-ultravox/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🗣️ Talk to Llama 3.2 3b</h3>
<p>Use the Lepton API to make Llama 3.2 talk back to you!</p>
<video width="100%" src="https://github.com/user-attachments/assets/3ee37a6b-0892-45f5-b801-73188fdfad9a" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/llama-3.2-3b-voice-webrtc/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🤖 Talk to Qwen2-Audio</h3>
<p>Qwen2-Audio is a SOTA audio-to-text LLM developed by Alibaba.</p>
<video width="100%" src="https://github.com/user-attachments/assets/c821ad86-44cc-4d0c-8dc4-8c02ad1e5dc8" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-qwen-webrtc/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>📷 Yolov10 Object Detection</h3>
<p>Run the Yolov10 model on a user webcam stream in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/c90d8c9d-d2d5-462e-9e9b-af969f2ea73c" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>📷 Video Object Detection with RT-DETR</h3>
<p>Upload a video and stream out frames with detected objects (powered by the RT-DETR model).</p>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc/blob/main/app.py">Code</a>
</p>
</td>
</tr>

<tr>
<td width="50%">
<h3>🔊 Text-to-Speech with Parler</h3>
<p>Stream out audio generated by Parler TTS!</p>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
</td>
</tr>
</table>
## Usage

This is a shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).

To get started with WebRTC streams, all that's needed is to import the `WebRTC` component from this package and implement its `stream` event.

### Reply on Pause

Typically, you want to run an AI model that generates audio when the user has stopped speaking. This can be done by wrapping a python generator with the `ReplyOnPause` class and passing it to the `stream` event of the `WebRTC` component.

```py
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC, ReplyOnPause


def response(audio: tuple[int, np.ndarray]):  # (1)
    """This function must yield audio frames"""
    ...
    for numpy_array in generated_audio:
        yield (sampling_rate, numpy_array, "mono")  # (2)


with gr.Blocks() as demo:
    gr.HTML(
        """
        <h1 style='text-align: center'>
        Chat (Powered by WebRTC ⚡️)
        </h1>
        """
    )
    with gr.Column():
        with gr.Group():
            audio = WebRTC(
                mode="send-receive",  # (3)
                modality="audio",
            )
        audio.stream(fn=ReplyOnPause(response),
                     inputs=[audio], outputs=[audio],  # (4)
                     time_limit=60)  # (5)

demo.launch()
```

1. The python generator will receive the **entire** audio up until the user stopped. It will be a tuple of the form (sampling_rate, numpy array of audio). The array will have a shape of (1, num_samples). You can also pass in additional input components.
2. The generator must yield audio chunks as a tuple of (sampling_rate, numpy audio array). Each numpy audio array must have a shape of (1, num_samples).
3. The `mode` and `modality` arguments must be set to `"send-receive"` and `"audio"`.
4. The `WebRTC` component must be the first input and output component.
5. Set a `time_limit` to control how long a conversation will last. If the `concurrency_count` is 1 (default), only one conversation will be handled at a time.
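To make notes 1 and 2 concrete, here is a minimal sketch of the tuples involved (the sample rate and array contents are made up for illustration):

```python
import numpy as np

# What the generator receives when the user pauses: the whole utterance so far,
# as (sampling_rate, array) with array shape (1, num_samples).
sampling_rate = 48000
received_audio = (sampling_rate, np.zeros((1, 2 * sampling_rate), dtype=np.int16))  # 2 s of mono audio

# What the generator yields back: chunks in the same (rate, array) layout,
# each with shape (1, num_samples).
chunk = (sampling_rate, np.zeros((1, 480), dtype=np.int16))  # 10 ms at 48 kHz
```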
### Reply On Stopwords

You can configure your AI model to run whenever a set of "stop words" are detected, like "Hey Siri" or "computer", with the `ReplyOnStopWords` class.

The API is similar to `ReplyOnPause` with the addition of a `stop_words` parameter.

```py
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC, ReplyOnStopWords


def response(audio: tuple[int, np.ndarray]):
    """This function must yield audio frames"""
    ...
    for numpy_array in generated_audio:
        yield (sampling_rate, numpy_array, "mono")


with gr.Blocks() as demo:
    gr.HTML(
        """
        <h1 style='text-align: center'>
        Chat (Powered by WebRTC ⚡️)
        </h1>
        """
    )
    with gr.Column():
        with gr.Group():
            webrtc = WebRTC(
                mode="send",
                modality="audio",
            )
        webrtc.stream(ReplyOnStopWords(response,
                                       input_sample_rate=16000,
                                       stop_words=["computer"]),  # (1)
                      inputs=[webrtc],
                      outputs=[webrtc], time_limit=90,
                      concurrency_limit=10)

demo.launch()
```

1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".
### Audio Server-To-Client

To stream only from the server to the client, implement a python generator and pass it to the component's `stream` event. The stream event must also specify a `trigger` corresponding to a UI interaction that starts the stream. In this case, it's a button click.

```py
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC
from pydub import AudioSegment


def generation(num_steps):
    for _ in range(num_steps):
        segment = AudioSegment.from_file("audio_file.wav")
        array = np.array(segment.get_array_of_samples()).reshape(1, -1)
        yield (segment.frame_rate, array)


with gr.Blocks() as demo:
    audio = WebRTC(label="Stream", mode="receive",  # (1)
                   modality="audio")
    num_steps = gr.Slider(label="Number of Steps", minimum=1,
                          maximum=10, step=1, value=5)
    button = gr.Button("Generate")

    audio.stream(
        fn=generation, inputs=[num_steps], outputs=[audio],
        trigger=button.click  # (2)
    )
```

1. Set `mode="receive"` to only receive audio from the server.
2. The `stream` event must take a `trigger` that corresponds to the gradio event that starts the stream. In this case, it's the button click.
### Video Input/Output Streaming

Set up a video input/output stream to continuously receive webcam frames from the user and run an arbitrary python function to return a modified frame.

```py
import gradio as gr
from gradio_webrtc import WebRTC


def detection(image, conf_threshold=0.3):  # (1)
    ...  # your detection code here
    return modified_frame  # (2)


with gr.Blocks() as demo:
    image = WebRTC(label="Stream", mode="send-receive", modality="video")  # (3)
    conf_threshold = gr.Slider(
        label="Confidence Threshold",
        minimum=0.0,
        maximum=1.0,
        step=0.05,
        value=0.30,
    )
    image.stream(
        fn=detection,
        inputs=[image, conf_threshold],  # (4)
        outputs=[image], time_limit=10
    )

if __name__ == "__main__":
    demo.launch()
```

1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
2. The function must return a numpy array. It can take arbitrary values from other components.
3. Set `modality="video"` and `mode="send-receive"`.
4. The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.
### Server-to-Client Only

Set up a server-to-client stream to stream video from an arbitrary user interaction.

```py
import gradio as gr
from gradio_webrtc import WebRTC
import cv2


def generation():
    url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
    cap = cv2.VideoCapture(url)
    iterating = True
    while iterating:
        iterating, frame = cap.read()
        yield frame  # (1)


with gr.Blocks() as demo:
    output_video = WebRTC(label="Video Stream", mode="receive",  # (2)
                          modality="video")
    button = gr.Button("Start", variant="primary")
    output_video.stream(
        fn=generation, inputs=None, outputs=[output_video],
        trigger=button.click  # (3)
    )

demo.launch()
```

1. The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
2. Set `mode="receive"` to only receive video from the server.
3. The `trigger` parameter is the gradio event that will trigger the stream. In this case, the button click event.
### Additional Outputs

In order to modify other components from within the WebRTC stream, you must yield an instance of `AdditionalOutputs` and add an `on_additional_outputs` event to the `WebRTC` component.

This is common for displaying a multimodal text/audio conversation in a Chatbot UI.

``` py title="Additional Outputs"
import gradio as gr
import numpy as np
from gradio_webrtc import AdditionalOutputs, ReplyOnPause, WebRTC


def transcribe(audio: tuple[int, np.ndarray],
               transformers_convo: list[dict],
               gradio_convo: list[dict]):
    response = model.generate(**inputs, max_length=256)  # excerpt: model/inputs defined elsewhere
    transformers_convo.append({"role": "assistant", "content": response})
    gradio_convo.append({"role": "assistant", "content": response})
    yield AdditionalOutputs(transformers_convo, gradio_convo)  # (1)


with gr.Blocks() as demo:
    gr.HTML(
        """
        <h1 style='text-align: center'>
        Talk to Qwen2Audio (Powered by WebRTC ⚡️)
        </h1>
        """
    )
    transformers_convo = gr.State(value=[])
    with gr.Row():
        with gr.Column():
            audio = WebRTC(
                label="Stream",
                mode="send",  # (2)
                modality="audio",
            )
        with gr.Column():
            transcript = gr.Chatbot(label="transcript", type="messages")

    audio.stream(ReplyOnPause(transcribe),
                 inputs=[audio, transformers_convo, transcript],
                 outputs=[audio], time_limit=90)
    audio.on_additional_outputs(lambda s, a: (s, a),  # (3)
                                outputs=[transformers_convo, transcript],
                                queue=False, show_progress="hidden")

demo.launch()
```

1. Pass your data to `AdditionalOutputs` and yield it.
2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.
## Deployment

When deploying in a cloud environment (like Hugging Face Spaces, EC2, etc.), you need to set up a TURN server to relay the WebRTC traffic. The easiest way to do this is to use a service like Twilio.

```python
from twilio.rest import Client
import os

import gradio as gr
from gradio_webrtc import WebRTC

account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")

client = Client(account_sid, auth_token)

token = client.tokens.create()

rtc_configuration = {
    "iceServers": token.ice_servers,
    "iceTransportPolicy": "relay",
}

with gr.Blocks() as demo:
    ...
    rtc = WebRTC(rtc_configuration=rtc_configuration, ...)
    ...
```

## Quickstart

### Echo Audio

```python
from fastrtc import Stream, ReplyOnPause
import numpy as np


def echo(audio: tuple[int, np.ndarray]):
    # The function will be passed the audio until the user pauses
    # Implement any iterator that yields audio
    # See "LLM Voice Chat" for a more complete example
    yield audio


stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
)
```

### LLM Voice Chat

```py
from fastrtc import (
    ReplyOnPause, AdditionalOutputs, Stream,
    audio_to_bytes, aggregate_bytes_to_16bit
)
import gradio as gr
import numpy as np
from groq import Groq
import anthropic
from elevenlabs import ElevenLabs

groq_client = Groq()
claude_client = anthropic.Anthropic()
tts_client = ElevenLabs()


# See "Talk to Claude" in Cookbook for an example of how to keep
# track of the chat history.
def response(
    audio: tuple[int, np.ndarray],
):
    prompt = groq_client.audio.transcriptions.create(
        file=("audio-file.mp3", audio_to_bytes(audio)),
        model="whisper-large-v3-turbo",
        response_format="verbose_json",
    ).text
    response = claude_client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    response_text = " ".join(
        block.text
        for block in response.content
        if getattr(block, "type", None) == "text"
    )
    iterator = tts_client.text_to_speech.convert_as_stream(
        text=response_text,
        voice_id="JBFqnCBsd6RMkjVDRZzb",
        model_id="eleven_multilingual_v2",
        output_format="pcm_24000"
    )
    for chunk in aggregate_bytes_to_16bit(iterator):
        audio_array = np.frombuffer(chunk, dtype=np.int16).reshape(1, -1)
        yield (24000, audio_array)


stream = Stream(
    modality="audio",
    mode="send-receive",
    handler=ReplyOnPause(response),
)
```
### Webcam Stream

```python
from fastrtc import Stream
import numpy as np


def flip_vertically(image):
    return np.flip(image, axis=0)


stream = Stream(
    handler=flip_vertically,
    modality="video",
    mode="send-receive",
)
```
### Object Detection

```python
from fastrtc import Stream
import gradio as gr
import cv2
from huggingface_hub import hf_hub_download
from .inference import YOLOv10

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)

# git clone https://huggingface.co/spaces/fastrtc/object-detection
# for YOLOv10 implementation
model = YOLOv10(model_file)


def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return cv2.resize(new_image, (500, 500))


stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ]
)
```
## Running the Stream

Run the stream in any of the following ways:

- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your already existing production system.

### Gradio

```py
stream.ui.launch()
```

### Telephone (Audio Only)

```py
stream.fastphone()
```

### FastAPI

```py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()
stream.mount(app)


# Optional: Add routes
@app.get("/")
async def _():
    return HTMLResponse(content=open("index.html").read())

# uvicorn app:app --host 0.0.0.0 --port 8000
```
backend/fastrtc/__init__.py

@@ -3,11 +3,29 @@ from .credentials import (
    get_turn_credentials,
    get_twilio_turn_credentials,
)
-from .reply_on_pause import AlgoOptions, ReplyOnPause, SileroVadOptions
+from .pause_detection import (
+    ModelOptions,
+    PauseDetectionModel,
+    SileroVadOptions,
+    get_silero_model,
+)
+from .reply_on_pause import AlgoOptions, ReplyOnPause
from .reply_on_stopwords import ReplyOnStopWords
-from .speech_to_text import stt, stt_for_chunks
+from .speech_to_text import MoonshineSTT, get_stt_model
+from .stream import Stream, UIArgs
+from .text_to_speech import KokoroTTSOptions, get_tts_model
+from .tracks import (
+    AsyncAudioVideoStreamHandler,
+    AsyncStreamHandler,
+    AudioEmitType,
+    AudioVideoStreamHandler,
+    StreamHandler,
+    VideoEmitType,
+    VideoStreamHandler,
+)
from .utils import (
    AdditionalOutputs,
    CloseStream,
    Warning,
    WebRTCError,
    aggregate_bytes_to_16bit,
@@ -15,15 +33,12 @@ from .utils import (
    audio_to_bytes,
    audio_to_file,
    audio_to_float32,
    audio_to_int16,
    get_current_context,
    wait_for_item,
)
from .webrtc import (
    AsyncAudioVideoStreamHandler,
    AsyncStreamHandler,
    AudioVideoStreamHandler,
    StreamHandler,
    WebRTC,
    VideoEmitType,
    AudioEmitType,
)

__all__ = [
@@ -38,17 +53,30 @@ __all__ = [
    "audio_to_bytes",
    "audio_to_file",
    "audio_to_float32",
    "audio_to_int16",
    "get_hf_turn_credentials",
    "get_twilio_turn_credentials",
    "get_turn_credentials",
    "ReplyOnPause",
    "ReplyOnStopWords",
    "SileroVadOptions",
-    "stt",
-    "stt_for_chunks",
+    "get_stt_model",
+    "MoonshineSTT",
    "StreamHandler",
    "Stream",
    "VideoEmitType",
    "WebRTC",
    "WebRTCError",
    "Warning",
    "get_tts_model",
    "KokoroTTSOptions",
    "wait_for_item",
    "UIArgs",
    "ModelOptions",
    "PauseDetectionModel",
    "get_silero_model",
    "SileroVadOptions",
    "VideoStreamHandler",
    "CloseStream",
    "get_current_context",
]

backend/fastrtc/credentials.py

@@ -8,7 +8,7 @@ def get_hf_turn_credentials(token=None):
    if token is None:
        token = os.getenv("HF_TOKEN")
    credentials = requests.get(
-        "https://freddyaboulton-turn-server-login.hf.space/credentials",
+        "https://fastrtc-turn-server-login.hf.space/credentials",
        headers={"X-HF-Access-Token": token},
    )
    if not credentials.status_code == 200:
10	backend/fastrtc/pause_detection/__init__.py (new file)
@@ -0,0 +1,10 @@
from .protocol import ModelOptions, PauseDetectionModel
from .silero import SileroVADModel, SileroVadOptions, get_silero_model

__all__ = [
    "SileroVADModel",
    "SileroVadOptions",
    "PauseDetectionModel",
    "ModelOptions",
    "get_silero_model",
]
20	backend/fastrtc/pause_detection/protocol.py (new file)
@@ -0,0 +1,20 @@
from typing import Any, Protocol, TypeAlias

import numpy as np
from numpy.typing import NDArray

from ..utils import AudioChunk

ModelOptions: TypeAlias = Any


class PauseDetectionModel(Protocol):
    def vad(
        self,
        audio: tuple[int, NDArray[np.int16] | NDArray[np.float32]],
        options: ModelOptions,
    ) -> tuple[float, list[AudioChunk]]: ...

    def warmup(
        self,
    ) -> None: ...
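Because `PauseDetectionModel` is a `Protocol`, any object with matching `vad` and `warmup` methods can be plugged into `ReplyOnPause` via the new `model` parameter added later in this commit. A minimal sketch of a custom implementation; the class, its threshold, and everything inside it are illustrative, not part of the library:

```python
# Hypothetical sketch: a trivial energy-based "pause detector" that satisfies
# the PauseDetectionModel protocol shown above. Not part of this commit.
import numpy as np
from numpy.typing import NDArray


class EnergyVAD:
    def vad(
        self,
        audio: tuple[int, NDArray[np.int16] | NDArray[np.float32]],
        options: None = None,  # this toy model takes no options
    ) -> tuple[float, list]:
        sr, samples = audio
        x = samples.astype(np.float32)
        if samples.dtype == np.int16:
            x = x / 32768.0
        # Count samples above a made-up energy threshold as "speech".
        speech_samples = int(np.sum(np.abs(x) > 0.02))
        return speech_samples / sr, []  # (speech duration in seconds, no chunk info)

    def warmup(self) -> None:
        # Nothing to warm up for this toy model.
        pass


# Since ReplyOnPause accepts `model: PauseDetectionModel | None`,
# an instance could be passed as ReplyOnPause(fn, model=EnergyVAD()).
```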
backend/fastrtc/pause_detection/silero.py

@@ -1,13 +1,16 @@
import logging
import warnings
from dataclasses import dataclass
-from typing import List, Literal, overload
+from functools import lru_cache
+from typing import List

+import click
import numpy as np
from huggingface_hub import hf_hub_download
from numpy.typing import NDArray

from ..utils import AudioChunk
+from .protocol import PauseDetectionModel

logger = logging.getLogger(__name__)

@@ -15,6 +18,26 @@ logger = logging.getLogger(__name__)

# The code below is adapted from https://github.com/gpt-omni/mini-omni/blob/main/utils/vad.py


+@lru_cache
+def get_silero_model() -> PauseDetectionModel:
+    """Returns the VAD model instance and warms it up with dummy data."""
+    # Warm up the model with dummy data
+    try:
+        import importlib.util
+
+        mod = importlib.util.find_spec("onnxruntime")
+        if mod is None:
+            raise RuntimeError("Install fastrtc[vad] to use ReplyOnPause")
+    except (ValueError, ModuleNotFoundError):
+        raise RuntimeError("Install fastrtc[vad] to use ReplyOnPause")
+    model = SileroVADModel()
+    print(click.style("INFO", fg="green") + ":\t Warming up VAD model.")
+    model.warmup()
+    print(click.style("INFO", fg="green") + ":\t VAD model warmed up.")
+    return model


@dataclass
class SileroVadOptions:
    """VAD options.

@@ -239,33 +262,21 @@ class SileroVADModel:

        return speeches

-    @overload
-    def vad(
-        self,
-        audio_tuple: tuple[int, NDArray],
-        vad_parameters: None | SileroVadOptions,
-        return_chunks: Literal[True],
-    ) -> tuple[float, List[AudioChunk]]: ...
-
-    @overload
-    def vad(
-        self,
-        audio_tuple: tuple[int, NDArray],
-        vad_parameters: None | SileroVadOptions,
-        return_chunks: bool = False,
-    ) -> float: ...
+    def warmup(self):
+        for _ in range(10):
+            dummy_audio = np.zeros(102400, dtype=np.float32)
+            self.vad((24000, dummy_audio), None)

    def vad(
        self,
-        audio_tuple: tuple[int, NDArray],
-        vad_parameters: None | SileroVadOptions,
-        return_chunks: bool = False,
-    ) -> float | tuple[float, List[AudioChunk]]:
-        sampling_rate, audio = audio_tuple
-        logger.debug("VAD audio shape input: %s", audio.shape)
+        audio: tuple[int, NDArray[np.float32] | NDArray[np.int16]],
+        options: None | SileroVadOptions,
+    ) -> tuple[float, list[AudioChunk]]:
+        sampling_rate, audio_ = audio
+        logger.debug("VAD audio shape input: %s", audio_.shape)
        try:
-            if audio.dtype != np.float32:
-                audio = audio.astype(np.float32) / 32768.0
+            if audio_.dtype != np.float32:
+                audio_ = audio_.astype(np.float32) / 32768.0
            sr = 16000
            if sr != sampling_rate:
                try:
@@ -274,18 +285,16 @@ class SileroVADModel:
                    raise RuntimeError(
                        "Applying the VAD filter requires the librosa if the input sampling rate is not 16000hz"
                    ) from e
-                audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=sr)
+                audio_ = librosa.resample(audio_, orig_sr=sampling_rate, target_sr=sr)

-            if not vad_parameters:
-                vad_parameters = SileroVadOptions()
-            speech_chunks = self.get_speech_timestamps(audio, vad_parameters)
+            if not options:
+                options = SileroVadOptions()
+            speech_chunks = self.get_speech_timestamps(audio_, options)
            logger.debug("VAD speech chunks: %s", speech_chunks)
-            audio = self.collect_chunks(audio, speech_chunks)
-            logger.debug("VAD audio shape: %s", audio.shape)
-            duration_after_vad = audio.shape[0] / sr
-            if return_chunks:
-                return duration_after_vad, speech_chunks
-            return duration_after_vad
+            audio_ = self.collect_chunks(audio_, speech_chunks)
+            logger.debug("VAD audio shape: %s", audio_.shape)
+            duration_after_vad = audio_.shape[0] / sr
+            return duration_after_vad, speech_chunks
        except Exception as e:
            import math
            import traceback
@@ -293,7 +302,7 @@ class SileroVADModel:
            logger.debug("VAD Exception: %s", str(e))
            exec = traceback.format_exc()
            logger.debug("traceback %s", exec)
-            return math.inf
+            return math.inf, []

    def __call__(self, x, state, sr: int):
        if len(x.shape) == 1:
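After this change, `vad` always returns a `(duration, chunks)` tuple instead of switching its return type on a `return_chunks` flag, which is what lets the `@overload` declarations be deleted. A minimal sketch of calling the new API, assuming `fastrtc[vad]` is installed; the sine-wave input is made up for illustration:

```python
# Sketch of the new vad() return convention.
import numpy as np
from fastrtc import get_silero_model

model = get_silero_model()  # cached via lru_cache; warms up on first call

sr = 16000
t = np.arange(sr, dtype=np.float32) / sr
audio = 0.1 * np.sin(2 * np.pi * 220 * t)  # 1 s of tone; a stand-in for speech

duration, chunks = model.vad((sr, audio), None)  # options=None -> SileroVadOptions()
print(duration)  # seconds of detected speech after VAD filtering
print(chunks)    # list of AudioChunk dicts with "start"/"end" sample indices
```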
1	backend/fastrtc/py.typed (new file)
@@ -0,0 +1 @@
backend/fastrtc/reply_on_pause.py

@@ -1,26 +1,19 @@
import asyncio
import inspect
-from dataclasses import dataclass
-from functools import lru_cache
+from dataclasses import dataclass, field
from logging import getLogger
from threading import Event
-from typing import Any, Callable, Generator, Literal, Union, cast
+from typing import Any, AsyncGenerator, Callable, Generator, Literal, cast

import numpy as np
from numpy.typing import NDArray

-from gradio_webrtc.pause_detection import SileroVADModel, SileroVadOptions
-from gradio_webrtc.webrtc import EmitType, StreamHandler
+from .pause_detection import ModelOptions, PauseDetectionModel, get_silero_model
+from .tracks import EmitType, StreamHandler
+from .utils import create_message, split_output

logger = getLogger(__name__)

counter = 0


-@lru_cache
-def get_vad_model() -> SileroVADModel:
-    """Returns the VAD model instance."""
-    return SileroVADModel()
-

@dataclass
class AlgoOptions:

@@ -40,19 +33,31 @@ class AppState:
    responding: bool = False
    stopped: bool = False
    buffer: np.ndarray | None = None
    responded_audio: bool = False
+    interrupted: asyncio.Event = field(default_factory=asyncio.Event)

    def new(self):
        return AppState()


-ReplyFnGenerator = Union[
-    # For two arguments
-    Callable[
-        [tuple[int, np.ndarray], list[dict[Any, Any]]],
-        Generator[EmitType, None, None],
-    ],
-    Callable[
-        [tuple[int, np.ndarray]],
-        Generator[EmitType, None, None],
-    ],
-]
+ReplyFnGenerator = (
+    Callable[
+        [tuple[int, NDArray[np.int16]], Any],
+        Generator[EmitType, None, None],
+    ]
+    | Callable[
+        [tuple[int, NDArray[np.int16]]],
+        Generator[EmitType, None, None],
+    ]
+    | Callable[
+        [tuple[int, NDArray[np.int16]]],
+        AsyncGenerator[EmitType, None],
+    ]
+    | Callable[
+        [tuple[int, NDArray[np.int16]], Any],
+        AsyncGenerator[EmitType, None],
+    ]
+)


async def iterate(generator: Generator) -> Any:

@@ -63,12 +68,15 @@ class ReplyOnPause(StreamHandler):
    def __init__(
        self,
        fn: ReplyFnGenerator,
+        startup_fn: Callable | None = None,
        algo_options: AlgoOptions | None = None,
-        model_options: SileroVadOptions | None = None,
+        model_options: ModelOptions | None = None,
        can_interrupt: bool = True,
        expected_layout: Literal["mono", "stereo"] = "mono",
        output_sample_rate: int = 24000,
-        output_frame_size: int = 480,
+        output_frame_size: int | None = None,  # Deprecated
        input_sample_rate: int = 48000,
+        model: PauseDetectionModel | None = None,
    ):
        super().__init__(
            expected_layout,
@@ -76,31 +84,46 @@ class ReplyOnPause(StreamHandler):
            output_frame_size,
            input_sample_rate=input_sample_rate,
        )
        self.can_interrupt = can_interrupt
        self.expected_layout: Literal["mono", "stereo"] = expected_layout
        self.output_sample_rate = output_sample_rate
        self.output_frame_size = output_frame_size
-        self.model = get_vad_model()
+        self.model = model or get_silero_model()
        self.fn = fn
        self.is_async = inspect.isasyncgenfunction(fn)
        self.event = Event()
        self.state = AppState()
-        self.generator: Generator[EmitType, None, None] | None = None
+        self.generator: (
+            Generator[EmitType, None, None] | AsyncGenerator[EmitType, None] | None
+        ) = None
        self.model_options = model_options
        self.algo_options = algo_options or AlgoOptions()
+        self.startup_fn = startup_fn

    @property
    def _needs_additional_inputs(self) -> bool:
        return len(inspect.signature(self.fn).parameters) > 1

+    def start_up(self):
+        if self.startup_fn:
+            if self._needs_additional_inputs:
+                self.wait_for_args_sync()
+                args = self.latest_args[1:]
+            else:
+                args = ()
+            self.generator = self.startup_fn(*args)
+            self.event.set()

    def copy(self):
        return ReplyOnPause(
            self.fn,
+            self.startup_fn,
            self.algo_options,
            self.model_options,
            self.can_interrupt,
            self.expected_layout,
            self.output_sample_rate,
            self.output_frame_size,
            self.input_sample_rate,
+            self.model,
        )

    def determine_pause(
@@ -110,7 +133,7 @@ class ReplyOnPause(StreamHandler):
        duration = len(audio) / sampling_rate

        if duration >= self.algo_options.audio_chunk_duration:
-            dur_vad = self.model.vad((sampling_rate, audio), self.model_options)
+            dur_vad, _ = self.model.vad((sampling_rate, audio), self.model_options)
            logger.debug("VAD duration: %s", dur_vad)
            if (
                dur_vad > self.algo_options.started_talking_threshold
@@ -144,14 +167,41 @@ class ReplyOnPause(StreamHandler):
        state.pause_detected = pause_detected

    def receive(self, frame: tuple[int, np.ndarray]) -> None:
-        if self.state.responding:
+        if self.state.responding and not self.can_interrupt:
            return
        self.process_audio(frame, self.state)
        if self.state.pause_detected:
            self.event.set()
+            if self.can_interrupt and self.state.responding:
+                self._close_generator()
+                self.generator = None
+            if self.can_interrupt:
+                self.clear_queue()

+    def _close_generator(self):
+        """Properly close the generator to ensure resources are released."""
+        if self.generator is None:
+            return
+        try:
+            if self.is_async:
+                # For async generators, we need to call aclose()
+                if hasattr(self.generator, "aclose"):
+                    asyncio.run_coroutine_threadsafe(
+                        cast(AsyncGenerator[EmitType, None], self.generator).aclose(),
+                        self.loop,
+                    ).result(timeout=1.0)  # Add timeout to prevent blocking
+            else:
+                # For sync generators, we can just exhaust it or close it
+                if hasattr(self.generator, "close"):
+                    cast(Generator[EmitType, None, None], self.generator).close()
+        except Exception as e:
+            logger.debug(f"Error closing generator: {e}")

    def reset(self):
        super().reset()
+        if self.phone_mode:
+            self.args_set.set()
        self.generator = None
        self.event.clear()
        self.state = AppState()

@@ -164,25 +214,46 @@ class ReplyOnPause(StreamHandler):
            return None
        else:
            if not self.generator:
+                self.send_message_sync(create_message("log", "pause_detected"))
                if self._needs_additional_inputs and not self.args_set.is_set():
-                    asyncio.run_coroutine_threadsafe(
-                        self.wait_for_args(), self.loop
-                    ).result()
+                    if not self.phone_mode:
+                        self.wait_for_args_sync()
+                    else:
+                        self.latest_args = [None]
+                        self.args_set.set()
                logger.debug("Creating generator")
                audio = cast(np.ndarray, self.state.stream).reshape(1, -1)
                if self._needs_additional_inputs:
                    self.latest_args[0] = (self.state.sampling_rate, audio)
-                    self.generator = self.fn(*self.latest_args)
+                    self.generator = self.fn(*self.latest_args)  # type: ignore
                else:
                    self.generator = self.fn((self.state.sampling_rate, audio))  # type: ignore
                logger.debug("Latest args: %s", self.latest_args)
                self.state = self.state.new()
            self.state.responding = True
            try:
                if self.is_async:
-                    return asyncio.run_coroutine_threadsafe(
+                    output = asyncio.run_coroutine_threadsafe(
                        self.async_iterate(self.generator), self.loop
                    ).result()
                else:
-                    return next(self.generator)
+                    output = next(self.generator)  # type: ignore
+                audio, additional_outputs = split_output(output)
+                if audio is not None:
+                    self.send_message_sync(create_message("log", "response_starting"))
+                    self.state.responded_audio = True
+                if self.phone_mode:
+                    if additional_outputs:
+                        self.latest_args = [None] + list(additional_outputs.args)
+                return output
            except (StopIteration, StopAsyncIteration):
+                if not self.state.responded_audio:
+                    self.send_message_sync(create_message("log", "response_starting"))
                self.reset()
            except Exception as e:
                import traceback

                traceback.print_exc()
                logger.debug("Error in ReplyOnPause: %s", e)
                self.reset()
                raise e
backend/fastrtc/reply_on_stopwords.py

@@ -1,19 +1,20 @@
import asyncio
import logging
import re
-from typing import Literal
+from typing import Callable, Literal

import numpy as np

from .reply_on_pause import (
    AlgoOptions,
    AppState,
+    ModelOptions,
+    PauseDetectionModel,
    ReplyFnGenerator,
    ReplyOnPause,
-    SileroVadOptions,
)
from .speech_to_text import get_stt_model, stt_for_chunks
-from .utils import audio_to_float32
+from .utils import audio_to_float32, create_message

logger = logging.getLogger(__name__)

@@ -23,38 +24,49 @@ class ReplyOnStopWordsState(AppState):
    post_stop_word_buffer: np.ndarray | None = None
    started_talking_pre_stop_word: bool = False

    def new(self):
        return ReplyOnStopWordsState()


class ReplyOnStopWords(ReplyOnPause):
    def __init__(
        self,
        fn: ReplyFnGenerator,
        stop_words: list[str],
+        startup_fn: Callable | None = None,
        algo_options: AlgoOptions | None = None,
-        model_options: SileroVadOptions | None = None,
+        model_options: ModelOptions | None = None,
        can_interrupt: bool = True,
        expected_layout: Literal["mono", "stereo"] = "mono",
        output_sample_rate: int = 24000,
-        output_frame_size: int = 480,
+        output_frame_size: int | None = None,  # Deprecated
        input_sample_rate: int = 48000,
+        model: PauseDetectionModel | None = None,
    ):
        super().__init__(
            fn,
            algo_options=algo_options,
+            startup_fn=startup_fn,
            model_options=model_options,
            can_interrupt=can_interrupt,
            expected_layout=expected_layout,
            output_sample_rate=output_sample_rate,
            output_frame_size=output_frame_size,
            input_sample_rate=input_sample_rate,
+            model=model,
        )
        self.stop_words = stop_words
        self.state = ReplyOnStopWordsState()
-        # Download Model
-        get_stt_model()
+        self.stt_model = get_stt_model("moonshine/base")

    def stop_word_detected(self, text: str) -> bool:
        for stop_word in self.stop_words:
            stop_word = stop_word.lower().strip().split(" ")
            if bool(
-                re.search(r"\b" + r"\s+".join(map(re.escape, stop_word)) + r"\b", text)
+                re.search(
+                    r"\b" + r"\s+".join(map(re.escape, stop_word)) + r"[.,!?]*\b",
+                    text.lower(),
+                )
            ):
                logger.debug("Stop word detected: %s", stop_word)
                return True
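The new pattern lower-cases the transcript and tolerates trailing punctuation that the STT model may attach to the final word. A standalone sketch of just the regex logic, outside the class; the sample transcripts are made up:

```python
import re


def stop_word_detected(text: str, stop_words: list[str]) -> bool:
    for stop_word in stop_words:
        words = stop_word.lower().strip().split(" ")
        # \s+ joins multi-word stop phrases; [.,!?]* absorbs punctuation
        # attached to the final word by the transcriber.
        pattern = r"\b" + r"\s+".join(map(re.escape, words)) + r"[.,!?]*\b"
        if re.search(pattern, text.lower()):
            return True
    return False


print(stop_word_detected("Okay Computer, open the editor", ["ok computer", "computer"]))  # True
print(stop_word_detected("unrelated chatter", ["computer"]))  # False
```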
@@ -64,7 +76,7 @@ class ReplyOnStopWords(ReplyOnPause):
        self,
    ):
        if self.channel:
-            self.channel.send("stopword")
+            self.channel.send(create_message("stopword", ""))
            logger.debug("Sent stopword")

    def send_stopword(self):

@@ -95,9 +107,10 @@ class ReplyOnStopWords(ReplyOnPause):
                dur_vad, chunks = self.model.vad(
                    (16000, state.post_stop_word_buffer),
                    self.model_options,
-                    return_chunks=True,
                )
-                text = stt_for_chunks((16000, state.post_stop_word_buffer), chunks)
+                text = stt_for_chunks(
+                    self.stt_model, (16000, state.post_stop_word_buffer), chunks
+                )
                logger.debug(f"STT: {text}")
                state.stop_word_detected = self.stop_word_detected(text)
                if state.stop_word_detected:
@@ -105,7 +118,7 @@ class ReplyOnStopWords(ReplyOnPause):
                    self.send_stopword()
                    state.buffer = None
            else:
-                dur_vad = self.model.vad((sampling_rate, audio), self.model_options)
+                dur_vad, _ = self.model.vad((sampling_rate, audio), self.model_options)
                logger.debug("VAD duration: %s", dur_vad)
                if (
                    dur_vad > self.algo_options.started_talking_threshold
@@ -138,10 +151,13 @@ class ReplyOnStopWords(ReplyOnPause):
        return ReplyOnStopWords(
            self.fn,
            self.stop_words,
+            self.startup_fn,
            self.algo_options,
            self.model_options,
            self.can_interrupt,
            self.expected_layout,
            self.output_sample_rate,
            self.output_frame_size,
            self.input_sample_rate,
+            self.model,
        )
3	backend/fastrtc/speech_to_text/__init__.py (new file)
@@ -0,0 +1,3 @@
from .stt_ import MoonshineSTT, get_stt_model, stt_for_chunks

__all__ = ["get_stt_model", "MoonshineSTT", "get_stt_model", "stt_for_chunks"]
76	backend/fastrtc/speech_to_text/stt_.py (new file)
@@ -0,0 +1,76 @@
from functools import lru_cache
from pathlib import Path
from typing import Literal, Protocol

import click
import librosa
import numpy as np
from numpy.typing import NDArray

from ..utils import AudioChunk, audio_to_float32

curr_dir = Path(__file__).parent


class STTModel(Protocol):
    def stt(self, audio: tuple[int, NDArray[np.int16 | np.float32]]) -> str: ...


class MoonshineSTT(STTModel):
    def __init__(
        self, model: Literal["moonshine/base", "moonshine/tiny"] = "moonshine/base"
    ):
        try:
            from moonshine_onnx import MoonshineOnnxModel, load_tokenizer
        except (ImportError, ModuleNotFoundError):
            raise ImportError(
                "Install fastrtc[stt] for speech-to-text and stopword detection support."
            )

        self.model = MoonshineOnnxModel(model_name=model)
        self.tokenizer = load_tokenizer()

    def stt(self, audio: tuple[int, NDArray[np.int16 | np.float32]]) -> str:
        sr, audio_np = audio  # type: ignore
        if audio_np.dtype == np.int16:
            audio_np = audio_to_float32(audio)
        if sr != 16000:
            audio_np: NDArray[np.float32] = librosa.resample(
                audio_np, orig_sr=sr, target_sr=16000
            )
        if audio_np.ndim == 1:
            audio_np = audio_np.reshape(1, -1)
        tokens = self.model.generate(audio_np)
        return self.tokenizer.decode_batch(tokens)[0]


@lru_cache
def get_stt_model(
    model: Literal["moonshine/base", "moonshine/tiny"] = "moonshine/base",
) -> STTModel:
    import os

    os.environ["TOKENIZERS_PARALLELISM"] = "false"
    m = MoonshineSTT(model)
    from moonshine_onnx import load_audio

    audio = load_audio(str(curr_dir / "test_file.wav"))
    print(click.style("INFO", fg="green") + ":\t Warming up STT model.")

    m.stt((16000, audio))
    print(click.style("INFO", fg="green") + ":\t STT model warmed up.")
    return m


def stt_for_chunks(
    stt_model: STTModel,
    audio: tuple[int, NDArray[np.int16 | np.float32]],
    chunks: list[AudioChunk],
) -> str:
    sr, audio_np = audio
    return " ".join(
        [
            stt_model.stt((sr, audio_np[chunk["start"] : chunk["end"]]))
            for chunk in chunks
        ]
    )
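`get_stt_model` is the public entry point (it is exported from the package `__init__` earlier in this commit), and `stt_for_chunks` runs the model per VAD chunk. A minimal usage sketch, assuming `fastrtc[stt]` is installed; the silent buffer is a made-up stand-in for real speech:

```python
import numpy as np
from fastrtc import get_stt_model

model = get_stt_model("moonshine/base")  # downloads and warms up on first call

# Any (sample_rate, samples) pair works; stt() resamples to 16 kHz itself.
sr = 16000
samples = np.zeros(sr, dtype=np.float32)  # stand-in for one second of speech
text = model.stt((sr, samples))
print(text)
```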
BIN	backend/fastrtc/speech_to_text/test_file.wav (new file; binary not shown)
761	backend/fastrtc/stream.py (new file)
@@ -0,0 +1,761 @@
import logging
from pathlib import Path
from typing import (
    Any,
    AsyncContextManager,
    Callable,
    Literal,
    Optional,
    TypedDict,
    cast,
)

import gradio as gr
from fastapi import FastAPI, Request, WebSocket
from fastapi.responses import HTMLResponse
from gradio import Blocks
from gradio.components.base import Component
from pydantic import BaseModel
from typing_extensions import NotRequired

from .tracks import HandlerType, StreamHandlerImpl
from .webrtc import WebRTC
from .webrtc_connection_mixin import WebRTCConnectionMixin
from .websocket import WebSocketHandler

logger = logging.getLogger(__name__)

curr_dir = Path(__file__).parent


class Body(BaseModel):
    sdp: Optional[str] = None
    candidate: Optional[dict[str, Any]] = None
    type: str
    webrtc_id: str


class UIArgs(TypedDict):
    title: NotRequired[str]
    """Title of the demo"""
    subtitle: NotRequired[str]
    """Subtitle of the demo. Text will be centered and displayed below the title."""
    icon: NotRequired[str]
    """Icon to display on the button instead of the wave animation. The icon should be a path/url to a .svg/.png/.jpeg file."""
    icon_button_color: NotRequired[str]
    """Color of the icon button. Default is var(--color-accent) of the demo theme."""
    pulse_color: NotRequired[str]
    """Color of the pulse animation. Default is var(--color-accent) of the demo theme."""
    icon_radius: NotRequired[int]
    """Border radius of the icon button expressed as a percentage of the button size. Default is 50%."""
    send_input_on: NotRequired[Literal["submit", "change"]]
    """When to send the input to the handler. Default is "change".
    If "submit", the input will be sent when the submit event is triggered by the user.
    If "change", the input will be sent whenever the user changes the input value.
    """
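Since `UIArgs` is a `TypedDict` of optional keys, the auto-generated UI can be customized by passing a plain dict as `Stream(ui_args=...)`. A hedged sketch; the handler and all values are placeholders, not from this commit:

```python
# Sketch: customizing the built-in UI via ui_args. Values are illustrative.
from fastrtc import Stream, ReplyOnPause


def echo(audio):
    yield audio  # placeholder handler


stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
    ui_args={
        "title": "My Voice App",
        "subtitle": "Powered by FastRTC",
        "send_input_on": "submit",  # only forward inputs on explicit submit
    },
)
stream.ui.launch()
```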
class Stream(WebRTCConnectionMixin):
    def __init__(
        self,
        handler: HandlerType,
        *,
        additional_outputs_handler: Callable | None = None,
        mode: Literal["send-receive", "receive", "send"] = "send-receive",
        modality: Literal["video", "audio", "audio-video"] = "video",
        concurrency_limit: int | None | Literal["default"] = "default",
        time_limit: float | None = None,
        rtp_params: dict[str, Any] | None = None,
        rtc_configuration: dict[str, Any] | None = None,
        additional_inputs: list[Component] | None = None,
        additional_outputs: list[Component] | None = None,
        ui_args: UIArgs | None = None,
    ):
        WebRTCConnectionMixin.__init__(self)
        self.mode = mode
        self.modality = modality
        self.rtp_params = rtp_params
        self.event_handler = handler
        self.concurrency_limit = cast(
            int,
            1 if concurrency_limit in ["default", None] else concurrency_limit,
        )
        self.concurrency_limit_gradio = cast(
            int | Literal["default"] | None, concurrency_limit
        )
        self.time_limit = time_limit
        self.additional_output_components = additional_outputs
        self.additional_input_components = additional_inputs
        self.additional_outputs_handler = additional_outputs_handler
        self.rtc_configuration = rtc_configuration
        self._ui = self._generate_default_ui(ui_args)
        self._ui.launch = self._wrap_gradio_launch(self._ui.launch)

    def mount(self, app: FastAPI, path: str = ""):
        from fastapi import APIRouter

        router = APIRouter(prefix=path)
        router.post("/webrtc/offer")(self.offer)
        router.websocket("/telephone/handler")(self.telephone_handler)
        router.post("/telephone/incoming")(self.handle_incoming_call)
        router.websocket("/websocket/offer")(self.websocket_offer)
        lifespan = self._inject_startup_message(app.router.lifespan_context)
        app.router.lifespan_context = lifespan
        app.include_router(router)
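As a usage sketch, the routes registered by `mount` can be served from any FastAPI app (the app name, placeholder handler, and port are illustrative):

```python
# Hedged sketch: serving /webrtc/offer, /websocket/offer and /telephone/*
# from an existing FastAPI app. `detection` is a placeholder video handler.
import uvicorn
from fastapi import FastAPI
from fastrtc import Stream

def detection(frame):
    return frame  # placeholder: return each frame unchanged

stream = Stream(handler=detection, modality="video", mode="send-receive")

app = FastAPI()
stream.mount(app)

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)
```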
    @staticmethod
    def print_error(env: Literal["colab", "spaces"]):
        import click

        print(
            click.style("ERROR", fg="red")
            + f":\t Running in {env} is not possible without providing a valid rtc_configuration. "
            + "See "
            + click.style("https://fastrtc.org/deployment/", fg="cyan")
            + " for more information."
        )
        raise RuntimeError(
            f"Running in {env} is not possible without providing a valid rtc_configuration. "
            + "See https://fastrtc.org/deployment/ for more information."
        )

    def _check_colab_or_spaces(self):
        from gradio.utils import colab_check, get_space

        if colab_check() and not self.rtc_configuration:
            self.print_error("colab")
        if get_space() and not self.rtc_configuration:
            self.print_error("spaces")

    def _wrap_gradio_launch(self, callable):
        import contextlib

        def wrapper(*args, **kwargs):
            lifespan = kwargs.get("app_kwargs", {}).get("lifespan", None)

            @contextlib.asynccontextmanager
            async def new_lifespan(app: FastAPI):
                if lifespan is None:
                    self._check_colab_or_spaces()
                    yield
                else:
                    async with lifespan(app):
                        self._check_colab_or_spaces()
                        yield

            if "app_kwargs" not in kwargs:
                kwargs["app_kwargs"] = {}
            kwargs["app_kwargs"]["lifespan"] = new_lifespan
            return callable(*args, **kwargs)

        return wrapper

    def _inject_startup_message(
        self, lifespan: Callable[[FastAPI], AsyncContextManager] | None = None
    ):
        import contextlib

        import click

        def print_startup_message():
            self._check_colab_or_spaces()
            print(
                click.style("INFO", fg="green")
                + ":\t Visit "
                + click.style("https://fastrtc.org/userguide/api/", fg="cyan")
                + " for WebRTC or WebSocket API docs."
            )

        @contextlib.asynccontextmanager
        async def new_lifespan(app: FastAPI):
            if lifespan is None:
                print_startup_message()
                yield
            else:
                async with lifespan(app):
                    print_startup_message()
                    yield

        return new_lifespan

    def _generate_default_ui(
        self,
        ui_args: UIArgs | None = None,
    ):
        ui_args = ui_args or {}
        same_components = []
        additional_input_components = self.additional_input_components or []
        additional_output_components = self.additional_output_components or []
        if additional_output_components and not self.additional_outputs_handler:
            raise ValueError(
                "additional_outputs_handler must be provided if there are additional output components."
            )
        if additional_input_components and additional_output_components:
            same_components = [
                component
                for component in additional_input_components
                if component in additional_output_components
            ]
            for component in additional_output_components:
                if component in same_components:
                    same_components.append(component)
        if self.modality == "video" and self.mode == "receive":
            with gr.Blocks() as demo:
                gr.HTML(
                    f"""
                    <h1 style='text-align: center'>
                    {ui_args.get("title", "Video Streaming (Powered by FastRTC ⚡️)")}
                    </h1>
                    """
                )
                if ui_args.get("subtitle"):
                    gr.Markdown(
                        f"""
                        <div style='text-align: center'>
                        {ui_args.get("subtitle")}
                        </div>
                        """
                    )
                with gr.Row():
                    with gr.Column():
                        if additional_input_components:
                            for component in additional_input_components:
                                component.render()
                        button = gr.Button("Start Stream", variant="primary")
                    with gr.Column():
                        output_video = WebRTC(
                            label="Video Stream",
                            rtc_configuration=self.rtc_configuration,
                            mode="receive",
                            modality="video",
                        )
                        for component in additional_output_components:
                            if component not in same_components:
                                component.render()
                output_video.stream(
                    fn=self.event_handler,
                    inputs=self.additional_input_components,
                    outputs=[output_video],
                    trigger=button.click,
                    time_limit=self.time_limit,
                    concurrency_limit=self.concurrency_limit,  # type: ignore
                    send_input_on=ui_args.get("send_input_on", "change"),
                )
                if additional_output_components:
                    assert self.additional_outputs_handler
                    output_video.on_additional_outputs(
                        self.additional_outputs_handler,
                        concurrency_limit=self.concurrency_limit_gradio,  # type: ignore
                        inputs=additional_output_components,
                        outputs=additional_output_components,
                    )
        elif self.modality == "video" and self.mode == "send":
            with gr.Blocks() as demo:
                gr.HTML(
                    f"""
                    <h1 style='text-align: center'>
                    {ui_args.get("title", "Video Streaming (Powered by FastRTC ⚡️)")}
                    </h1>
                    """
                )
                if ui_args.get("subtitle"):
                    gr.Markdown(
                        f"""
                        <div style='text-align: center'>
                        {ui_args.get("subtitle")}
                        </div>
                        """
                    )
                with gr.Row():
                    if additional_input_components:
                        with gr.Column():
                            for component in additional_input_components:
                                component.render()
                    with gr.Column():
                        output_video = WebRTC(
                            label="Video Stream",
                            rtc_configuration=self.rtc_configuration,
                            mode="send",
                            modality="video",
                        )
                        for component in additional_output_components:
                            if component not in same_components:
                                component.render()
                output_video.stream(
                    fn=self.event_handler,
                    inputs=[output_video] + additional_input_components,
                    outputs=[output_video],
                    time_limit=self.time_limit,
                    concurrency_limit=self.concurrency_limit,  # type: ignore
                    send_input_on=ui_args.get("send_input_on", "change"),
                )
                if additional_output_components:
                    assert self.additional_outputs_handler
                    output_video.on_additional_outputs(
                        self.additional_outputs_handler,
                        concurrency_limit=self.concurrency_limit_gradio,  # type: ignore
                        inputs=additional_output_components,
                        outputs=additional_output_components,
                    )
        elif self.modality == "video" and self.mode == "send-receive":
            css = """.my-group {max-width: 600px !important; max-height: 600px !important;}
            .my-column {display: flex !important; justify-content: center !important; align-items: center !important;}"""

            with gr.Blocks(css=css) as demo:
                gr.HTML(
                    f"""
                    <h1 style='text-align: center'>
                    {ui_args.get("title", "Video Streaming (Powered by FastRTC ⚡️)")}
                    </h1>
                    """
                )
                if ui_args.get("subtitle"):
                    gr.Markdown(
                        f"""
                        <div style='text-align: center'>
                        {ui_args.get("subtitle")}
                        </div>
                        """
                    )
                with gr.Column(elem_classes=["my-column"]):
                    with gr.Group(elem_classes=["my-group"]):
                        image = WebRTC(
                            label="Stream",
                            rtc_configuration=self.rtc_configuration,
                            mode="send-receive",
                            modality="video",
                        )
                        for component in additional_input_components:
                            component.render()
                    if additional_output_components:
                        with gr.Column():
                            for component in additional_output_components:
                                if component not in same_components:
                                    component.render()

                image.stream(
                    fn=self.event_handler,
                    inputs=[image] + additional_input_components,
                    outputs=[image],
                    time_limit=self.time_limit,
                    concurrency_limit=self.concurrency_limit,  # type: ignore
                    send_input_on=ui_args.get("send_input_on", "change"),
                )
                if additional_output_components:
                    assert self.additional_outputs_handler
                    image.on_additional_outputs(
                        self.additional_outputs_handler,
                        inputs=additional_output_components,
                        outputs=additional_output_components,
                        concurrency_limit=self.concurrency_limit_gradio,  # type: ignore
                    )
        elif self.modality == "audio" and self.mode == "receive":
            with gr.Blocks() as demo:
                gr.HTML(
                    f"""
                    <h1 style='text-align: center'>
                    {ui_args.get("title", "Audio Streaming (Powered by FastRTC ⚡️)")}
                    </h1>
                    """
                )
                if ui_args.get("subtitle"):
                    gr.Markdown(
                        f"""
                        <div style='text-align: center'>
                        {ui_args.get("subtitle")}
                        </div>
                        """
                    )
                with gr.Row():
                    with gr.Column():
                        for component in additional_input_components:
                            component.render()
                        button = gr.Button("Start Stream", variant="primary")
                    if additional_output_components:
                        with gr.Column():
                            output_video = WebRTC(
                                label="Audio Stream",
                                rtc_configuration=self.rtc_configuration,
                                mode="receive",
                                modality="audio",
                                icon=ui_args.get("icon"),
                                icon_button_color=ui_args.get("icon_button_color"),
                                pulse_color=ui_args.get("pulse_color"),
                                icon_radius=ui_args.get("icon_radius"),
                            )
                            for component in additional_output_components:
                                if component not in same_components:
                                    component.render()
                output_video.stream(
                    fn=self.event_handler,
                    inputs=self.additional_input_components,
                    outputs=[output_video],
                    trigger=button.click,
                    time_limit=self.time_limit,
                    concurrency_limit=self.concurrency_limit,  # type: ignore
                    send_input_on=ui_args.get("send_input_on", "change"),
                )
                if additional_output_components:
                    assert self.additional_outputs_handler
                    output_video.on_additional_outputs(
                        self.additional_outputs_handler,
                        inputs=additional_output_components,
                        outputs=additional_output_components,
                        concurrency_limit=self.concurrency_limit_gradio,  # type: ignore
                    )
        elif self.modality == "audio" and self.mode == "send":
            with gr.Blocks() as demo:
                gr.HTML(
                    f"""
                    <h1 style='text-align: center'>
                    {ui_args.get("title", "Audio Streaming (Powered by FastRTC ⚡️)")}
                    </h1>
                    """
                )
                if ui_args.get("subtitle"):
                    gr.Markdown(
                        f"""
                        <div style='text-align: center'>
                        {ui_args.get("subtitle")}
                        </div>
                        """
                    )
                with gr.Row():
                    with gr.Column():
                        with gr.Group():
                            image = WebRTC(
                                label="Stream",
                                rtc_configuration=self.rtc_configuration,
                                mode="send",
                                modality="audio",
                                icon=ui_args.get("icon"),
                                icon_button_color=ui_args.get("icon_button_color"),
                                pulse_color=ui_args.get("pulse_color"),
                                icon_radius=ui_args.get("icon_radius"),
                            )
                        for component in additional_input_components:
                            if component not in same_components:
                                component.render()
                    if additional_output_components:
                        with gr.Column():
                            for component in additional_output_components:
                                component.render()
                image.stream(
                    fn=self.event_handler,
                    inputs=[image] + additional_input_components,
                    outputs=[image],
                    time_limit=self.time_limit,
                    concurrency_limit=self.concurrency_limit,  # type: ignore
                    send_input_on=ui_args.get("send_input_on", "change"),
                )
                if additional_output_components:
                    assert self.additional_outputs_handler
                    image.on_additional_outputs(
                        self.additional_outputs_handler,
                        inputs=additional_output_components,
                        outputs=additional_output_components,
                        concurrency_limit=self.concurrency_limit_gradio,  # type: ignore
                    )
        elif self.modality == "audio" and self.mode == "send-receive":
            with gr.Blocks() as demo:
                gr.HTML(
                    f"""
                    <h1 style='text-align: center'>
                    {ui_args.get("title", "Audio Streaming (Powered by FastRTC ⚡️)")}
                    </h1>
                    """
                )
                if ui_args.get("subtitle"):
                    gr.Markdown(
                        f"""
                        <div style='text-align: center'>
                        {ui_args.get("subtitle")}
                        </div>
                        """
                    )
                with gr.Row():
                    with gr.Column():
                        with gr.Group():
                            image = WebRTC(
                                label="Stream",
                                rtc_configuration=self.rtc_configuration,
                                mode="send-receive",
                                modality="audio",
                                icon=ui_args.get("icon"),
                                icon_button_color=ui_args.get("icon_button_color"),
                                pulse_color=ui_args.get("pulse_color"),
                                icon_radius=ui_args.get("icon_radius"),
                            )
                        for component in additional_input_components:
                            if component not in same_components:
                                component.render()
                    if additional_output_components:
                        with gr.Column():
                            for component in additional_output_components:
                                component.render()

                image.stream(
                    fn=self.event_handler,
                    inputs=[image] + additional_input_components,
                    outputs=[image],
                    time_limit=self.time_limit,
                    concurrency_limit=self.concurrency_limit,  # type: ignore
                    send_input_on=ui_args.get("send_input_on", "change"),
                )
                if additional_output_components:
                    assert self.additional_outputs_handler
                    image.on_additional_outputs(
                        self.additional_outputs_handler,
                        inputs=additional_output_components,
                        outputs=additional_output_components,
                        concurrency_limit=self.concurrency_limit_gradio,  # type: ignore
                    )
        elif self.modality == "audio-video" and self.mode == "send-receive":
            css = """.my-group {max-width: 600px !important; max-height: 600px !important;}
            .my-column {display: flex !important; justify-content: center !important; align-items: center !important;}"""
            with gr.Blocks(css=css) as demo:
                gr.HTML(
                    f"""
                    <h1 style='text-align: center'>
                    {ui_args.get("title", "Audio Video Streaming (Powered by FastRTC ⚡️)")}
                    </h1>
                    """
                )
                if ui_args.get("subtitle"):
                    gr.Markdown(
                        f"""
                        <div style='text-align: center'>
                        {ui_args.get("subtitle")}
                        </div>
                        """
                    )
                with gr.Row():
                    with gr.Column(elem_classes=["my-column"]):
                        with gr.Group(elem_classes=["my-group"]):
                            image = WebRTC(
                                label="Stream",
                                rtc_configuration=self.rtc_configuration,
                                mode="send-receive",
                                modality="audio-video",
                                icon=ui_args.get("icon"),
                                icon_button_color=ui_args.get("icon_button_color"),
                                pulse_color=ui_args.get("pulse_color"),
                                icon_radius=ui_args.get("icon_radius"),
                            )
                        for component in additional_input_components:
                            if component not in same_components:
                                component.render()
                    if additional_output_components:
                        with gr.Column():
                            for component in additional_output_components:
                                component.render()

                image.stream(
                    fn=self.event_handler,
                    inputs=[image] + additional_input_components,
                    outputs=[image],
                    time_limit=self.time_limit,
                    concurrency_limit=self.concurrency_limit,  # type: ignore
                    send_input_on=ui_args.get("send_input_on", "change"),
                )
                if additional_output_components:
                    assert self.additional_outputs_handler
                    image.on_additional_outputs(
                        self.additional_outputs_handler,
                        inputs=additional_output_components,
                        outputs=additional_output_components,
                        concurrency_limit=self.concurrency_limit_gradio,  # type: ignore
                    )
        else:
            raise ValueError(f"Invalid modality: {self.modality} and mode: {self.mode}")
        return demo

    @property
    def ui(self) -> Blocks:
        return self._ui

    @ui.setter
    def ui(self, blocks: Blocks):
        self._ui = blocks

    async def offer(self, body: Body):
        return await self.handle_offer(
            body.model_dump(), set_outputs=self.set_additional_outputs(body.webrtc_id)
        )

    async def handle_incoming_call(self, request: Request):
        from twilio.twiml.voice_response import Connect, VoiceResponse

        response = VoiceResponse()
        response.say("Connecting to the AI assistant.")
        connect = Connect()
        connect.stream(url=f"wss://{request.url.hostname}/telephone/handler")
        response.append(connect)
        response.say("The call has been disconnected.")
        return HTMLResponse(content=str(response), media_type="application/xml")

    async def telephone_handler(self, websocket: WebSocket):
        handler = cast(StreamHandlerImpl, self.event_handler.copy())  # type: ignore
        handler.phone_mode = True

        async def set_handler(s: str, a: WebSocketHandler):
            if len(self.connections) >= self.concurrency_limit:  # type: ignore
                await cast(WebSocket, a.websocket).send_json(
                    {
                        "status": "failed",
                        "meta": {
                            "error": "concurrency_limit_reached",
                            "limit": self.concurrency_limit,
                        },
                    }
                )
                await websocket.close()
                return

        ws = WebSocketHandler(
            handler, set_handler, lambda s: None, lambda s: lambda a: None
        )
        await ws.handle_websocket(websocket)

    async def websocket_offer(self, websocket: WebSocket):
        handler = cast(StreamHandlerImpl, self.event_handler.copy())  # type: ignore
        handler.phone_mode = False

        async def set_handler(s: str, a: WebSocketHandler):
            if len(self.connections) >= self.concurrency_limit:  # type: ignore
                await cast(WebSocket, a.websocket).send_json(
                    {
                        "status": "failed",
                        "meta": {
                            "error": "concurrency_limit_reached",
                            "limit": self.concurrency_limit,
                        },
                    }
                )
                await websocket.close()
                return

            self.connections[s] = [a]  # type: ignore

        def clean_up(s):
            self.clean_up(s)

        ws = WebSocketHandler(
            handler, set_handler, clean_up, lambda s: self.set_additional_outputs(s)
        )
        await ws.handle_websocket(websocket)

    def fastphone(
        self,
        token: str | None = None,
        host: str = "127.0.0.1",
        port: int = 8000,
        **kwargs,
    ):
        import atexit
        import inspect
        import secrets
        import threading
        import time
        import urllib.parse

        import click
        import httpx
        import uvicorn
        from gradio.networking import setup_tunnel
        from gradio.tunneling import CURRENT_TUNNELS
        from huggingface_hub import get_token

        app = FastAPI()

        self.mount(app)

        t = threading.Thread(
            target=uvicorn.run,
            args=(app,),
            kwargs={"host": host, "port": port, **kwargs},
        )
        t.start()

        # Check if setup_tunnel accepts the share_server_tls_certificate parameter
        setup_tunnel_params = inspect.signature(setup_tunnel).parameters
        tunnel_kwargs = {
            "local_host": host,
            "local_port": port,
            "share_token": secrets.token_urlsafe(32),
            "share_server_address": None,
        }
        if "share_server_tls_certificate" in setup_tunnel_params:
            tunnel_kwargs["share_server_tls_certificate"] = None

        url = setup_tunnel(**tunnel_kwargs)
        host = urllib.parse.urlparse(url).netloc

        URL = "https://api.fastrtc.org"
        try:
            r = httpx.post(
                URL + "/register",
                json={"url": host},
                headers={"Authorization": token or get_token() or ""},
            )
        except Exception:
            URL = "https://fastrtc-fastphone.hf.space"
            r = httpx.post(
                URL + "/register",
                json={"url": host},
                headers={"Authorization": token or get_token() or ""},
            )
        r.raise_for_status()
        data = r.json()
        code = f"{data['code']}"
        phone_number = data["phone"]
        reset_date = data["reset_date"]
        print(
            click.style("INFO", fg="green")
            + ":\t Your FastPhone is now live! Call "
            + click.style(phone_number, fg="cyan")
            + " and use code "
            + click.style(code, fg="cyan")
            + " to connect to your stream."
        )
        minutes = str(int(data["time_remaining"] // 60)).zfill(2)
        seconds = str(int(data["time_remaining"] % 60)).zfill(2)
        print(
            click.style("INFO", fg="green")
            + ":\t You have "
            + click.style(f"{minutes}:{seconds}", fg="cyan")
            + " minutes remaining in your quota (Resetting on "
            + click.style(f"{reset_date}", fg="cyan")
            + ")"
        )
        print(
            click.style("INFO", fg="green")
            + ":\t Visit "
            + click.style(
                "https://fastrtc.org/userguide/audio/#telephone-integration",
                fg="cyan",
            )
            + " for information on making your handler compatible with phone usage."
        )

        def unregister():
            httpx.post(
                URL + "/unregister",
                json={"url": host, "code": code},
                headers={"Authorization": token or get_token() or ""},
            )

        atexit.register(unregister)

        try:
            while True:
                time.sleep(0.1)
        except (KeyboardInterrupt, OSError):
            print(
                click.style("INFO", fg="green")
                + ":\t Keyboard interruption in main thread... closing server."
            )
            unregister()
            t.join(timeout=5)
            for tunnel in CURRENT_TUNNELS:
                tunnel.kill()
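A usage sketch for `fastphone` (the phone number, access code, and quota are assigned by the registration service at runtime; `ReplyOnPause` and `echo` are illustrative):

```python
# Hedged sketch: exposing an audio handler over a temporary phone number.
# Requires a Hugging Face token (or pass token=...); details are printed on launch.
from fastrtc import ReplyOnPause, Stream

def echo(audio):
    yield audio  # placeholder handler

stream = Stream(handler=ReplyOnPause(echo), modality="audio", mode="send-receive")
stream.fastphone()  # registers a tunnel, then prints the number and access code
```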
@@ -0,0 +1 @@
(function(){"use strict";const R="https://unpkg.com/@ffmpeg/core@0.12.6/dist/umd/ffmpeg-core.js";var E;(function(t){t.LOAD="LOAD",t.EXEC="EXEC",t.WRITE_FILE="WRITE_FILE",t.READ_FILE="READ_FILE",t.DELETE_FILE="DELETE_FILE",t.RENAME="RENAME",t.CREATE_DIR="CREATE_DIR",t.LIST_DIR="LIST_DIR",t.DELETE_DIR="DELETE_DIR",t.ERROR="ERROR",t.DOWNLOAD="DOWNLOAD",t.PROGRESS="PROGRESS",t.LOG="LOG",t.MOUNT="MOUNT",t.UNMOUNT="UNMOUNT"})(E||(E={}));const a=new Error("unknown message type"),f=new Error("ffmpeg is not loaded, call `await ffmpeg.load()` first"),u=new Error("failed to import ffmpeg-core.js");let r;const O=async({coreURL:t,wasmURL:n,workerURL:e})=>{const o=!r;try{t||(t=R),importScripts(t)}catch{if(t||(t=R.replace("/umd/","/esm/")),self.createFFmpegCore=(await import(t)).default,!self.createFFmpegCore)throw u}const s=t,c=n||t.replace(/.js$/g,".wasm"),b=e||t.replace(/.js$/g,".worker.js");return r=await self.createFFmpegCore({mainScriptUrlOrBlob:`${s}#${btoa(JSON.stringify({wasmURL:c,workerURL:b}))}`}),r.setLogger(i=>self.postMessage({type:E.LOG,data:i})),r.setProgress(i=>self.postMessage({type:E.PROGRESS,data:i})),o},l=({args:t,timeout:n=-1})=>{r.setTimeout(n),r.exec(...t);const e=r.ret;return r.reset(),e},m=({path:t,data:n})=>(r.FS.writeFile(t,n),!0),D=({path:t,encoding:n})=>r.FS.readFile(t,{encoding:n}),S=({path:t})=>(r.FS.unlink(t),!0),I=({oldPath:t,newPath:n})=>(r.FS.rename(t,n),!0),L=({path:t})=>(r.FS.mkdir(t),!0),N=({path:t})=>{const n=r.FS.readdir(t),e=[];for(const o of n){const s=r.FS.stat(`${t}/${o}`),c=r.FS.isDir(s.mode);e.push({name:o,isDir:c})}return e},A=({path:t})=>(r.FS.rmdir(t),!0),w=({fsType:t,options:n,mountPoint:e})=>{const o=t,s=r.FS.filesystems[o];return s?(r.FS.mount(s,n,e),!0):!1},k=({mountPoint:t})=>(r.FS.unmount(t),!0);self.onmessage=async({data:{id:t,type:n,data:e}})=>{const o=[];let s;try{if(n!==E.LOAD&&!r)throw f;switch(n){case E.LOAD:s=await O(e);break;case E.EXEC:s=l(e);break;case E.WRITE_FILE:s=m(e);break;case E.READ_FILE:s=D(e);break;case E.DELETE_FILE:s=S(e);break;case E.RENAME:s=I(e);break;case E.CREATE_DIR:s=L(e);break;case E.LIST_DIR:s=N(e);break;case E.DELETE_DIR:s=A(e);break;case E.MOUNT:s=w(e);break;case E.UNMOUNT:s=k(e);break;default:throw a}}catch(c){self.postMessage({id:t,type:E.ERROR,data:c.toString()});return}s instanceof Uint8Array&&o.push(s.buffer),self.postMessage({id:t,type:n,data:s},o)}})();
22745 backend/fastrtc/templates/component/index.js Normal file
File diff suppressed because one or more lines are too long
1 backend/fastrtc/templates/component/style.css Normal file
File diff suppressed because one or more lines are too long
222 backend/fastrtc/templates/example/index.js Normal file
@@ -0,0 +1,222 @@
var v;
(function(e) {
  e.LOAD = "LOAD", e.EXEC = "EXEC", e.WRITE_FILE = "WRITE_FILE", e.READ_FILE = "READ_FILE", e.DELETE_FILE = "DELETE_FILE", e.RENAME = "RENAME", e.CREATE_DIR = "CREATE_DIR", e.LIST_DIR = "LIST_DIR", e.DELETE_DIR = "DELETE_DIR", e.ERROR = "ERROR", e.DOWNLOAD = "DOWNLOAD", e.PROGRESS = "PROGRESS", e.LOG = "LOG", e.MOUNT = "MOUNT", e.UNMOUNT = "UNMOUNT";
})(v || (v = {}));
const {
  SvelteComponent: X,
  append_hydration: T,
  attr: I,
  binding_callbacks: j,
  children: A,
  claim_element: N,
  claim_text: Q,
  detach: a,
  element: k,
  empty: b,
  init: z,
  insert_hydration: O,
  is_function: p,
  listen: L,
  noop: y,
  run_all: B,
  safe_not_equal: H,
  set_data: Y,
  src_url_equal: w,
  text: Z,
  toggle_class: d
} = window.__gradio__svelte__internal;
function S(e) {
  let l;
  function t(u, i) {
    return J;
  }
  let o = t()(e);
  return {
    c() {
      o.c(), l = b();
    },
    l(u) {
      o.l(u), l = b();
    },
    m(u, i) {
      o.m(u, i), O(u, l, i);
    },
    p(u, i) {
      o.p(u, i);
    },
    d(u) {
      u && a(l), o.d(u);
    }
  };
}
function J(e) {
  let l, t, n, o, u;
  return {
    c() {
      l = k("div"), t = k("video"), this.h();
    },
    l(i) {
      l = N(i, "DIV", { class: !0 });
      var c = A(l);
      t = N(c, "VIDEO", { src: !0 }), A(t).forEach(a), c.forEach(a), this.h();
    },
    h() {
      var i;
      w(t.src, n = /*value*/
      (i = e[2]) == null ? void 0 : i.video.url) || I(t, "src", n), I(l, "class", "container svelte-1uoo7dd"), d(
        l,
        "table",
        /*type*/
        e[0] === "table"
      ), d(
        l,
        "gallery",
        /*type*/
        e[0] === "gallery"
      ), d(
        l,
        "selected",
        /*selected*/
        e[1]
      );
    },
    m(i, c) {
      O(i, l, c), T(l, t), e[6](t), o || (u = [
        L(
          t,
          "loadeddata",
          /*init*/
          e[4]
        ),
        L(t, "mouseover", function() {
          p(
            /*video*/
            e[3].play.bind(
              /*video*/
              e[3]
            )
          ) && e[3].play.bind(
            /*video*/
            e[3]
          ).apply(this, arguments);
        }),
        L(t, "mouseout", function() {
          p(
            /*video*/
            e[3].pause.bind(
              /*video*/
              e[3]
            )
          ) && e[3].pause.bind(
            /*video*/
            e[3]
          ).apply(this, arguments);
        })
      ], o = !0);
    },
    p(i, c) {
      var _;
      e = i, c & /*value*/
      4 && !w(t.src, n = /*value*/
      (_ = e[2]) == null ? void 0 : _.video.url) && I(t, "src", n), c & /*type*/
      1 && d(
        l,
        "table",
        /*type*/
        e[0] === "table"
      ), c & /*type*/
      1 && d(
        l,
        "gallery",
        /*type*/
        e[0] === "gallery"
      ), c & /*selected*/
      2 && d(
        l,
        "selected",
        /*selected*/
        e[1]
      );
    },
    d(i) {
      i && a(l), e[6](null), o = !1, B(u);
    }
  };
}
function K(e) {
  let l, t = (
    /*value*/
    e[2] && S(e)
  );
  return {
    c() {
      t && t.c(), l = b();
    },
    l(n) {
      t && t.l(n), l = b();
    },
    m(n, o) {
      t && t.m(n, o), O(n, l, o);
    },
    p(n, [o]) {
      /*value*/
      n[2] ? t ? t.p(n, o) : (t = S(n), t.c(), t.m(l.parentNode, l)) : t && (t.d(1), t = null);
    },
    i: y,
    o: y,
    d(n) {
      n && a(l), t && t.d(n);
    }
  };
}
function P(e, l, t) {
  var n = this && this.__awaiter || function(f, G, s, R) {
    function W(E) {
      return E instanceof s ? E : new s(function(m) {
        m(E);
      });
    }
    return new (s || (s = Promise))(function(E, m) {
      function q(r) {
        try {
          h(R.next(r));
        } catch (D) {
          m(D);
        }
      }
      function V(r) {
        try {
          h(R.throw(r));
        } catch (D) {
          m(D);
        }
      }
      function h(r) {
        r.done ? E(r.value) : W(r.value).then(q, V);
      }
      h((R = R.apply(f, G || [])).next());
    });
  };
  let { type: o } = l, { selected: u = !1 } = l, { value: i } = l, { loop: c } = l, _;
  function U() {
    return n(this, void 0, void 0, function* () {
      t(3, _.muted = !0, _), t(3, _.playsInline = !0, _), t(3, _.controls = !1, _), _.setAttribute("muted", ""), yield _.play(), _.pause();
    });
  }
  function C(f) {
    j[f ? "unshift" : "push"](() => {
      _ = f, t(3, _);
    });
  }
  return e.$$set = (f) => {
    "type" in f && t(0, o = f.type), "selected" in f && t(1, u = f.selected), "value" in f && t(2, i = f.value), "loop" in f && t(5, c = f.loop);
  }, [o, u, i, _, U, c, C];
}
class M extends X {
  constructor(l) {
    super(), z(this, l, P, K, H, { type: 0, selected: 1, value: 2, loop: 5 });
  }
}
export {
  M as default
};
1 backend/fastrtc/templates/example/style.css Normal file
@@ -0,0 +1 @@
.container.svelte-1uoo7dd{flex:none;max-width:none}.container.svelte-1uoo7dd video{width:var(--size-full);height:var(--size-full);object-fit:cover}.container.svelte-1uoo7dd:hover,.container.selected.svelte-1uoo7dd{border-color:var(--border-color-accent)}.container.table.svelte-1uoo7dd{margin:0 auto;border:2px solid var(--border-color-primary);border-radius:var(--radius-lg);overflow:hidden;width:var(--size-20);height:var(--size-20);object-fit:cover}.container.gallery.svelte-1uoo7dd{height:var(--size-20);max-height:var(--size-20);object-fit:cover}
3 backend/fastrtc/text_to_speech/__init__.py Normal file
@@ -0,0 +1,3 @@
from .tts import KokoroTTSOptions, get_tts_model

__all__ = ["get_tts_model", "KokoroTTSOptions"]
13 backend/fastrtc/text_to_speech/test_tts.py Normal file
@@ -0,0 +1,13 @@
from fastrtc.text_to_speech.tts import get_tts_model


def test_tts_long_prompt():
    model = get_tts_model()
    prompt = "It may be that this communication will be considered as a madman's freak but at any rate it must be admitted that in its clearness and frankness it left nothing to be desired The serious part of it was that the Federal Government had undertaken to treat a sale by auction as a valid concession of these undiscovered territories Opinions on the matter were many Some readers saw in it only one of those prodigious outbursts of American humbug which would exceed the limits of puffism if the depths of human credulity were not unfathomable"

    for i, chunk in enumerate(model.stream_tts_sync(prompt)):
        print(f"Chunk {i}: {chunk[1].shape}")


if __name__ == "__main__":
    test_tts_long_prompt()
137 backend/fastrtc/text_to_speech/tts.py Normal file
@@ -0,0 +1,137 @@
import asyncio
import re
from dataclasses import dataclass
from functools import lru_cache
from typing import AsyncGenerator, Generator, Literal, Protocol

import numpy as np
from huggingface_hub import hf_hub_download
from numpy.typing import NDArray


class TTSOptions:
    pass


class TTSModel(Protocol):
    def tts(
        self, text: str, options: TTSOptions | None = None
    ) -> tuple[int, NDArray[np.float32]]: ...

    async def stream_tts(
        self, text: str, options: TTSOptions | None = None
    ) -> AsyncGenerator[tuple[int, NDArray[np.float32]], None]: ...

    def stream_tts_sync(
        self, text: str, options: TTSOptions | None = None
    ) -> Generator[tuple[int, NDArray[np.float32]], None, None]: ...


@dataclass
class KokoroTTSOptions(TTSOptions):
    voice: str = "af_heart"
    speed: float = 1.0
    lang: str = "en-us"


@lru_cache
def get_tts_model(model: Literal["kokoro"] = "kokoro") -> TTSModel:
    m = KokoroTTSModel()
    m.tts("Hello, world!")
    return m


class KokoroFixedBatchSize:
    # Source: https://github.com/thewh1teagle/kokoro-onnx/issues/115#issuecomment-2676625392
    def _split_phonemes(self, phonemes: str) -> list[str]:
        MAX_PHONEME_LENGTH = 510
        max_length = MAX_PHONEME_LENGTH - 1
        batched_phonemes = []
        while len(phonemes) > max_length:
            # Find the best split point within the limit
            split_idx = max_length

            # Try to find the last period before max_length
            period_idx = phonemes.rfind(".", 0, max_length)
            if period_idx != -1:
                split_idx = period_idx + 1  # Include the period

            else:
                # Try other punctuation
                match = re.search(
                    r"[!?;,]", phonemes[:max_length][::-1]
                )  # Search backwards
                if match:
                    split_idx = max_length - match.start()

                else:
                    # Try the last space
                    space_idx = phonemes.rfind(" ", 0, max_length)
                    if space_idx != -1:
                        split_idx = space_idx

            # If no good split point is found, force a split at max_length
            chunk = phonemes[:split_idx].strip()
            batched_phonemes.append(chunk)

            # Move to the next part
            phonemes = phonemes[split_idx:].strip()

        # Add the remaining phonemes
        if phonemes:
            batched_phonemes.append(phonemes)
        return batched_phonemes


class KokoroTTSModel(TTSModel):
    def __init__(self):
        from kokoro_onnx import Kokoro

        self.model = Kokoro(
            model_path=hf_hub_download("fastrtc/kokoro-onnx", "kokoro-v1.0.onnx"),
            voices_path=hf_hub_download("fastrtc/kokoro-onnx", "voices-v1.0.bin"),
        )

        self.model._split_phonemes = KokoroFixedBatchSize()._split_phonemes

    def tts(
        self, text: str, options: KokoroTTSOptions | None = None
    ) -> tuple[int, NDArray[np.float32]]:
        options = options or KokoroTTSOptions()
        a, b = self.model.create(
            text, voice=options.voice, speed=options.speed, lang=options.lang
        )
        return b, a

    async def stream_tts(
        self, text: str, options: KokoroTTSOptions | None = None
    ) -> AsyncGenerator[tuple[int, NDArray[np.float32]], None]:
        options = options or KokoroTTSOptions()

        sentences = re.split(r"(?<=[.!?])\s+", text.strip())

        for s_idx, sentence in enumerate(sentences):
            if not sentence.strip():
                continue

            chunk_idx = 0
            async for chunk in self.model.create_stream(
                sentence, voice=options.voice, speed=options.speed, lang=options.lang
            ):
                if s_idx != 0 and chunk_idx == 0:
                    yield chunk[1], np.zeros(chunk[1] // 7, dtype=np.float32)
                chunk_idx += 1
                yield chunk[1], chunk[0]

    def stream_tts_sync(
        self, text: str, options: KokoroTTSOptions | None = None
    ) -> Generator[tuple[int, NDArray[np.float32]], None, None]:
        loop = asyncio.new_event_loop()

        # Use the new loop to run the async generator
        iterator = self.stream_tts(text, options).__aiter__()
        while True:
            try:
                yield loop.run_until_complete(iterator.__anext__())
            except StopAsyncIteration:
                break
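A usage sketch grounded in the API above (`get_tts_model`, `KokoroTTSOptions`, `stream_tts_sync`); the voice and speed values are illustrative:

```python
# Hedged sketch: each yielded chunk is (sample_rate, float32 samples).
import numpy as np

from fastrtc.text_to_speech.tts import KokoroTTSOptions, get_tts_model

model = get_tts_model()  # downloads and warms up the ONNX model on first call
options = KokoroTTSOptions(voice="af_heart", speed=1.1)

chunks = [audio for _, audio in model.stream_tts_sync("Hello from FastRTC.", options)]
audio = np.concatenate(chunks)
print(f"Synthesized {audio.shape[0]} samples")
```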
876 backend/fastrtc/tracks.py Normal file
@@ -0,0 +1,876 @@
"""WebRTC tracks."""

from __future__ import annotations

import asyncio
import fractions
import functools
import inspect
import logging
import threading
import time
import traceback
import warnings
from abc import ABC, abstractmethod
from collections.abc import Callable
from dataclasses import dataclass
from typing import (
    Any,
    Generator,
    Literal,
    Tuple,
    TypeAlias,
    Union,
    cast,
)

import anyio.to_thread
import av
import numpy as np
from aiortc import (
    AudioStreamTrack,
    MediaStreamTrack,
    VideoStreamTrack,
)
from aiortc.contrib.media import AudioFrame, VideoFrame  # type: ignore
from aiortc.mediastreams import VIDEO_CLOCK_RATE, VIDEO_TIME_BASE, MediaStreamError
from numpy import typing as npt

from fastrtc.utils import (
    AdditionalOutputs,
    CloseStream,
    Context,
    DataChannel,
    WebRTCError,
    create_message,
    current_channel,
    current_context,
    player_worker_decode,
    split_output,
)

logger = logging.getLogger(__name__)

VideoNDArray: TypeAlias = Union[
    np.ndarray[Any, np.dtype[np.uint8]],
    np.ndarray[Any, np.dtype[np.uint16]],
    np.ndarray[Any, np.dtype[np.float32]],
]

VideoEmitType = (
    VideoNDArray
    | tuple[VideoNDArray, AdditionalOutputs]
    | tuple[VideoNDArray, CloseStream]
    | AdditionalOutputs
    | CloseStream
)
VideoEventGenerator = Generator[VideoEmitType, None, None]
VideoEventHandler = Callable[[npt.ArrayLike], VideoEmitType | VideoEventGenerator]


@dataclass
class VideoStreamHandler:
    callable: VideoEventHandler
    fps: int = 30
    skip_frames: bool = False
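A minimal sketch of wrapping a per-frame callable in `VideoStreamHandler` to cap the frame rate and drop backlogged frames (`flip` is a placeholder handler, not part of this file):

```python
# Hedged sketch: the fields mirror the VideoStreamHandler dataclass above.
import numpy as np
from numpy import typing as npt

def flip(frame: npt.NDArray) -> npt.NDArray:
    return np.flip(frame, axis=0)  # flip each incoming frame vertically

handler = VideoStreamHandler(callable=flip, fps=15, skip_frames=True)
```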
class VideoCallback(VideoStreamTrack):
    """
    This works for streaming input and output
    """

    kind = "video"

    def __init__(
        self,
        track: MediaStreamTrack,
        event_handler: VideoEventHandler,
        context: Context,
        channel: DataChannel | None = None,
        set_additional_outputs: Callable | None = None,
        mode: Literal["send-receive", "send"] = "send-receive",
        fps: int = 30,
        skip_frames: bool = False,
    ) -> None:
        super().__init__()
        self.track = track
        self.event_handler = event_handler
        self.latest_args: str | list[Any] = "not_set"
        self.channel = channel
        self.set_additional_outputs = set_additional_outputs
        self.thread_quit = asyncio.Event()
        self.mode = mode
        self.channel_set = asyncio.Event()
        self.has_started = False
        self.fps = fps
        self.frame_ptime = 1.0 / fps
        self.skip_frames = skip_frames
        self.frame_queue: asyncio.Queue[VideoFrame] = asyncio.Queue()
        self.latest_frame = None
        self.context = context

    def set_channel(self, channel: DataChannel):
        self.channel = channel
        current_channel.set(channel)
        current_context.set(self.context)
        self.channel_set.set()

    def set_args(self, args: list[Any]):
        self.latest_args = ["__webrtc_value__"] + list(args)

    def add_frame_to_payload(
        self, args: list[Any], frame: np.ndarray | None
    ) -> list[Any]:
        new_args = []
        for val in args:
            if isinstance(val, str) and val == "__webrtc_value__":
                new_args.append(frame)
            else:
                new_args.append(val)
        return new_args

    def array_to_frame(self, array: np.ndarray) -> VideoFrame:
        return VideoFrame.from_ndarray(array, format="bgr24")

    async def process_frames(self):
        while not self.thread_quit.is_set():
            try:
                await self.recv()
            except TimeoutError:
                continue

    async def start(
        self,
    ):
        asyncio.create_task(self.process_frames())

    def stop(self):
        super().stop()
        logger.debug("video callback stop")
        self.thread_quit.set()

    async def wait_for_channel(self):
        current_context.set(self.context)
        if not self.channel_set.is_set():
            await self.channel_set.wait()
        if current_channel.get() != self.channel:
            current_channel.set(self.channel)

    async def accept_input(self):
        self.has_started = True
        while not self.thread_quit.is_set():
            try:
                frame = cast(VideoFrame, await self.track.recv())
                self.latest_frame = frame
                self.frame_queue.put_nowait(frame)
            except MediaStreamError:
                self.stop()
                return

    def accept_input_in_background(self):
        if not self.has_started:
            asyncio.create_task(self.accept_input())

    async def recv(self):  # type: ignore
        self.accept_input_in_background()
        try:
            frame = await self.frame_queue.get()
            if self.skip_frames:
                frame = self.latest_frame
            await self.wait_for_channel()
            frame_array = frame.to_ndarray(format="bgr24")  # type: ignore
            if self.latest_args == "not_set":
                return frame

            args = self.add_frame_to_payload(cast(list, self.latest_args), frame_array)
            array, outputs = split_output(self.event_handler(*args))
            if isinstance(outputs, CloseStream):
                cast(DataChannel, self.channel).send(
                    create_message("end_stream", outputs.msg)
                )
                self.stop()
                return None
            if (
                isinstance(outputs, AdditionalOutputs)
                and self.set_additional_outputs
                and self.channel
            ):
                self.set_additional_outputs(outputs)
                self.channel.send(create_message("fetch_output", []))
            if array is None and self.mode == "send":
                return

            new_frame = self.array_to_frame(array)
            if frame:
                new_frame.pts = frame.pts
                new_frame.time_base = frame.time_base
            else:
                pts, time_base = await self.next_timestamp()
                new_frame.pts = pts
                new_frame.time_base = time_base
            return new_frame
        except Exception as e:
            logger.debug("exception %s", e)
            exc = traceback.format_exc()
            logger.debug("traceback %s", exc)
            if isinstance(e, WebRTCError):
                raise e
            else:
                raise WebRTCError(str(e)) from e

    async def next_timestamp(self) -> Tuple[int, fractions.Fraction]:
        """Override to control frame rate"""
        if self.readyState != "live":
            raise MediaStreamError

        if hasattr(self, "_timestamp"):
            self._timestamp += int(self.frame_ptime * VIDEO_CLOCK_RATE)
            wait = self._start + (self._timestamp / VIDEO_CLOCK_RATE) - time.time()
            if wait > 0:
                await asyncio.sleep(wait)
        else:
            self._start = time.time()
            self._timestamp = 0
        return self._timestamp, VIDEO_TIME_BASE


class StreamHandlerBase(ABC):
    def __init__(
        self,
        expected_layout: Literal["mono", "stereo"] = "mono",
        output_sample_rate: int = 24000,
        output_frame_size: int | None = None,
        input_sample_rate: int = 48000,
        fps: int = 30,
    ) -> None:
        self.expected_layout = expected_layout
        self.output_sample_rate = output_sample_rate
        self.input_sample_rate = input_sample_rate
        self.fps = fps
        self.latest_args: list[Any] = []
        self._resampler = None
        self._channel: DataChannel | None = None
        self._loop: asyncio.AbstractEventLoop
        self.args_set = asyncio.Event()
        self.channel_set = asyncio.Event()
        self._phone_mode = False
        self._clear_queue: Callable | None = None

        sample_rate_to_frame_size_coef = 50
        if output_sample_rate % sample_rate_to_frame_size_coef != 0:
            raise ValueError(
                "output_sample_rate must be a multiple of "
                f"{sample_rate_to_frame_size_coef}, got {output_sample_rate}"
            )

        actual_output_frame_size = output_sample_rate // sample_rate_to_frame_size_coef
        if (
            output_frame_size is not None
            and output_frame_size != actual_output_frame_size
        ):
            warnings.warn(
                "The output_frame_size parameter is deprecated and will be removed "
                "in a future release. The value passed in will be ignored. "
                f"The actual output frame size is {actual_output_frame_size}, "
                f"corresponding to {1 / sample_rate_to_frame_size_coef:.2f}s "
                f"at {output_sample_rate=}Hz.",
                # DeprecationWarning is filtered out by default, so use UserWarning
                UserWarning,
                stacklevel=2,  # So that the warning points to the user's code
            )
        self.output_frame_size = actual_output_frame_size

    @property
    def clear_queue(self) -> Callable:
        return cast(Callable, self._clear_queue)

    @property
    def loop(self) -> asyncio.AbstractEventLoop:
        return cast(asyncio.AbstractEventLoop, self._loop)

    @property
    def channel(self) -> DataChannel | None:
        return self._channel

    @property
    def phone_mode(self) -> bool:
        return self._phone_mode

    @phone_mode.setter
    def phone_mode(self, value: bool):
        self._phone_mode = value

    def set_channel(self, channel: DataChannel):
        self._channel = channel
        self.channel_set.set()

    async def fetch_args(
        self,
    ):
        if self.channel:
            self.channel.send(create_message("send_input", []))
            logger.debug("Sent send_input")

    async def wait_for_args(self):
        if not self.phone_mode:
            await self.fetch_args()
            await self.args_set.wait()
        else:
            self.args_set.set()

    def wait_for_args_sync(self):
        try:
            asyncio.run_coroutine_threadsafe(self.wait_for_args(), self.loop).result()
        except Exception:
            import traceback

            traceback.print_exc()

    async def send_message(self, msg: str):
        if self.channel:
            self.channel.send(msg)
            logger.debug("Sent msg %s", msg)

    def send_message_sync(self, msg: str):
        try:
            asyncio.run_coroutine_threadsafe(self.send_message(msg), self.loop).result()
            logger.debug("Sent msg %s", msg)
        except Exception as e:
            logger.debug("Exception sending msg %s", e)

    def set_args(self, args: list[Any]):
        logger.debug("setting args in audio callback %s", args)
        self.latest_args = ["__webrtc_value__"] + list(args)
        self.args_set.set()

    def reset(self):
        self.args_set.clear()

    def shutdown(self):
        pass

    def resample(self, frame: AudioFrame) -> Generator[AudioFrame, None, None]:
        if self._resampler is None:
            self._resampler = av.AudioResampler(  # type: ignore
                format="s16",
                layout=self.expected_layout,
                rate=self.input_sample_rate,
                frame_size=frame.samples,
            )
        yield from self._resampler.resample(frame)


EmitType: TypeAlias = (
    tuple[int, npt.NDArray[np.int16 | np.float32]]
    | tuple[int, npt.NDArray[np.int16 | np.float32], Literal["mono", "stereo"]]
    | AdditionalOutputs
    | tuple[tuple[int, npt.NDArray[np.int16 | np.float32]], AdditionalOutputs]
    | None
)
AudioEmitType = EmitType


class StreamHandler(StreamHandlerBase):
    @abstractmethod
    def receive(self, frame: tuple[int, npt.NDArray[np.int16]]) -> None:
        pass

    @abstractmethod
    def emit(self) -> EmitType:
        pass

    @abstractmethod
    def copy(self) -> StreamHandler:
        pass

    def start_up(self):
        pass


class AsyncStreamHandler(StreamHandlerBase):
    @abstractmethod
    async def receive(self, frame: tuple[int, npt.NDArray[np.int16]]) -> None:
        pass

    @abstractmethod
    async def emit(self) -> EmitType:
        pass

    @abstractmethod
    def copy(self) -> AsyncStreamHandler:
        pass

    async def start_up(self):
        pass


StreamHandlerImpl = StreamHandler | AsyncStreamHandler
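For orientation, a minimal synchronous handler satisfying the `StreamHandler` contract above might look like this (a sketch: the buffer strategy is illustrative, and the output sample rate is pinned to the input rate so frames can be echoed unmodified):

```python
# Hedged sketch: receive() gets (sample_rate, int16 ndarray); emit() returns
# an EmitType tuple or None when there is nothing to play yet.
import queue

import numpy as np
from numpy import typing as npt

class EchoHandler(StreamHandler):
    def __init__(self) -> None:
        super().__init__(expected_layout="mono", output_sample_rate=48000)
        self._buffer: "queue.Queue[tuple[int, npt.NDArray[np.int16]]]" = queue.Queue()

    def receive(self, frame: tuple[int, npt.NDArray[np.int16]]) -> None:
        self._buffer.put(frame)  # stash each incoming frame

    def emit(self) -> EmitType:
        try:
            return self._buffer.get_nowait()  # play back the oldest frame
        except queue.Empty:
            return None  # nothing buffered yet

    def copy(self) -> "EchoHandler":
        return EchoHandler()  # fresh state for each new connection
```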
class AudioVideoStreamHandler(StreamHandler):
    @abstractmethod
    def video_receive(self, frame: VideoFrame) -> None:
        pass

    @abstractmethod
    def video_emit(self) -> VideoEmitType:
        pass

    @abstractmethod
    def copy(self) -> AudioVideoStreamHandler:
        pass


class AsyncAudioVideoStreamHandler(AsyncStreamHandler):
    @abstractmethod
    async def video_receive(self, frame: npt.NDArray[np.float32]) -> None:
        pass

    @abstractmethod
    async def video_emit(self) -> VideoEmitType:
        pass

    @abstractmethod
    def copy(self) -> AsyncAudioVideoStreamHandler:
        pass


VideoStreamHandlerImpl = AudioVideoStreamHandler | AsyncAudioVideoStreamHandler
AudioVideoStreamHandlerImpl = AudioVideoStreamHandler | AsyncAudioVideoStreamHandler
AsyncHandler = AsyncStreamHandler | AsyncAudioVideoStreamHandler

HandlerType = (
    StreamHandlerImpl
    | VideoStreamHandlerImpl
    | VideoEventHandler
    | Callable
    | VideoStreamHandler
)


class VideoStreamHandler_(VideoCallback):
    async def process_frames(self):
        while not self.thread_quit.is_set():
            try:
                await self.channel_set.wait()
                frame = cast(VideoFrame, await self.track.recv())
                frame_array = frame.to_ndarray(format="bgr24")
                handler = cast(VideoStreamHandlerImpl, self.event_handler)
                if inspect.iscoroutinefunction(handler.video_receive):
                    await handler.video_receive(frame_array)
                else:
                    handler.video_receive(frame_array)  # type: ignore
            except MediaStreamError:
                self.stop()

    async def start(self):
        if not self.has_started:
            asyncio.create_task(self.process_frames())
            self.has_started = True

    async def recv(self):  # type: ignore
        await self.start()
        try:
            handler = cast(VideoStreamHandlerImpl, self.event_handler)
            if inspect.iscoroutinefunction(handler.video_emit):
                outputs = await handler.video_emit()
            else:
                outputs = handler.video_emit()

            array, outputs = split_output(outputs)
            if (
                isinstance(outputs, AdditionalOutputs)
                and self.set_additional_outputs
                and self.channel
            ):
                self.set_additional_outputs(outputs)
                self.channel.send(create_message("fetch_output", []))
            if isinstance(outputs, CloseStream):
                cast(DataChannel, self.channel).send(
                    create_message("end_stream", outputs.msg)
                )
                self.stop()
                return
            if array is None and self.mode == "send":
                return

            new_frame = self.array_to_frame(array)

            # Will probably have to give developer ability to set pts and time_base
            pts, time_base = await self.next_timestamp()
            new_frame.pts = pts
            new_frame.time_base = time_base

            return new_frame
        except Exception as e:
            logger.debug("exception %s", e)
            exc = traceback.format_exc()
            logger.debug("traceback %s", exc)


class AudioCallback(AudioStreamTrack):
    kind = "audio"

    def __init__(
        self,
        track: MediaStreamTrack,
        event_handler: StreamHandlerBase,
        context: Context,
        channel: DataChannel | None = None,
        set_additional_outputs: Callable | None = None,
    ) -> None:
        super().__init__()
        self.track = track
        self.event_handler = cast(StreamHandlerImpl, event_handler)
        self.event_handler._clear_queue = self.clear_queue
        self.current_timestamp = 0
        self.latest_args: str | list[Any] = "not_set"
        self.queue = asyncio.Queue()
        self.thread_quit = asyncio.Event()
        self._start: float | None = None
        self.has_started = False
        self.last_timestamp = 0
        self.channel = channel
        self.set_additional_outputs = set_additional_outputs
        self.context = context

    def clear_queue(self):
        logger.debug("clearing queue")
        logger.debug("queue size: %d", self.queue.qsize())
        i = 0
        while not self.queue.empty():
            self.queue.get_nowait()
            i += 1
        logger.debug("popped %d items from queue", i)
        self._start = None

    async def wait_for_channel(self):
        current_context.set(self.context)
        if not self.event_handler.channel_set.is_set():
            await self.event_handler.channel_set.wait()
        if current_channel.get() != self.event_handler.channel:
            current_channel.set(self.event_handler.channel)

    def set_channel(self, channel: DataChannel):
        self.channel = channel
        self.event_handler.set_channel(channel)

    def set_args(self, args: list[Any]):
        self.event_handler.set_args(args)

    def event_handler_receive(self, frame: tuple[int, np.ndarray]) -> None:
        current_channel.set(self.event_handler.channel)
        return cast(Callable, self.event_handler.receive)(frame)

    def event_handler_emit(self) -> EmitType:
        current_channel.set(self.event_handler.channel)
        current_context.set(self.context)
        return cast(Callable, self.event_handler.emit)()

    async def process_input_frames(self) -> None:
        while not self.thread_quit.is_set():
            try:
                frame = cast(AudioFrame, await self.track.recv())
                for frame in self.event_handler.resample(frame):
                    numpy_array = frame.to_ndarray()
                    if isinstance(self.event_handler, AsyncHandler):
                        await self.event_handler.receive(
                            (frame.sample_rate, numpy_array)  # type: ignore
                        )
                    else:
                        await anyio.to_thread.run_sync(
                            self.event_handler_receive, (frame.sample_rate, numpy_array)
                        )
            except MediaStreamError:
                logger.debug("MediaStreamError in process_input_frames")
                break

    async def start(self):
        if not self.has_started:
            loop = asyncio.get_running_loop()
            await self.wait_for_channel()
            if isinstance(self.event_handler, AsyncHandler):
                callable = self.event_handler.emit
                start_up = self.event_handler.start_up()
                if not inspect.isawaitable(start_up):
                    raise WebRTCError(
                        "In AsyncStreamHandler, start_up must be a coroutine (async def)"
                    )

            else:
                callable = functools.partial(
                    loop.run_in_executor, None, self.event_handler_emit
                )
                start_up = anyio.to_thread.run_sync(self.event_handler.start_up)
            self.process_input_task = asyncio.create_task(self.process_input_frames())
            self.process_input_task.add_done_callback(
                lambda _: logger.debug("process_input_done")
            )
            self.start_up_task = asyncio.create_task(start_up)
            self.start_up_task.add_done_callback(
                lambda _: logger.debug("start_up_done")
            )
            self.decode_task = asyncio.create_task(
                player_worker_decode(
                    callable,
                    self.queue,
                    self.thread_quit,
                    lambda: self.channel,
                    self.set_additional_outputs,
                    False,
                    self.event_handler.output_sample_rate,
                    self.event_handler.output_frame_size,
                )
            )
            self.decode_task.add_done_callback(lambda _: logger.debug("decode_done"))
            self.has_started = True

    async def recv(self):  # type: ignore
        try:
            if self.readyState != "live":
                raise MediaStreamError

            if not self.event_handler.channel_set.is_set():
                await self.event_handler.channel_set.wait()
            if current_channel.get() != self.event_handler.channel:
                current_channel.set(self.event_handler.channel)
            await self.start()

            frame = await self.queue.get()
            if isinstance(frame, CloseStream):
                cast(DataChannel, self.channel).send(
                    create_message("end_stream", frame.msg)
                )
                self.stop()
|
||||
return
|
||||
logger.debug("frame %s", frame)
|
||||
|
||||
data_time = frame.time
|
||||
|
||||
if time.time() - self.last_timestamp > 10 * (
|
||||
self.event_handler.output_frame_size
|
||||
/ self.event_handler.output_sample_rate
|
||||
):
|
||||
self._start = None
|
||||
|
||||
# control playback rate
|
||||
if self._start is None:
|
||||
self._start = time.time() - data_time # type: ignore
|
||||
else:
|
||||
wait = self._start + data_time - time.time()
|
||||
await asyncio.sleep(wait)
|
||||
self.last_timestamp = time.time()
|
||||
return frame
|
||||
except Exception as e:
|
||||
logger.debug("exception %s", e)
|
||||
exec = traceback.format_exc()
|
||||
logger.debug("traceback %s", exec)
|
||||
|
||||
def stop(self):
|
||||
logger.debug("audio callback stop")
|
||||
self.thread_quit.set()
|
||||
super().stop()
|
||||
|
||||
|
||||
class ServerToClientVideo(VideoStreamTrack):
|
||||
"""
|
||||
This works for streaming input and output
|
||||
"""
|
||||
|
||||
kind = "video"
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
event_handler: Callable,
|
||||
context: Context,
|
||||
channel: DataChannel | None = None,
|
||||
set_additional_outputs: Callable | None = None,
|
||||
fps: int = 30,
|
||||
) -> None:
|
||||
super().__init__() # don't forget this!
|
||||
self.event_handler = event_handler
|
||||
self.args_set = asyncio.Event()
|
||||
self.latest_args: str | list[Any] = "not_set"
|
||||
self.generator: Generator[Any, None, Any] | None = None
|
||||
self.channel = channel
|
||||
self.set_additional_outputs = set_additional_outputs
|
||||
self.fps = fps
|
||||
self.frame_ptime = 1.0 / fps
|
||||
self.context = context
|
||||
|
||||
def array_to_frame(self, array: np.ndarray) -> VideoFrame:
|
||||
return VideoFrame.from_ndarray(array, format="bgr24")
|
||||
|
||||
def set_channel(self, channel: DataChannel):
|
||||
self.channel = channel
|
||||
|
||||
def set_args(self, args: list[Any]):
|
||||
self.latest_args = list(args)
|
||||
self.args_set.set()
|
||||
|
||||
async def next_timestamp(self) -> Tuple[int, fractions.Fraction]:
|
||||
"""Override to control frame rate"""
|
||||
if self.readyState != "live":
|
||||
raise MediaStreamError
|
||||
|
||||
if hasattr(self, "_timestamp"):
|
||||
self._timestamp += int(self.frame_ptime * VIDEO_CLOCK_RATE)
|
||||
wait = self._start + (self._timestamp / VIDEO_CLOCK_RATE) - time.time()
|
||||
if wait > 0:
|
||||
await asyncio.sleep(wait)
|
||||
else:
|
||||
self._start = time.time()
|
||||
self._timestamp = 0
|
||||
return self._timestamp, VIDEO_TIME_BASE
|
||||
|
||||
async def recv(self): # type: ignore
|
||||
try:
|
||||
pts, time_base = await self.next_timestamp()
|
||||
await self.args_set.wait()
|
||||
current_channel.set(self.channel)
|
||||
current_context.set(self.context)
|
||||
if self.generator is None:
|
||||
self.generator = cast(
|
||||
Generator[Any, None, Any], self.event_handler(*self.latest_args)
|
||||
)
|
||||
try:
|
||||
next_array, outputs = split_output(next(self.generator))
|
||||
if isinstance(outputs, CloseStream):
|
||||
cast(DataChannel, self.channel).send(
|
||||
create_message("end_stream", outputs.msg)
|
||||
)
|
||||
self.stop()
|
||||
return
|
||||
if (
|
||||
isinstance(outputs, AdditionalOutputs)
|
||||
and self.set_additional_outputs
|
||||
and self.channel
|
||||
):
|
||||
self.set_additional_outputs(outputs)
|
||||
self.channel.send(create_message("fetch_output", []))
|
||||
except StopIteration:
|
||||
self.stop()
|
||||
return
|
||||
|
||||
next_frame = self.array_to_frame(next_array)
|
||||
next_frame.pts = pts
|
||||
next_frame.time_base = time_base
|
||||
return next_frame
|
||||
except Exception as e:
|
||||
logger.debug("exception %s", e)
|
||||
exec = traceback.format_exc()
|
||||
logger.debug("traceback %s %s", e, exec)
|
||||
if isinstance(e, WebRTCError):
|
||||
raise e
|
||||
else:
|
||||
raise WebRTCError(str(e)) from e
|
||||
|
||||
|
||||
class ServerToClientAudio(AudioStreamTrack):
|
||||
kind = "audio"
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
event_handler: Callable,
|
||||
context: Context,
|
||||
channel: DataChannel | None = None,
|
||||
set_additional_outputs: Callable | None = None,
|
||||
) -> None:
|
||||
self.generator: Generator[Any, None, Any] | None = None
|
||||
self.event_handler = event_handler
|
||||
self.event_handler._clear_queue = self.clear_queue
|
||||
self.current_timestamp = 0
|
||||
self.latest_args: str | list[Any] = "not_set"
|
||||
self.args_set = threading.Event()
|
||||
self.queue = asyncio.Queue()
|
||||
self.thread_quit = asyncio.Event()
|
||||
self.channel = channel
|
||||
self.set_additional_outputs = set_additional_outputs
|
||||
self.has_started = False
|
||||
self._start: float | None = None
|
||||
self.context = context
|
||||
super().__init__()
|
||||
|
||||
def clear_queue(self):
|
||||
while not self.queue.empty():
|
||||
self.queue.get_nowait()
|
||||
self._start = None
|
||||
|
||||
def set_channel(self, channel: DataChannel):
|
||||
self.channel = channel
|
||||
|
||||
def set_args(self, args: list[Any]):
|
||||
self.latest_args = list(args)
|
||||
self.args_set.set()
|
||||
|
||||
def next(self) -> tuple[int, np.ndarray] | None:
|
||||
current_context.set(self.context)
|
||||
self.args_set.wait()
|
||||
current_channel.set(self.channel)
|
||||
if self.generator is None:
|
||||
self.generator = self.event_handler(*self.latest_args)
|
||||
if self.generator is not None:
|
||||
try:
|
||||
frame = next(self.generator)
|
||||
return frame
|
||||
except StopIteration:
|
||||
self.thread_quit.set()
|
||||
|
||||
async def start(self):
|
||||
if not self.has_started:
|
||||
loop = asyncio.get_running_loop()
|
||||
callable = functools.partial(loop.run_in_executor, None, self.next)
|
||||
asyncio.create_task(
|
||||
player_worker_decode(
|
||||
callable,
|
||||
self.queue,
|
||||
self.thread_quit,
|
||||
lambda: self.channel,
|
||||
self.set_additional_outputs,
|
||||
True,
|
||||
)
|
||||
)
|
||||
self.has_started = True
|
||||
|
||||
async def recv(self): # type: ignore
|
||||
try:
|
||||
if self.readyState != "live":
|
||||
raise MediaStreamError
|
||||
|
||||
await self.start()
|
||||
data = await self.queue.get()
|
||||
if isinstance(data, CloseStream):
|
||||
cast(DataChannel, self.channel).send(
|
||||
create_message("end_stream", data.msg)
|
||||
)
|
||||
self.stop()
|
||||
return
|
||||
if data is None:
|
||||
self.stop()
|
||||
return
|
||||
|
||||
data_time = data.time
|
||||
|
||||
# control playback rate
|
||||
if data_time is not None:
|
||||
if self._start is None:
|
||||
self._start = time.time() - data_time # type: ignore
|
||||
else:
|
||||
wait = self._start + data_time - time.time()
|
||||
await asyncio.sleep(wait)
|
||||
|
||||
return data
|
||||
except Exception as e:
|
||||
logger.debug("exception %s", e)
|
||||
exec = traceback.format_exc()
|
||||
logger.debug("traceback %s", exec)
|
||||
if isinstance(e, WebRTCError):
|
||||
raise e
|
||||
else:
|
||||
raise WebRTCError(str(e)) from e
|
||||
|
||||
def stop(self):
|
||||
logger.debug("audio-to-client stop callback")
|
||||
self.thread_quit.set()
|
||||
super().stop()
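
Both audio tracks pace playback the same way: `_start` anchors the wall clock to the first frame's media time, and each later frame sleeps until its own timestamp comes due. A standalone sketch of that arithmetic (illustrative numbers, not part of the commit):

import time

_start = None
for data_time in (0.00, 0.02, 0.04):  # media timestamps of successive frames (seconds)
    if _start is None:
        _start = time.time() - data_time  # anchor wall clock to the first frame
    else:
        wait = _start + data_time - time.time()  # remaining time until this frame is due
        if wait > 0:
            time.sleep(wait)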
@@ -1,14 +1,20 @@
import asyncio
import fractions
import functools
import inspect
import io
import json
import logging
import tempfile
import traceback
from contextvars import ContextVar
from typing import Any, Callable, Protocol, TypedDict, cast
from dataclasses import dataclass
from typing import Any, Callable, Literal, Protocol, TypedDict, cast

import av
import librosa
import numpy as np
from numpy.typing import NDArray
from pydub import AudioSegment

logger = logging.getLogger(__name__)
@@ -27,15 +33,51 @@ class AdditionalOutputs:
        self.args = args


class CloseStream:
    def __init__(self, msg: str = "Stream closed") -> None:
        self.msg = msg


class DataChannel(Protocol):
    def send(self, message: str) -> None: ...


def create_message(
    type: Literal[
        "send_input",
        "end_stream",
        "fetch_output",
        "stopword",
        "error",
        "warning",
        "log",
    ],
    data: list[Any] | str,
) -> str:
    return json.dumps({"type": type, "data": data})
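

# For reference, create_message just serializes a typed envelope:
#   create_message("fetch_output", [])             -> '{"type": "fetch_output", "data": []}'
#   create_message("end_stream", "Stream closed")  -> '{"type": "end_stream", "data": "Stream closed"}'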

current_channel: ContextVar[DataChannel | None] = ContextVar(
    "current_channel", default=None
)


@dataclass
class Context:
    webrtc_id: str


current_context: ContextVar[Context | None] = ContextVar(
    "current_context", default=None
)


def get_current_context() -> Context:
    if not (ctx := current_context.get()):
        raise RuntimeError("No context found")
    return ctx


def _send_log(message: str, type: str) -> None:
    async def _send(channel: DataChannel) -> None:
        channel.send(
@@ -48,7 +90,6 @@ def _send_log(message: str, type: str) -> None:
        )

    if channel := current_channel.get():
        print("channel", channel)
        try:
            loop = asyncio.get_running_loop()
            asyncio.run_coroutine_threadsafe(_send(channel), loop)
@@ -80,9 +121,13 @@ class WebRTCError(Exception):
        _send_log(message, "error")


def split_output(data: tuple | Any) -> tuple[Any, AdditionalOutputs | None]:
def split_output(
    data: tuple | Any,
) -> tuple[Any, AdditionalOutputs | CloseStream | None]:
    if isinstance(data, AdditionalOutputs):
        return None, data
    if isinstance(data, CloseStream):
        return None, data
    if isinstance(data, tuple):
        # handle the bare audio case
        if 2 <= len(data) <= 3 and isinstance(data[1], np.ndarray):
@@ -91,11 +136,11 @@ def split_output(data: tuple | Any) -> tuple[Any, AdditionalOutputs | None]:
            raise ValueError(
                "The tuple must have exactly two elements: the data and an instance of AdditionalOutputs."
            )
        if not isinstance(data[-1], AdditionalOutputs):
        if not isinstance(data[-1], (AdditionalOutputs, CloseStream)):
            raise ValueError(
                "The last element of the tuple must be an instance of AdditionalOutputs."
            )
        return data[0], cast(AdditionalOutputs, data[1])
        return data[0], cast(AdditionalOutputs | CloseStream, data[1])
    return data, None
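

# Illustrative behavior, read off the branches above:
#   split_output(AdditionalOutputs("hi"))        -> (None, AdditionalOutputs("hi"))
#   split_output(CloseStream("done"))            -> (None, CloseStream("done"))
#   split_output((frame, AdditionalOutputs(1)))  -> (frame, AdditionalOutputs(1))
#   split_output(frame)                          -> (frame, None)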

@@ -117,7 +162,7 @@ async def player_worker_decode(
        rate=sample_rate,
        frame_size=frame_size,
    )

    first_sample_rate = None
    while not thread_quit.is_set():
        try:
            # Get next frame
@@ -131,14 +176,21 @@ async def player_worker_decode(
                and channel()
            ):
                set_additional_outputs(outputs)
                cast(DataChannel, channel()).send("change")
                cast(DataChannel, channel()).send(create_message("fetch_output", []))

            if frame is None:
                if isinstance(outputs, CloseStream):
                    await queue.put(outputs)
                if quit_on_none:
                    await queue.put(None)
                    break
                continue

            if not isinstance(frame, tuple) or not isinstance(frame[1], np.ndarray):
                raise WebRTCError(
                    "The frame must be a tuple containing a sample rate and a numpy array."
                )

            if len(frame) == 2:
                sample_rate, audio_array = frame
                layout = "mono"
@@ -152,23 +204,36 @@ async def player_worker_decode(
                layout,  # type: ignore
            )
            format = "s16" if audio_array.dtype == "int16" else "fltp"  # type: ignore
            if first_sample_rate is None:
                first_sample_rate = sample_rate

            if format == "s16":
                audio_array = audio_to_float32((sample_rate, audio_array))

            if first_sample_rate != sample_rate:
                audio_array = librosa.resample(
                    audio_array, target_sr=first_sample_rate, orig_sr=sample_rate
                )

            if audio_array.ndim == 1:
                audio_array = audio_array.reshape(1, -1)

            # Convert to audio frame and
            # Convert to audio frame and resample
            # This runs in the same timeout context
            frame = av.AudioFrame.from_ndarray(  # type: ignore
                audio_array,  # type: ignore
                format=format,
                format="fltp",
                layout=layout,  # type: ignore
            )
            frame.sample_rate = sample_rate
            frame.sample_rate = first_sample_rate
            for processed_frame in audio_resampler.resample(frame):
                processed_frame.pts = audio_samples
                processed_frame.time_base = audio_time_base
                audio_samples += processed_frame.samples
                await queue.put(processed_frame)
                logger.debug("Queue size utils.py: %s", queue.qsize())

            if isinstance(outputs, CloseStream):
                await queue.put(outputs)
        except (TimeoutError, asyncio.TimeoutError):
            logger.warning(
                "Timeout in frame processing cycle after %s seconds - resetting", 60
@@ -178,12 +243,15 @@ async def player_worker_decode(
            import traceback

            exec = traceback.format_exc()
            logger.debug("traceback %s", exec)
            logger.error("Error processing frame: %s", str(e))
            continue
            print("traceback %s", exec)
            print("Error processing frame: %s", str(e))
            if isinstance(e, WebRTCError):
                raise e
            else:
                continue


def audio_to_bytes(audio: tuple[int, np.ndarray]) -> bytes:
def audio_to_bytes(audio: tuple[int, NDArray[np.int16 | np.float32]]) -> bytes:
    """
    Convert an audio tuple containing sample rate and numpy array data into bytes.

@@ -217,7 +285,7 @@ def audio_to_bytes(audio: tuple[int, np.ndarray]) -> bytes:
    return audio_buffer.getvalue()


def audio_to_file(audio: tuple[int, np.ndarray]) -> str:
def audio_to_file(audio: tuple[int, NDArray[np.int16 | np.float32]]) -> str:
    """
    Save an audio tuple containing sample rate and numpy array data to a file.

@@ -247,7 +315,9 @@ def audio_to_file(audio: tuple[int, np.ndarray]) -> str:
    return f.name


def audio_to_float32(audio: tuple[int, np.ndarray]) -> np.ndarray:
def audio_to_float32(
    audio: tuple[int, NDArray[np.int16 | np.float32]],
) -> NDArray[np.float32]:
    """
    Convert an audio tuple containing a sample rate and int16 numpy array data to float32.

@@ -273,41 +343,143 @@ def audio_to_float32(audio: tuple[int, np.ndarray]) -> np.ndarray:
    return audio[1].astype(np.float32) / 32768.0


def aggregate_bytes_to_16bit(chunks_iterator):
    leftover = b""  # Store incomplete bytes between chunks
def audio_to_int16(
    audio: tuple[int, NDArray[np.int16 | np.float32]],
) -> NDArray[np.int16]:
    """
    Convert an audio tuple containing sample rate and numpy array data to int16.

    Parameters
    ----------
    audio : tuple[int, np.ndarray]
        A tuple containing:
        - sample_rate (int): The audio sample rate in Hz
        - data (np.ndarray): The audio data as a numpy array

    Returns
    -------
    np.ndarray
        The audio data as a numpy array with dtype int16

    Example
    -------
    >>> sample_rate = 44100
    >>> audio_data = np.array([0.1, -0.2, 0.3], dtype=np.float32)  # Example audio samples
    >>> audio_tuple = (sample_rate, audio_data)
    >>> audio_int16 = audio_to_int16(audio_tuple)
    """
    if audio[1].dtype == np.int16:
        return audio[1]  # type: ignore
    elif audio[1].dtype == np.float32:
        # Convert float32 to int16 by scaling to the int16 range
        return (audio[1] * 32767.0).astype(np.int16)
    else:
        raise TypeError(f"Unsupported audio data type: {audio[1].dtype}")


def aggregate_bytes_to_16bit(chunks_iterator):
    """
    Aggregate bytes to 16-bit audio samples.

    This function takes an iterator of chunks and aggregates them into 16-bit audio samples.
    It handles incomplete samples and combines them with the next chunk.

    Parameters
    ----------
    chunks_iterator : Iterator[bytes]
        An iterator of byte chunks to aggregate

    Returns
    -------
    Iterator[NDArray[np.int16]]
    """
    leftover = b""
    for chunk in chunks_iterator:
        # Combine with any leftover bytes from previous chunk
        current_bytes = leftover + chunk

        # Calculate complete samples
        n_complete_samples = len(current_bytes) // 2  # int16 = 2 bytes
        n_complete_samples = len(current_bytes) // 2
        bytes_to_process = n_complete_samples * 2

        # Split into complete samples and leftover
        to_process = current_bytes[:bytes_to_process]
        leftover = current_bytes[bytes_to_process:]

        if to_process:  # Only yield if we have complete samples
        if to_process:
            audio_array = np.frombuffer(to_process, dtype=np.int16).reshape(1, -1)
            yield audio_array
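

# Illustrative use: re-chunk a raw PCM16 byte stream (e.g. from a TTS API)
# into numpy frames; odd trailing bytes are carried over to the next chunk.
# On a little-endian platform:
#   chunks = [b"\x01\x00\x02", b"\x00\x03\x00"]  # 3 int16 samples, split unevenly
#   [a.tolist() for a in aggregate_bytes_to_16bit(chunks)]
#   -> [[[1]], [[2, 3]]]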


async def async_aggregate_bytes_to_16bit(chunks_iterator):
    leftover = b""  # Store incomplete bytes between chunks
    """
    Aggregate bytes to 16-bit audio samples.

    This function takes an iterator of chunks and aggregates them into 16-bit audio samples.
    It handles incomplete samples and combines them with the next chunk.

    Parameters
    ----------
    chunks_iterator : Iterator[bytes]
        An iterator of byte chunks to aggregate

    Returns
    -------
    Iterator[NDArray[np.int16]]
        An iterator of 16-bit audio samples
    """
    leftover = b""

    async for chunk in chunks_iterator:
        # Combine with any leftover bytes from previous chunk
        current_bytes = leftover + chunk

        # Calculate complete samples
        n_complete_samples = len(current_bytes) // 2  # int16 = 2 bytes
        n_complete_samples = len(current_bytes) // 2
        bytes_to_process = n_complete_samples * 2

        # Split into complete samples and leftover
        to_process = current_bytes[:bytes_to_process]
        leftover = current_bytes[bytes_to_process:]

        if to_process:  # Only yield if we have complete samples
        if to_process:
            audio_array = np.frombuffer(to_process, dtype=np.int16).reshape(1, -1)
            yield audio_array


def webrtc_error_handler(func):
    """Decorator to catch exceptions and raise WebRTCError with stacktrace."""

    @functools.wraps(func)
    async def async_wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception as e:
            traceback.print_exc()
            if isinstance(e, WebRTCError):
                raise e
            else:
                raise WebRTCError(str(e)) from e

    @functools.wraps(func)
    def sync_wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as e:
            traceback.print_exc()
            if isinstance(e, WebRTCError):
                raise e
            else:
                raise WebRTCError(str(e)) from e

    return async_wrapper if inspect.iscoroutinefunction(func) else sync_wrapper


async def wait_for_item(queue: asyncio.Queue, timeout: float = 0.1) -> Any:
    """
    Wait for an item from an asyncio.Queue with a timeout.

    This function attempts to retrieve an item from the queue using asyncio.wait_for.
    If the timeout is reached, it returns None.

    This is useful to avoid blocking `emit` when the queue is empty.
    """

    try:
        return await asyncio.wait_for(queue.get(), timeout=timeout)
    except (TimeoutError, asyncio.TimeoutError):
        return None
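

# Sketch of the intended pattern (hypothetical handler, not part of the commit):
# an emit implementation can poll its output queue with wait_for_item so it
# returns None instead of blocking when nothing is ready yet, e.g.:
#
#   async def emit(self):
#       return await wait_for_item(self.output_queue)  # None after the 0.1s timeout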
378
backend/fastrtc/webrtc.py
Normal file
@@ -0,0 +1,378 @@
"""gr.WebRTC() component."""

from __future__ import annotations

import logging
from collections.abc import Callable
from typing import (
    TYPE_CHECKING,
    Any,
    Concatenate,
    Iterable,
    Literal,
    ParamSpec,
    Sequence,
    TypeVar,
    cast,
)

from gradio import wasm_utils
from gradio.components.base import Component, server
from gradio_client import handle_file

from .tracks import (
    AudioVideoStreamHandlerImpl,
    StreamHandler,
    StreamHandlerBase,
    StreamHandlerImpl,
    VideoEventHandler,
    VideoStreamHandler,
)
from .webrtc_connection_mixin import WebRTCConnectionMixin

if TYPE_CHECKING:
    from gradio.blocks import Block
    from gradio.components import Timer

if wasm_utils.IS_WASM:
    raise ValueError("Not supported in gradio-lite!")


logger = logging.getLogger(__name__)


# For the return type
R = TypeVar("R")
# For the parameter specification
P = ParamSpec("P")


class WebRTC(Component, WebRTCConnectionMixin):
    """
    Creates a video component that can be used to upload/record videos (as an input) or display videos (as an output).
    For the video to be playable in the browser it must have a compatible container and codec combination. Allowed
    combinations are .mp4 with h264 codec, .ogg with theora codec, and .webm with vp9 codec. If the component detects
    that the output video would not be playable in the browser it will attempt to convert it to a playable mp4 video.
    If the conversion fails, the original video is returned.

    Demos: video_identity_2
    """

    EVENTS = ["tick", "state_change"]

    def __init__(
        self,
        value: None = None,
        height: int | str | None = None,
        width: int | str | None = None,
        label: str | None = None,
        every: Timer | float | None = None,
        inputs: Component | Sequence[Component] | set[Component] | None = None,
        show_label: bool | None = None,
        container: bool = True,
        scale: int | None = None,
        min_width: int = 160,
        interactive: bool | None = None,
        visible: bool = True,
        elem_id: str | None = None,
        elem_classes: list[str] | str | None = None,
        render: bool = True,
        key: int | str | None = None,
        mirror_webcam: bool = True,
        rtc_configuration: dict[str, Any] | None = None,
        track_constraints: dict[str, Any] | None = None,
        time_limit: float | None = None,
        mode: Literal["send-receive", "receive", "send"] = "send-receive",
        modality: Literal["video", "audio", "audio-video"] = "video",
        rtp_params: dict[str, Any] | None = None,
        icon: str | None = None,
        icon_button_color: str | None = None,
        pulse_color: str | None = None,
        icon_radius: int | None = None,
        button_labels: dict | None = None,
    ):
        """
        Parameters:
            value: path or URL for the default value that WebRTC component is going to take. Can also be a tuple consisting of (video filepath, subtitle filepath). If a subtitle file is provided, it should be of type .srt or .vtt. Or can be callable, in which case the function will be called whenever the app loads to set the initial value of the component.
            format: the file extension with which to save video, such as 'avi' or 'mp4'. This parameter applies both when this component is used as an input to determine which file format to convert user-provided video to, and when this component is used as an output to determine the format of video returned to the user. If None, no file format conversion is done and the video is kept as is. Use 'mp4' to ensure browser playability.
            height: The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.
            width: The width of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.
            label: the label for this component. Appears above the component and is also used as the header if there are a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.
            every: continuously calls `value` to recalculate it if `value` is a function (has no effect otherwise). Can provide a Timer whose tick resets `value`, or a float that provides the regular interval for the reset Timer.
            inputs: components that are used as inputs to calculate `value` if `value` is a function (has no effect otherwise). `value` is recalculated any time the inputs change.
            show_label: if True, will display label.
            container: if True, will place the component in a container - providing some extra padding around the border.
            scale: relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.
            min_width: minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.
            interactive: if True, will allow users to upload a video; if False, can only be used to display videos. If not provided, this is inferred based on whether the component is used as an input or output.
            visible: if False, component will be hidden.
            elem_id: an optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.
            elem_classes: an optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.
            render: if False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.
            key: if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.
            mirror_webcam: if True webcam will be mirrored. Default is True.
            rtc_configuration: WebRTC configuration options. See https://developer.mozilla.org/en-US/docs/Web/API/RTCPeerConnection/RTCPeerConnection . If running the demo on a remote server, you will need to specify a rtc_configuration. See https://freddyaboulton.github.io/gradio-webrtc/deployment/
            track_constraints: Media track constraints for WebRTC. For example, to set video height, width use {"width": {"exact": 800}, "height": {"exact": 600}, "aspectRatio": {"exact": 1.33333}}
            time_limit: Maximum duration in seconds for recording.
            mode: WebRTC mode - "send-receive", "receive", or "send".
            modality: Type of media - "video", "audio", or "audio-video".
            rtp_params: See https://developer.mozilla.org/en-US/docs/Web/API/RTCRtpSender/setParameters. If you are changing the video resolution, you can set this to {"degradationPreference": "maintain-framerate"} to keep the frame rate consistent.
            icon: Icon to display on the button instead of the wave animation. The icon should be a path/url to a .svg/.png/.jpeg file.
            icon_button_color: Color of the icon button. Default is var(--color-accent) of the demo theme.
            pulse_color: Color of the pulse animation. Default is var(--color-accent) of the demo theme.
            button_labels: Text to display on the audio or video start, stop, waiting buttons. Dict with keys "start", "stop", "waiting" mapping to the text to display on the buttons.
            icon_radius: Border radius of the icon button expressed as a percentage of the button size. Default is 50%
        """
        WebRTCConnectionMixin.__init__(self)
        self.time_limit = time_limit
        self.height = height
        self.width = width
        self.mirror_webcam = mirror_webcam
        self.concurrency_limit = 1
        self.rtc_configuration = rtc_configuration
        self.mode = mode
        self.modality = modality
        self.icon_button_color = icon_button_color
        self.icon_radius = icon_radius
        self.pulse_color = pulse_color
        self.rtp_params = rtp_params or {}
        self.button_labels = {
            "start": "",
            "stop": "",
            "waiting": "",
            **(button_labels or {}),
        }
        if track_constraints is None and modality == "audio":
            track_constraints = {
                "echoCancellation": True,
                "noiseSuppression": {"exact": True},
                "autoGainControl": {"exact": True},
                "sampleRate": {"ideal": 24000},
                "sampleSize": {"ideal": 16},
                "channelCount": {"exact": 1},
            }
        if track_constraints is None and modality == "video":
            track_constraints = {
                "facingMode": "user",
                "width": {"ideal": 500},
                "height": {"ideal": 500},
                "frameRate": {"ideal": 30},
            }
        if track_constraints is None and modality == "audio-video":
            track_constraints = {
                "video": {
                    "facingMode": "user",
                    "width": {"ideal": 500},
                    "height": {"ideal": 500},
                    "frameRate": {"ideal": 30},
                },
                "audio": {
                    "echoCancellation": True,
                    "noiseSuppression": {"exact": True},
                    "autoGainControl": {"exact": True},
                    "sampleRate": {"ideal": 24000},
                    "sampleSize": {"ideal": 16},
                    "channelCount": {"exact": 1},
                },
            }
        self.track_constraints = track_constraints
        self.event_handler: Callable | StreamHandler | None = None
        super().__init__(
            label=label,
            every=every,
            inputs=inputs,
            show_label=show_label,
            container=container,
            scale=scale,
            min_width=min_width,
            interactive=interactive,
            visible=visible,
            elem_id=elem_id,
            elem_classes=elem_classes,
            render=render,
            key=key,
            value=value,
        )
        # need to do this here otherwise the proxy_url is not set
        self.icon = (
            icon if not icon else cast(dict, self.serve_static_file(icon)).get("url")
        )

    def preprocess(self, payload: str) -> str:
        """
        Parameters:
            payload: An instance of VideoData containing the video and subtitle files.
        Returns:
            Passes the uploaded video as a `str` filepath or URL whose extension can be modified by `format`.
        """
        return payload

    def postprocess(self, value: Any) -> str:
        """
        Parameters:
            value: Expects a {str} or {pathlib.Path} filepath to a video which is displayed, or a {Tuple[str | pathlib.Path, str | pathlib.Path | None]} where the first element is a filepath to a video and the second element is an optional filepath to a subtitle file.
        Returns:
            VideoData object containing the video and subtitle files.
        """
        return value

    def on_additional_outputs(
        self,
        fn: Callable[Concatenate[P], R],
        inputs: Block | Sequence[Block] | set[Block] | None = None,
        outputs: Block | Sequence[Block] | set[Block] | None = None,
        js: str | None = None,
        concurrency_limit: int | None | Literal["default"] = "default",
        concurrency_id: str | None = None,
        show_progress: Literal["full", "minimal", "hidden"] = "full",
        queue: bool = True,
    ):
        inputs = inputs or []
        if inputs and not isinstance(inputs, Iterable):
            inputs = [inputs]
        inputs = list(inputs)

        async def handler(webrtc_id: str, *args):
            print("webrtc_id", webrtc_id)
            async for next_outputs in self.output_stream(webrtc_id):
                yield fn(*args, *next_outputs.args)  # type: ignore

        return self.state_change(  # type: ignore
            fn=handler,
            inputs=[self] + cast(list, inputs),
            outputs=outputs,
            js=js,
            concurrency_limit=concurrency_limit,
            concurrency_id=concurrency_id,
            show_progress="minimal",
            queue=queue,
            trigger_mode="once",
        )

    def stream(
        self,
        fn: (
            Callable[..., Any]
            | StreamHandlerImpl
            | AudioVideoStreamHandlerImpl
            | VideoEventHandler
            | VideoStreamHandler
            | None
        ) = None,
        inputs: Block | Sequence[Block] | set[Block] | None = None,
        outputs: Block | Sequence[Block] | set[Block] | None = None,
        js: str | None = None,
        concurrency_limit: int | None | Literal["default"] = "default",
        concurrency_id: str | None = None,
        time_limit: float | None = None,
        trigger: Callable | None = None,
        send_input_on: Literal["submit", "change"] = "change",
    ):
        from gradio.blocks import Block

        if inputs is None:
            inputs = []
        if outputs is None:
            outputs = []
        if isinstance(inputs, Block):
            inputs = [inputs]
        if isinstance(outputs, Block):
            outputs = [outputs]

        self.concurrency_limit = cast(
            int, (1 if concurrency_limit in ["default", None] else concurrency_limit)
        )
        self.event_handler = fn  # type: ignore
        self.time_limit = time_limit

        if (
            self.mode == "send-receive"
            and self.modality in ["audio", "audio-video"]
            and not isinstance(self.event_handler, StreamHandlerBase)
        ):
            raise ValueError(
                "In the send-receive mode for audio, the event handler must be an instance of StreamHandlerBase."
            )

        if self.mode == "send-receive" or self.mode == "send":
            if cast(list[Block], inputs)[0] != self:
                raise ValueError(
                    "In the webrtc stream event, the first input component must be the WebRTC component."
                )

            if (
                len(cast(list[Block], outputs)) != 1
                or cast(list[Block], outputs)[0] != self
            ):
                raise ValueError(
                    "In the webrtc stream event, the only output component must be the WebRTC component."
                )
            for input_component in inputs[1:]:  # type: ignore
                if hasattr(input_component, "change") and send_input_on == "change":
                    input_component.change(  # type: ignore
                        self.set_input,
                        inputs=inputs,
                        outputs=None,
                        concurrency_id=concurrency_id,
                        concurrency_limit=None,
                        time_limit=None,
                        js=js,
                    )
                if hasattr(input_component, "submit") and send_input_on == "submit":
                    input_component.submit(  # type: ignore
                        self.set_input,
                        inputs=inputs,
                        outputs=None,
                        concurrency_id=concurrency_id,
                    )
            return self.tick(  # type: ignore
                self.set_input,
                inputs=inputs,
                outputs=None,
                concurrency_id=concurrency_id,
                concurrency_limit=None,
                time_limit=None,
                js=js,
            )
        elif self.mode == "receive":
            if isinstance(inputs, list) and self in cast(list[Block], inputs):
                raise ValueError(
                    "In the receive mode stream event, the WebRTC component cannot be an input."
                )
            if (
                len(cast(list[Block], outputs)) != 1
                or cast(list[Block], outputs)[0] != self
            ):
                raise ValueError(
                    "In the receive mode stream, the only output component must be the WebRTC component."
                )
            if trigger is None:
                raise ValueError(
                    "In the receive mode stream event, the trigger parameter must be provided"
                )
            trigger(lambda: "start_webrtc_stream", inputs=None, outputs=self)
            self.tick(  # type: ignore
                self.set_input,
                inputs=[self] + list(inputs),
                outputs=None,
                concurrency_id=concurrency_id,
            )

    @server
    async def offer(self, body):
        return await self.handle_offer(
            body, self.set_additional_outputs(body["webrtc_id"])
        )

    def example_payload(self) -> Any:
        return {
            "video": handle_file(
                "https://github.com/gradio-app/gradio/raw/main/demo/video_component/files/world.mp4"
            ),
        }

    def example_value(self) -> Any:
        return "https://github.com/gradio-app/gradio/raw/main/demo/video_component/files/world.mp4"

    def api_info(self) -> Any:
        return {"type": "number"}
412
backend/fastrtc/webrtc_connection_mixin.py
Normal file
@@ -0,0 +1,412 @@
"""Mixin for handling WebRTC connections."""

from __future__ import annotations

import asyncio
import inspect
import logging
from collections import defaultdict
from collections.abc import Callable
from dataclasses import dataclass, field
from typing import (
    AsyncGenerator,
    Literal,
    ParamSpec,
    TypeVar,
    cast,
)

from aiortc import (
    RTCIceCandidate,
    RTCPeerConnection,
    RTCSessionDescription,
)
from aiortc.contrib.media import MediaRelay  # type: ignore
from fastapi.responses import JSONResponse

from fastrtc.tracks import (
    AudioCallback,
    HandlerType,
    ServerToClientAudio,
    ServerToClientVideo,
    StreamHandlerBase,
    StreamHandlerImpl,
    VideoCallback,
    VideoEventHandler,
    VideoStreamHandler,
    VideoStreamHandler_,
)
from fastrtc.utils import (
    AdditionalOutputs,
    Context,
    create_message,
    webrtc_error_handler,
)

Track = (
    VideoCallback
    | VideoStreamHandler_
    | AudioCallback
    | ServerToClientAudio
    | ServerToClientVideo
)

logger = logging.getLogger(__name__)


# For the return type
R = TypeVar("R")
# For the parameter specification
P = ParamSpec("P")


@dataclass
class OutputQueue:
    queue: asyncio.Queue[AdditionalOutputs] = field(default_factory=asyncio.Queue)
    quit: asyncio.Event = field(default_factory=asyncio.Event)


class WebRTCConnectionMixin:
    def __init__(self):
        self.pcs: dict[str, RTCPeerConnection] = {}
        self.relay = MediaRelay()
        self.connections = defaultdict(list)
        self.data_channels = {}
        self.additional_outputs = defaultdict(OutputQueue)
        self.handlers = {}
        self.connection_timeouts = defaultdict(asyncio.Event)
        # These attributes should be set by subclasses:
        self.concurrency_limit: int | None
        self.event_handler: HandlerType | None
        self.time_limit: float | None
        self.modality: Literal["video", "audio", "audio-video"]
        self.mode: Literal["send", "receive", "send-receive"]

    @staticmethod
    async def wait_for_time_limit(pc: RTCPeerConnection, time_limit: float):
        await asyncio.sleep(time_limit)
        await pc.close()

    async def connection_timeout(
        self,
        pc: RTCPeerConnection,
        webrtc_id: str,
        time_limit: float,
    ):
        try:
            await asyncio.wait_for(
                self.connection_timeouts[webrtc_id].wait(), time_limit
            )
        except (asyncio.TimeoutError, TimeoutError):
            await pc.close()
            self.connection_timeouts[webrtc_id].clear()
            self.clean_up(webrtc_id)

    def clean_up(self, webrtc_id: str):
        self.handlers.pop(webrtc_id, None)
        self.connection_timeouts.pop(webrtc_id, None)
        connection = self.connections.pop(webrtc_id, [])
        for conn in connection:
            if isinstance(conn, AudioCallback):
                if inspect.iscoroutinefunction(conn.event_handler.shutdown):
                    asyncio.create_task(conn.event_handler.shutdown())
                    conn.event_handler.reset()
                else:
                    conn.event_handler.shutdown()
                    conn.event_handler.reset()
        output = self.additional_outputs.pop(webrtc_id, None)
        if output:
            logger.debug("setting quit for webrtc id %s", webrtc_id)
            output.quit.set()
        self.data_channels.pop(webrtc_id, None)
        return connection

    def set_input(self, webrtc_id: str, *args):
        if webrtc_id in self.connections:
            for conn in self.connections[webrtc_id]:
                conn.set_args(list(args))

    async def output_stream(
        self, webrtc_id: str
    ) -> AsyncGenerator[AdditionalOutputs, None]:
        outputs = self.additional_outputs[webrtc_id]
        while not outputs.quit.is_set():
            try:
                yield await asyncio.wait_for(outputs.queue.get(), 10)
            except (asyncio.TimeoutError, TimeoutError):
                logger.debug("Timeout waiting for output")

    async def fetch_latest_output(self, webrtc_id: str) -> AdditionalOutputs:
        outputs = self.additional_outputs[webrtc_id]
        return await asyncio.wait_for(outputs.queue.get(), 10)

    def set_additional_outputs(
        self, webrtc_id: str
    ) -> Callable[[AdditionalOutputs], None]:
        def set_outputs(outputs: AdditionalOutputs):
            self.additional_outputs[webrtc_id].queue.put_nowait(outputs)

        return set_outputs

    async def handle_offer(self, body, set_outputs):
        logger.debug("Starting to handle offer")
        logger.debug("Offer body %s", body)

        if body.get("type") == "ice-candidate" and "candidate" in body:
            webrtc_id = body.get("webrtc_id")
            if webrtc_id not in self.pcs:
                logger.warning(
                    f"Received ICE candidate for unknown connection: {webrtc_id}"
                )
                return JSONResponse(
                    status_code=200,
                    content={
                        "status": "failed",
                        "meta": {"error": "unknown_connection"},
                    },
                )

            pc = self.pcs[webrtc_id]
            if pc.connectionState != "closed":
                try:
                    # Parse the candidate string from the browser
                    candidate_str = body["candidate"].get("candidate", "")

                    # Example format: "candidate:2393089663 1 udp 2122260223 192.168.86.60 63692 typ host generation 0 ufrag LkZb network-id 1 network-cost 10"
                    # We need to parse this string to extract the required fields

                    # Parse the candidate string
                    parts = candidate_str.split()
                    if len(parts) >= 10 and parts[0].startswith("candidate:"):
                        foundation = parts[0].split(":", 1)[1]
                        component = int(parts[1])
                        protocol = parts[2]
                        priority = int(parts[3])
                        ip = parts[4]
                        port = int(parts[5])
                        # Find the candidate type
                        typ_index = parts.index("typ")
                        candidate_type = parts[typ_index + 1]

                        # Create the RTCIceCandidate object
                        ice_candidate = RTCIceCandidate(
                            component=component,
                            foundation=foundation,
                            ip=ip,
                            port=port,
                            priority=priority,
                            protocol=protocol,
                            type=candidate_type,
                            sdpMid=body["candidate"].get("sdpMid"),
                            sdpMLineIndex=body["candidate"].get("sdpMLineIndex"),
                        )

                        # Add the candidate to the peer connection
                        await pc.addIceCandidate(ice_candidate)
                        logger.debug(f"Added ICE candidate for {webrtc_id}")
                        return JSONResponse(
                            status_code=200, content={"status": "success"}
                        )
                    else:
                        logger.error(f"Invalid candidate format: {candidate_str}")
                        return JSONResponse(
                            status_code=200,
                            content={
                                "status": "failed",
                                "meta": {"error": "invalid_candidate_format"},
                            },
                        )
                except Exception as e:
                    logger.error(f"Error adding ICE candidate: {e}", exc_info=True)
                    return JSONResponse(
                        status_code=200,
                        content={"status": "failed", "meta": {"error": str(e)}},
                    )

            return JSONResponse(
                status_code=200,
                content={"status": "failed", "meta": {"error": "connection_closed"}},
            )

        if len(self.connections) >= cast(int, self.concurrency_limit):
            return JSONResponse(
                status_code=200,
                content={
                    "status": "failed",
                    "meta": {
                        "error": "concurrency_limit_reached",
                        "limit": self.concurrency_limit,
                    },
                },
            )

        offer = RTCSessionDescription(sdp=body["sdp"], type=body["type"])

        pc = RTCPeerConnection()
        self.pcs[body["webrtc_id"]] = pc

        if isinstance(self.event_handler, StreamHandlerBase):
            handler = self.event_handler.copy()
            handler.emit = webrtc_error_handler(handler.emit)  # type: ignore
            handler.receive = webrtc_error_handler(handler.receive)  # type: ignore
            handler.start_up = webrtc_error_handler(handler.start_up)  # type: ignore
            handler.shutdown = webrtc_error_handler(handler.shutdown)  # type: ignore
            if hasattr(handler, "video_receive"):
                handler.video_receive = webrtc_error_handler(handler.video_receive)  # type: ignore
            if hasattr(handler, "video_emit"):
                handler.video_emit = webrtc_error_handler(handler.video_emit)  # type: ignore
        elif isinstance(self.event_handler, VideoStreamHandler):
            self.event_handler.callable = cast(
                VideoEventHandler, webrtc_error_handler(self.event_handler.callable)
            )
            handler = self.event_handler
        else:
            handler = webrtc_error_handler(cast(Callable, self.event_handler))

        self.handlers[body["webrtc_id"]] = handler

        @pc.on("iceconnectionstatechange")
        async def on_iceconnectionstatechange():
            logger.debug("ICE connection state change %s", pc.iceConnectionState)
            if pc.iceConnectionState == "failed":
                await pc.close()
                self.connections.pop(body["webrtc_id"], None)
                self.pcs.pop(body["webrtc_id"], None)

        @pc.on("connectionstatechange")
        async def _():
            logger.debug("pc.connectionState %s", pc.connectionState)
            if pc.connectionState in ["failed", "closed"]:
                await pc.close()
                connection = self.clean_up(body["webrtc_id"])
                if connection:
                    for conn in connection:
                        conn.stop()
                self.pcs.pop(body["webrtc_id"], None)
            if pc.connectionState == "connected":
                self.connection_timeouts[body["webrtc_id"]].set()
                if self.time_limit is not None:
                    asyncio.create_task(self.wait_for_time_limit(pc, self.time_limit))

        @pc.on("track")
        def _(track):
            relay = MediaRelay()
            handler = self.handlers[body["webrtc_id"]]
            context = Context(webrtc_id=body["webrtc_id"])
            if self.modality == "video" and track.kind == "video":
                args = {}
                handler_ = handler
                if isinstance(handler, VideoStreamHandler):
                    handler_ = handler.callable
                    args["fps"] = handler.fps
                    args["skip_frames"] = handler.skip_frames
                cb = VideoCallback(
                    relay.subscribe(track),
                    event_handler=cast(Callable, handler_),
                    set_additional_outputs=set_outputs,
                    mode=cast(Literal["send", "send-receive"], self.mode),
                    context=context,
                    **args,
                )
            elif self.modality == "audio-video" and track.kind == "video":
                cb = VideoStreamHandler_(
                    relay.subscribe(track),
                    event_handler=handler,  # type: ignore
                    set_additional_outputs=set_outputs,
                    fps=cast(StreamHandlerImpl, handler).fps,
                    context=context,
                )
            elif self.modality in ["audio", "audio-video"] and track.kind == "audio":
                eh = cast(StreamHandlerImpl, handler)
                eh._loop = asyncio.get_running_loop()
                cb = AudioCallback(
                    relay.subscribe(track),
                    event_handler=eh,
                    set_additional_outputs=set_outputs,
                    context=context,
                )
            else:
                if self.modality not in ["video", "audio", "audio-video"]:
                    msg = "Modality must be either video, audio, or audio-video"
                else:
                    msg = f"Unsupported track kind '{track.kind}' for modality '{self.modality}'"
                raise ValueError(msg)
            if body["webrtc_id"] not in self.connections:
                self.connections[body["webrtc_id"]] = []

            self.connections[body["webrtc_id"]].append(cb)
            if body["webrtc_id"] in self.data_channels:
                for conn in self.connections[body["webrtc_id"]]:
                    conn.set_channel(self.data_channels[body["webrtc_id"]])
            if self.mode == "send-receive":
                logger.debug("Adding track to peer connection %s", cb)
                pc.addTrack(cb)
            elif self.mode == "send":
                asyncio.create_task(cast(AudioCallback | VideoCallback, cb).start())

        context = Context(webrtc_id=body["webrtc_id"])
        if self.mode == "receive":
            if self.modality == "video":
                if isinstance(self.event_handler, VideoStreamHandler):
                    cb = ServerToClientVideo(
                        cast(Callable, self.event_handler.callable),
                        set_additional_outputs=set_outputs,
                        fps=self.event_handler.fps,
                        context=context,
                    )
                else:
                    cb = ServerToClientVideo(
                        cast(Callable, self.event_handler),
                        set_additional_outputs=set_outputs,
                        context=context,
                    )
            elif self.modality == "audio":
                cb = ServerToClientAudio(
                    cast(Callable, self.event_handler),
                    set_additional_outputs=set_outputs,
                    context=context,
                )
            else:
                raise ValueError("Modality must be either video or audio")

            logger.debug("Adding track to peer connection %s", cb)
            pc.addTrack(cb)
            self.connections[body["webrtc_id"]].append(cb)
            cb.on("ended", lambda: self.clean_up(body["webrtc_id"]))

        @pc.on("datachannel")
        def _(channel):
            logger.debug(f"Data channel established: {channel.label}")

            self.data_channels[body["webrtc_id"]] = channel

            async def set_channel(webrtc_id: str):
                while not self.connections.get(webrtc_id):
                    await asyncio.sleep(0.05)
                logger.debug("setting channel for webrtc id %s", webrtc_id)
                for conn in self.connections[webrtc_id]:
                    conn.set_channel(channel)

            asyncio.create_task(set_channel(body["webrtc_id"]))

            @channel.on("message")
            def _(message):
                logger.debug(f"Received message: {message}")
                if channel.readyState == "open":
                    channel.send(
                        create_message("log", data=f"Server received: {message}")
                    )

        # handle offer
        await pc.setRemoteDescription(offer)
        asyncio.create_task(self.connection_timeout(pc, body["webrtc_id"], 30))
        # send answer
        answer = await pc.createAnswer()
        await pc.setLocalDescription(answer)  # type: ignore
        logger.debug("done handling offer about to return")
        await asyncio.sleep(0.1)

        return {
            "sdp": pc.localDescription.sdp,
            "type": pc.localDescription.type,
        }
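
For reference, the signaling exchange that `handle_offer` implements looks like this from the client side (a hedged sketch; the endpoint URL is deployment-specific and `OFFER_URL` is assumed):

import requests

offer_body = {
    "webrtc_id": "my-stream-1",            # chosen by the client
    "sdp": "<offer sdp from createOffer>",
    "type": "offer",
}
answer = requests.post(OFFER_URL, json=offer_body).json()
# answer["sdp"] / answer["type"] are passed to setRemoteDescription on the client.
# Trickle ICE candidates are posted to the same handler as:
#   {"type": "ice-candidate", "candidate": {...}, "webrtc_id": "my-stream-1"}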
215
backend/fastrtc/websocket.py
Normal file
@@ -0,0 +1,215 @@
import asyncio
import audioop
import base64
import logging
from typing import Any, Awaitable, Callable, Optional, cast

import anyio
import librosa
import numpy as np
from fastapi import WebSocket

from .tracks import AsyncStreamHandler, StreamHandlerImpl
from .utils import AdditionalOutputs, DataChannel, split_output


class WebSocketDataChannel(DataChannel):
    def __init__(self, websocket: WebSocket, loop: asyncio.AbstractEventLoop):
        self.websocket = websocket
        self.loop = loop

    def send(self, message: str) -> None:
        asyncio.run_coroutine_threadsafe(self.websocket.send_text(message), self.loop)


logger = logging.getLogger(__file__)


def convert_to_mulaw(
    audio_data: np.ndarray, original_rate: int, target_rate: int
) -> bytes:
    """Convert audio data to 8kHz mu-law format"""

    if audio_data.dtype != np.float32:
        audio_data = audio_data.astype(np.float32) / 32768.0

    if original_rate != target_rate:
        audio_data = librosa.resample(
            audio_data, orig_sr=original_rate, target_sr=target_rate
        )

    audio_data = (audio_data * 32768).astype(np.int16)

    return audioop.lin2ulaw(audio_data, 2)  # type: ignore
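

# Illustrative use: telephone-style transports (e.g. Twilio media streams)
# expect 8 kHz mu-law, so 24 kHz int16 model output would be converted with:
#   mulaw_bytes = convert_to_mulaw(np.zeros(480, dtype=np.int16), 24000, 8000)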

run_sync = anyio.to_thread.run_sync  # type: ignore


class WebSocketHandler:
    def __init__(
        self,
        stream_handler: StreamHandlerImpl,
        set_handler: Callable[[str, "WebSocketHandler"], Awaitable[None]],
        clean_up: Callable[[str], None],
        additional_outputs_factory: Callable[
            [str], Callable[[AdditionalOutputs], None]
        ],
    ):
        self.stream_handler = stream_handler
        self.stream_handler._clear_queue = self._clear_queue
        self.websocket: Optional[WebSocket] = None
        self._emit_task: Optional[asyncio.Task] = None
        self.stream_id: Optional[str] = None
        self.set_additional_outputs_factory = additional_outputs_factory
        self.set_additional_outputs: Callable[[AdditionalOutputs], None]
        self.set_handler = set_handler
        self.quit = asyncio.Event()
        self.clean_up = clean_up
        self.queue = asyncio.Queue()

    def _clear_queue(self):
        old_queue = self.queue
        self.queue = asyncio.Queue()
        logger.debug("clearing queue")
        i = 0
        while not old_queue.empty():
            try:
                old_queue.get_nowait()
                i += 1
            except asyncio.QueueEmpty:
                break
        logger.debug("popped %d items from queue", i)

    def set_args(self, args: list[Any]):
        self.stream_handler.set_args(args)

    async def handle_websocket(self, websocket: WebSocket):
        await websocket.accept()
        loop = asyncio.get_running_loop()
        self.loop = loop
        self.websocket = websocket
        self.data_channel = WebSocketDataChannel(websocket, loop)
        self.stream_handler._loop = loop
        self.stream_handler.set_channel(self.data_channel)
        self._emit_task = asyncio.create_task(self._emit_loop())
        self._emit_to_queue_task = asyncio.create_task(self._emit_to_queue())
        if isinstance(self.stream_handler, AsyncStreamHandler):
            start_up = self.stream_handler.start_up()
        else:
            start_up = anyio.to_thread.run_sync(self.stream_handler.start_up)  # type: ignore

        self.start_up_task = asyncio.create_task(start_up)
        try:
            while not self.quit.is_set():
                message = await websocket.receive_json()

                if message["event"] == "media":
                    audio_payload = base64.b64decode(message["media"]["payload"])

                    audio_array = np.frombuffer(
                        audioop.ulaw2lin(audio_payload, 2), dtype=np.int16
                    )

                    if self.stream_handler.input_sample_rate != 8000:
                        audio_array = audio_array.astype(np.float32) / 32768.0
                        audio_array = librosa.resample(
                            audio_array,
                            orig_sr=8000,
                            target_sr=self.stream_handler.input_sample_rate,
                        )
                        audio_array = (audio_array * 32768).astype(np.int16)
                    if isinstance(self.stream_handler, AsyncStreamHandler):
                        await self.stream_handler.receive(
                            (self.stream_handler.input_sample_rate, audio_array)
                        )
                    else:
                        await run_sync(
                            self.stream_handler.receive,
                            (self.stream_handler.input_sample_rate, audio_array),
                        )

                elif message["event"] == "start":
                    if self.stream_handler.phone_mode:
                        self.stream_id = cast(str, message["streamSid"])
                    else:
                        self.stream_id = cast(str, message["websocket_id"])
                    self.set_additional_outputs = self.set_additional_outputs_factory(
                        self.stream_id
                    )
                    await self.set_handler(self.stream_id, self)
                elif message["event"] == "stop":
                    self.quit.set()
                    self.clean_up(cast(str, self.stream_id))
                    return
                elif message["event"] == "ping":
                    await websocket.send_json({"event": "pong"})
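                # The loop above speaks a Twilio-style JSON protocol. A sketch of
                # the inbound frames (field names taken from the handlers above):
                #
                #     {"event": "start", "websocket_id": "..."}   # "streamSid" in phone mode
                #     {"event": "media", "media": {"payload": "<base64 mu-law audio>"}}
                #     {"event": "ping"}                           # answered with {"event": "pong"}
                #     {"event": "stop"}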

        except Exception as e:
            print(e)
            import traceback

            traceback.print_exc()
            logger.debug("Error in websocket handler %s", e)
        finally:
            if self._emit_task:
                self._emit_task.cancel()
            if self._emit_to_queue_task:
                self._emit_to_queue_task.cancel()
            if self.start_up_task:
                self.start_up_task.cancel()
            await websocket.close()

    async def _emit_to_queue(self):
        try:
            while not self.quit.is_set():
                if isinstance(self.stream_handler, AsyncStreamHandler):
                    output = await self.stream_handler.emit()
                else:
                    output = await run_sync(self.stream_handler.emit)
                self.queue.put_nowait(output)
        except asyncio.CancelledError:
            logger.debug("Emit loop cancelled")
        except Exception as e:
            import traceback

            traceback.print_exc()
            logger.debug("Error in emit loop: %s", e)

    async def _emit_loop(self):
        try:
            while not self.quit.is_set():
                output = await self.queue.get()

                if output is not None:
                    frame, output = split_output(output)
                    if output is not None:
                        self.set_additional_outputs(output)
                    if not isinstance(frame, tuple):
                        continue
                    target_rate = (
                        self.stream_handler.output_sample_rate
                        if not self.stream_handler.phone_mode
                        else 8000
                    )
                    mulaw_audio = convert_to_mulaw(
                        frame[1], frame[0], target_rate=target_rate
                    )
                    audio_payload = base64.b64encode(mulaw_audio).decode("utf-8")

                    if self.websocket and self.stream_id:
                        payload = {
                            "event": "media",
                            "media": {"payload": audio_payload},
                        }
                        if self.stream_handler.phone_mode:
                            payload["streamSid"] = self.stream_id
                        await self.websocket.send_json(payload)

                await asyncio.sleep(0.02)
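                # the 0.02 s sleep above paces the loop at ~20 ms per send; that
                # this matches the Twilio media-frame interval is an assumption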

        except asyncio.CancelledError:
            logger.debug("Emit loop cancelled")
        except Exception as e:
            import traceback

            traceback.print_exc()
            logger.debug("Error in emit loop: %s", e)
@@ -1,3 +0,0 @@
from .vad import SileroVADModel, SileroVadOptions

__all__ = ["SileroVADModel", "SileroVadOptions"]
@@ -1,3 +0,0 @@
from .stt_ import get_stt_model, stt, stt_for_chunks

__all__ = ["stt", "stt_for_chunks", "get_stt_model"]
@@ -1,53 +0,0 @@
from dataclasses import dataclass
from functools import lru_cache
from typing import Callable

import numpy as np
from numpy.typing import NDArray

from ..utils import AudioChunk


@dataclass
class STTModel:
    encoder: Callable
    decoder: Callable


@lru_cache
def get_stt_model() -> STTModel:
    from silero import silero_stt

    model, decoder, _ = silero_stt(language="en", version="v6", jit_model="jit_xlarge")
    return STTModel(model, decoder)


def stt(audio: tuple[int, NDArray[np.int16]]) -> str:
    model = get_stt_model()
    sr, audio_np = audio
    if audio_np.dtype != np.float32:
        print("converting")
        audio_np = audio_np.astype(np.float32) / 32768.0
    try:
        import torch
    except ImportError:
        raise ImportError(
            "PyTorch is required to run speech-to-text for stopword detection. Run `pip install torch`."
        )
    audio_torch = torch.tensor(audio_np, dtype=torch.float32)
    if audio_torch.ndim == 1:
        audio_torch = audio_torch.unsqueeze(0)
    assert audio_torch.ndim == 2, "Audio must have a batch dimension"
    print("before")
    res = model.decoder(model.encoder(audio_torch)[0])
    print("after")
    return res


def stt_for_chunks(
    audio: tuple[int, NDArray[np.int16]], chunks: list[AudioChunk]
) -> str:
    sr, audio_np = audio
    return " ".join(
        [stt((sr, audio_np[chunk["start"] : chunk["end"]])) for chunk in chunks]
    )
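
# A hedged usage sketch (plain dicts stand in for AudioChunk entries; the
# chunk boundaries are illustrative assumptions):
#
#     sr, audio_np = 16000, np.zeros(32000, dtype=np.int16)  # two seconds of silence
#     chunks = [{"start": 0, "end": 16000}, {"start": 16000, "end": 32000}]
#     text = stt_for_chunks((sr, audio_np), chunks)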
File diff suppressed because it is too large
@@ -1,44 +0,0 @@
---
license: mit
tags:
- object-detection
- computer-vision
- yolov10
datasets:
- detection-datasets/coco
sdk: gradio
sdk_version: 5.0.0b1
---

### Model Description
[YOLOv10: Real-Time End-to-End Object Detection](https://arxiv.org/abs/2405.14458v1)

- arXiv: https://arxiv.org/abs/2405.14458v1
- github: https://github.com/THU-MIG/yolov10

### Installation
```
pip install supervision git+https://github.com/THU-MIG/yolov10.git
```

### Yolov10 Inference
```python
from ultralytics import YOLOv10
import supervision as sv
import cv2

IMAGE_PATH = 'dog.jpeg'

model = YOLOv10.from_pretrained('jameslahm/yolov10{n/s/m/b/l/x}')
model.predict(IMAGE_PATH, show=True)
```

### BibTeX Entry and Citation Info
```
@article{wang2024yolov10,
  title={YOLOv10: Real-Time End-to-End Object Detection},
  author={Wang, Ao and Chen, Hui and Liu, Lihao and Chen, Kai and Lin, Zijia and Han, Jungong and Ding, Guiguang},
  journal={arXiv preprint arXiv:2405.14458},
  year={2024}
}
```
@@ -1,105 +0,0 @@
import logging
import os

import gradio as gr
import numpy as np
from gradio_webrtc import AdditionalOutputs, WebRTC
from pydub import AudioSegment
from twilio.rest import Client

# Configure the root logger to WARNING to suppress debug messages from other libraries
logging.basicConfig(level=logging.WARNING)

# Create a file handler
console_handler = logging.FileHandler("gradio_webrtc.log")
console_handler.setLevel(logging.DEBUG)

# Create a formatter
formatter = logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
console_handler.setFormatter(formatter)

# Configure the logger for your specific library
logger = logging.getLogger("gradio_webrtc")
logger.setLevel(logging.DEBUG)
logger.addHandler(console_handler)


account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")

if account_sid and auth_token:
    client = Client(account_sid, auth_token)

    token = client.tokens.create()

    rtc_configuration = {
        "iceServers": token.ice_servers,
        "iceTransportPolicy": "relay",
    }
else:
    rtc_configuration = None


def generation(num_steps):
    for i in range(num_steps):
        segment = AudioSegment.from_file(
            "/Users/freddy/sources/gradio/demo/scratch/audio-streaming/librispeech.mp3"
        )
        yield (
            (
                segment.frame_rate,
                np.array(segment.get_array_of_samples()).reshape(1, -1),
            ),
            AdditionalOutputs(
                f"Hello, from step {i}!",
                "/Users/freddy/sources/gradio/demo/scratch/audio-streaming/librispeech.mp3",
            ),
        )


css = """.my-group {max-width: 600px !important; max-height: 600px !important;}
.my-column {display: flex !important; justify-content: center !important; align-items: center !important};"""


with gr.Blocks() as demo:
    gr.HTML(
        """
    <h1 style='text-align: center'>
    Audio Streaming (Powered by WebRTC ⚡️)
    </h1>
    """
    )
    with gr.Column(elem_classes=["my-column"]):
        with gr.Group(elem_classes=["my-group"]):
            audio = WebRTC(
                label="Stream",
                rtc_configuration=rtc_configuration,
                mode="receive",
                modality="audio",
            )
            num_steps = gr.Slider(
                label="Number of Steps",
                minimum=1,
                maximum=10,
                step=1,
                value=5,
            )
            button = gr.Button("Generate")
            textbox = gr.Textbox(placeholder="Output will appear here.")
            audio_file = gr.Audio()

    audio.stream(
        fn=generation, inputs=[num_steps], outputs=[audio], trigger=button.click
    )
    audio.on_additional_outputs(
        fn=lambda t, a: (f"State changed to {t}.", a),
        outputs=[textbox, audio_file],
    )


if __name__ == "__main__":
    demo.launch(
        allowed_paths=[
            "/Users/freddy/sources/gradio/demo/scratch/audio-streaming/librispeech.mp3"
        ]
    )
367
demo/app.py
@@ -1,367 +0,0 @@
import os

import gradio as gr

_docs = {
    "WebRTC": {
        "description": "Stream audio/video with WebRTC",
        "members": {
            "__init__": {
                "rtc_configuration": {
                    "type": "dict[str, Any] | None",
                    "default": "None",
                    "description": "The configuration dictionary to pass to the RTCPeerConnection constructor. If None, the default configuration is used.",
                },
                "height": {
                    "type": "int | str | None",
                    "default": "None",
                    "description": "The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.",
                },
                "width": {
                    "type": "int | str | None",
                    "default": "None",
                    "description": "The width of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.",
                },
                "label": {
                    "type": "str | None",
                    "default": "None",
                    "description": "the label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.",
                },
                "show_label": {
                    "type": "bool | None",
                    "default": "None",
                    "description": "if True, will display label.",
                },
                "container": {
                    "type": "bool",
                    "default": "True",
                    "description": "if True, will place the component in a container - providing some extra padding around the border.",
                },
                "scale": {
                    "type": "int | None",
                    "default": "None",
                    "description": "relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.",
                },
                "min_width": {
                    "type": "int",
                    "default": "160",
                    "description": "minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.",
                },
                "interactive": {
                    "type": "bool | None",
                    "default": "None",
                    "description": "if True, will allow users to upload a video; if False, can only be used to display videos. If not provided, this is inferred based on whether the component is used as an input or output.",
                },
                "visible": {
                    "type": "bool",
                    "default": "True",
                    "description": "if False, component will be hidden.",
                },
                "elem_id": {
                    "type": "str | None",
                    "default": "None",
                    "description": "an optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.",
                },
                "elem_classes": {
                    "type": "list[str] | str | None",
                    "default": "None",
                    "description": "an optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.",
                },
                "render": {
                    "type": "bool",
                    "default": "True",
                    "description": "if False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.",
                },
                "key": {
                    "type": "int | str | None",
                    "default": "None",
                    "description": "if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.",
                },
                "mirror_webcam": {
                    "type": "bool",
                    "default": "True",
                    "description": "if True webcam will be mirrored. Default is True.",
                },
            },
            "events": {"tick": {"type": None, "default": None, "description": ""}},
        },
        "__meta__": {"additional_interfaces": {}, "user_fn_refs": {"WebRTC": []}},
    }
}


abs_path = os.path.join(os.path.dirname(__file__), "css.css")

with gr.Blocks(
    css_paths=abs_path,
    theme=gr.themes.Default(
        font_mono=[
            gr.themes.GoogleFont("Inconsolata"),
            "monospace",
        ],
    ),
) as demo:
    gr.Markdown(
        """
<h1 style='text-align: center; margin-bottom: 1rem'> Gradio WebRTC ⚡️ </h1>

<div style="display: flex; flex-direction: row; justify-content: center">
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/badge/version%20-%200.0.6%20-%20orange">
<a href="https://github.com/freddyaboulton/gradio-webrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
</div>
""",
        elem_classes=["md-custom"],
        header_links=True,
    )
    gr.Markdown(
        """
## Installation

```bash
pip install gradio_webrtc
```

## Examples:
1. [Object Detection from Webcam with YOLOv10](https://huggingface.co/spaces/freddyaboulton/webrtc-yolov10n) 📷
2. [Streaming Object Detection from Video with RT-DETR](https://huggingface.co/spaces/freddyaboulton/rt-detr-object-detection-webrtc) 🎥
3. [Text-to-Speech](https://huggingface.co/spaces/freddyaboulton/parler-tts-streaming-webrtc) 🗣️
4. [Conversational AI](https://huggingface.co/spaces/freddyaboulton/omni-mini-webrtc) 🤖🗣️

## Usage

The WebRTC component supports the following four use cases:
1. [Streaming video from the user webcam to the server and back](#h-streaming-video-from-the-user-webcam-to-the-server-and-back)
2. [Streaming Video from the server to the client](#h-streaming-video-from-the-server-to-the-client)
3. [Streaming Audio from the server to the client](#h-streaming-audio-from-the-server-to-the-client)
4. [Streaming Audio from the client to the server and back (conversational AI)](#h-conversational-ai)


## Streaming Video from the User Webcam to the Server and Back

```python
import gradio as gr
from gradio_webrtc import WebRTC


def detection(image, conf_threshold=0.3):
    ...  # your detection code here


with gr.Blocks() as demo:
    image = WebRTC(label="Stream", mode="send-receive", modality="video")
    conf_threshold = gr.Slider(
        label="Confidence Threshold",
        minimum=0.0,
        maximum=1.0,
        step=0.05,
        value=0.30,
    )
    image.stream(
        fn=detection,
        inputs=[image, conf_threshold],
        outputs=[image], time_limit=10
    )

if __name__ == "__main__":
    demo.launch()
```

* Set the `mode` parameter to `send-receive` and `modality` to "video".
* The `stream` event's `fn` parameter is a function that receives the next frame from the webcam as a **numpy array** and returns the processed frame also as a **numpy array**.
* Numpy arrays are in (height, width, 3) format where the color channels are in RGB format.
* The `inputs` parameter should be a list where the first element is the WebRTC component. The only output allowed is the WebRTC component.
* The `time_limit` parameter is the maximum time in seconds the video stream will run. If the time limit is reached, the video stream will stop.

## Streaming Video from the server to the client

```python
import gradio as gr
from gradio_webrtc import WebRTC
import cv2

def generation():
    url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
    cap = cv2.VideoCapture(url)
    iterating = True
    while iterating:
        iterating, frame = cap.read()
        yield frame

with gr.Blocks() as demo:
    output_video = WebRTC(label="Video Stream", mode="receive", modality="video")
    button = gr.Button("Start", variant="primary")
    output_video.stream(
        fn=generation, inputs=None, outputs=[output_video],
        trigger=button.click
    )

if __name__ == "__main__":
    demo.launch()
```

* Set the "mode" parameter to "receive" and "modality" to "video".
* The `stream` event's `fn` parameter is a generator function that yields the next frame from the video as a **numpy array**.
* The only output allowed is the WebRTC component.
* The `trigger` parameter is the gradio event that will trigger the webrtc connection. In this case, the button click event.

## Streaming Audio from the Server to the Client

```python
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC
from pydub import AudioSegment

def generation(num_steps):
    for _ in range(num_steps):
        segment = AudioSegment.from_file("/Users/freddy/sources/gradio/demo/audio_debugger/cantina.wav")
        yield (segment.frame_rate, np.array(segment.get_array_of_samples()).reshape(1, -1))

with gr.Blocks() as demo:
    audio = WebRTC(label="Stream", mode="receive", modality="audio")
    num_steps = gr.Slider(
        label="Number of Steps",
        minimum=1,
        maximum=10,
        step=1,
        value=5,
    )
    button = gr.Button("Generate")

    audio.stream(
        fn=generation, inputs=[num_steps], outputs=[audio],
        trigger=button.click
    )
```

* Set the "mode" parameter to "receive" and "modality" to "audio".
* The `stream` event's `fn` parameter is a generator function that yields the next audio segment as a tuple of (frame_rate, audio_samples).
* The numpy array should be of shape (1, num_samples).
* The `outputs` parameter should be a list with the WebRTC component as the only element.

## Conversational AI

```python
import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC, StreamHandler
from queue import Queue
import time


class EchoHandler(StreamHandler):
    def __init__(self) -> None:
        super().__init__()
        self.queue = Queue()

    def receive(self, frame: tuple[int, np.ndarray] | np.ndarray) -> None:
        self.queue.put(frame)

    def emit(self) -> None:
        return self.queue.get()


with gr.Blocks() as demo:
    with gr.Column():
        with gr.Group():
            audio = WebRTC(
                label="Stream",
                rtc_configuration=None,
                mode="send-receive",
                modality="audio",
            )

        audio.stream(fn=EchoHandler(), inputs=[audio], outputs=[audio], time_limit=15)


if __name__ == "__main__":
    demo.launch()
```

* Instead of passing a function to the `stream` event's `fn` parameter, pass a `StreamHandler` implementation. The `StreamHandler` above simply echoes the audio back to the client.
* The `StreamHandler` class has two methods: `receive` and `emit`. The `receive` method is called when a new frame is received from the client, and the `emit` method returns the next frame to send to the client.
* An audio frame is represented as a tuple of (frame_rate, audio_samples) where `audio_samples` is a numpy array of shape (num_channels, num_samples).
* You can also specify the audio layout ("mono" or "stereo") in the emit method by returning it as the third element of the tuple. If not specified, the default is "mono".
* The `time_limit` parameter is the maximum time in seconds the conversation will run. If the time limit is reached, the audio stream will stop.
* The `emit` method SHOULD NOT block. If a frame is not ready to be sent, the method should return None.
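* For example, a non-blocking `emit` could poll the handler's queue and return `None` when nothing is buffered yet (a sketch, assuming `from queue import Empty` alongside the `Queue` used above):

```python
    def emit(self):
        try:
            return self.queue.get_nowait()  # send a frame if one is buffered
        except Empty:
            return None  # nothing ready yet; never block the stream loop
```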

## Deployment

When deploying in a cloud environment (like Hugging Face Spaces, EC2, etc), you need to set up a TURN server to relay the WebRTC traffic.
The easiest way to do this is to use a service like Twilio.

```python
from twilio.rest import Client
import os

account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")

client = Client(account_sid, auth_token)

token = client.tokens.create()

rtc_configuration = {
    "iceServers": token.ice_servers,
    "iceTransportPolicy": "relay",
}

with gr.Blocks() as demo:
    ...
    rtc = WebRTC(rtc_configuration=rtc_configuration, ...)
    ...
```
""",
        elem_classes=["md-custom"],
        header_links=True,
    )

    gr.Markdown(
        """
##
""",
        elem_classes=["md-custom"],
        header_links=True,
    )

    gr.ParamViewer(value=_docs["WebRTC"]["members"]["__init__"], linkify=[])

    demo.load(
        None,
        js=r"""function() {
    const refs = {};
    const user_fn_refs = {
        WebRTC: [], };
    requestAnimationFrame(() => {

        Object.entries(user_fn_refs).forEach(([key, refs]) => {
            if (refs.length > 0) {
                const el = document.querySelector(`.${key}-user-fn`);
                if (!el) return;
                refs.forEach(ref => {
                    el.innerHTML = el.innerHTML.replace(
                        new RegExp("\\b"+ref+"\\b", "g"),
                        `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
                    );
                })
            }
        })

        Object.entries(refs).forEach(([key, refs]) => {
            if (refs.length > 0) {
                const el = document.querySelector(`.${key}`);
                if (!el) return;
                refs.forEach(ref => {
                    el.innerHTML = el.innerHTML.replace(
                        new RegExp("\\b"+ref+"\\b", "g"),
                        `<a href="#h-${ref.toLowerCase()}">${ref}</a>`
                    );
                })
            }
        })
    })
}
""",
    )

demo.launch()
367
demo/app_.py
@@ -1,367 +0,0 @@
@@ -1,73 +0,0 @@
import os

import cv2
import gradio as gr
from gradio_webrtc import WebRTC
from huggingface_hub import hf_hub_download
from inference import YOLOv10
from twilio.rest import Client

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)

model = YOLOv10(model_file)

account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")

if account_sid and auth_token:
    client = Client(account_sid, auth_token)

    token = client.tokens.create()

    rtc_configuration = {
        "iceServers": token.ice_servers,
        "iceTransportPolicy": "relay",
    }
else:
    rtc_configuration = None


def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return cv2.resize(new_image, (500, 500))


css = """.my-group {max-width: 600px !important; max-height: 600px !important;}
.my-column {display: flex !important; justify-content: center !important; align-items: center !important};"""


with gr.Blocks(css=css) as demo:
    gr.HTML(
        """
    <h1 style='text-align: center'>
    YOLOv10 Webcam Stream (Powered by WebRTC ⚡️)
    </h1>
    """
    )
    gr.HTML(
        """
    <h3 style='text-align: center'>
    <a href='https://arxiv.org/abs/2405.14458' target='_blank'>arXiv</a> | <a href='https://github.com/THU-MIG/yolov10' target='_blank'>github</a>
    </h3>
    """
    )
    with gr.Column(elem_classes=["my-column"]):
        with gr.Group(elem_classes=["my-group"]):
            image = WebRTC(label="Stream", rtc_configuration=rtc_configuration)
            conf_threshold = gr.Slider(
                label="Confidence Threshold",
                minimum=0.0,
                maximum=1.0,
                step=0.05,
                value=0.30,
            )

    image.stream(
        fn=detection, inputs=[image, conf_threshold], outputs=[image], time_limit=10
    )

if __name__ == "__main__":
    demo.launch()
@@ -1,71 +0,0 @@
import os

import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC
from pydub import AudioSegment
from twilio.rest import Client

account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")

if account_sid and auth_token:
    client = Client(account_sid, auth_token)

    token = client.tokens.create()

    rtc_configuration = {
        "iceServers": token.ice_servers,
        "iceTransportPolicy": "relay",
    }
else:
    rtc_configuration = None


def generation(num_steps):
    for _ in range(num_steps):
        segment = AudioSegment.from_file(
            "/Users/freddy/sources/gradio/demo/audio_debugger/cantina.wav"
        )
        yield (
            segment.frame_rate,
            np.array(segment.get_array_of_samples()).reshape(1, -1),
        )


css = """.my-group {max-width: 600px !important; max-height: 600px !important;}
.my-column {display: flex !important; justify-content: center !important; align-items: center !important};"""


with gr.Blocks() as demo:
    gr.HTML(
        """
    <h1 style='text-align: center'>
    Audio Streaming (Powered by WebRTC ⚡️)
    </h1>
    """
    )
    with gr.Column(elem_classes=["my-column"]):
        with gr.Group(elem_classes=["my-group"]):
            audio = WebRTC(
                label="Stream",
                rtc_configuration=rtc_configuration,
                mode="receive",
                modality="audio",
            )
            num_steps = gr.Slider(
                label="Number of Steps",
                minimum=1,
                maximum=10,
                step=1,
                value=5,
            )
            button = gr.Button("Generate")

    audio.stream(
        fn=generation, inputs=[num_steps], outputs=[audio], trigger=button.click
    )


if __name__ == "__main__":
    demo.launch()
@@ -1,64 +0,0 @@
import os
import time

import gradio as gr
import numpy as np
from gradio_webrtc import WebRTC
from pydub import AudioSegment
from twilio.rest import Client

account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")

if account_sid and auth_token:
    client = Client(account_sid, auth_token)

    token = client.tokens.create()

    rtc_configuration = {
        "iceServers": token.ice_servers,
        "iceTransportPolicy": "relay",
    }
else:
    rtc_configuration = None


def generation(num_steps):
    for _ in range(num_steps):
        segment = AudioSegment.from_file(
            "/Users/freddy/sources/gradio/demo/audio_debugger/cantina.wav"
        )
        yield (
            segment.frame_rate,
            np.array(segment.get_array_of_samples()).reshape(1, -1),
        )
        time.sleep(3.5)


css = """.my-group {max-width: 600px !important; max-height: 600px !important;}
.my-column {display: flex !important; justify-content: center !important; align-items: center !important};"""


with gr.Blocks() as demo:
    gr.HTML(
        """
    <h1 style='text-align: center'>
    Audio Streaming (Powered by WebRTC ⚡️)
    </h1>
    """
    )
    with gr.Row():
        with gr.Column():
            gr.Slider()
        with gr.Column():
            # audio = gr.Audio(interactive=False)
            audio = WebRTC(
                label="Stream",
                rtc_configuration=rtc_configuration,
                mode="receive",
                modality="audio",
            )


if __name__ == "__main__":
    demo.launch()
161
demo/css.css
@@ -1,161 +0,0 @@
html {
  font-family: Inter;
  font-size: 16px;
  font-weight: 400;
  line-height: 1.5;
  -webkit-text-size-adjust: 100%;
  background: #fff;
  color: #323232;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-rendering: optimizeLegibility;
}

:root {
  --space: 1;
  --vspace: calc(var(--space) * 1rem);
  --vspace-0: calc(3 * var(--space) * 1rem);
  --vspace-1: calc(2 * var(--space) * 1rem);
  --vspace-2: calc(1.5 * var(--space) * 1rem);
  --vspace-3: calc(0.5 * var(--space) * 1rem);
}

.app {
  max-width: 748px !important;
}

.prose p {
  margin: var(--vspace) 0;
  line-height: calc(var(--vspace) * 2);
  font-size: 1rem;
}

code {
  font-family: "Inconsolata", sans-serif;
  font-size: 16px;
}

h1,
h1 code {
  font-weight: 400;
  line-height: calc(2.5 / var(--space) * var(--vspace));
}

h1 code {
  background: none;
  border: none;
  letter-spacing: 0.05em;
  padding-bottom: 5px;
  position: relative;
  padding: 0;
}

h2 {
  margin: var(--vspace-1) 0 var(--vspace-2) 0;
  line-height: 1em;
}

h3,
h3 code {
  margin: var(--vspace-1) 0 var(--vspace-2) 0;
  line-height: 1em;
}

h4,
h5,
h6 {
  margin: var(--vspace-3) 0 var(--vspace-3) 0;
  line-height: var(--vspace);
}

.bigtitle,
h1,
h1 code {
  font-size: calc(8px * 4.5);
  word-break: break-word;
}

.title,
h2,
h2 code {
  font-size: calc(8px * 3.375);
  font-weight: lighter;
  word-break: break-word;
  border: none;
  background: none;
}

.subheading1,
h3,
h3 code {
  font-size: calc(8px * 1.8);
  font-weight: 600;
  border: none;
  background: none;
  letter-spacing: 0.1em;
  text-transform: uppercase;
}

h2 code {
  padding: 0;
  position: relative;
  letter-spacing: 0.05em;
}

blockquote {
  font-size: calc(8px * 1.1667);
  font-style: italic;
  line-height: calc(1.1667 * var(--vspace));
  margin: var(--vspace-2) var(--vspace-2);
}

.subheading2,
h4 {
  font-size: calc(8px * 1.4292);
  text-transform: uppercase;
  font-weight: 600;
}

.subheading3,
h5 {
  font-size: calc(8px * 1.2917);
  line-height: calc(1.2917 * var(--vspace));

  font-weight: lighter;
  text-transform: uppercase;
  letter-spacing: 0.15em;
}

h6 {
  font-size: calc(8px * 1.1667);
  font-size: 1.1667em;
  font-weight: normal;
  font-style: italic;
  font-family: "le-monde-livre-classic-byol", serif !important;
  letter-spacing: 0px !important;
}

#start .md > *:first-child {
  margin-top: 0;
}

h2 + h3 {
  margin-top: 0;
}

.md hr {
  border: none;
  border-top: 1px solid var(--block-border-color);
  margin: var(--vspace-2) 0 var(--vspace-2) 0;
}
.prose ul {
  margin: var(--vspace-2) 0 var(--vspace-1) 0;
}

.gap {
  gap: 0;
}

.md-custom {
  overflow: hidden;
}
99
demo/docs.py
@@ -1,99 +0,0 @@
_docs = {
    "WebRTC": {
        "description": "Stream audio/video with WebRTC",
        "members": {
            "__init__": {
                "rtc_configuration": {
                    "type": "dict[str, Any] | None",
                    "default": "None",
                    "description": "The configuration dictionary to pass to the RTCPeerConnection constructor. If None, the default configuration is used.",
                },
                "height": {
                    "type": "int | str | None",
                    "default": "None",
                    "description": "The height of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.",
                },
                "width": {
                    "type": "int | str | None",
                    "default": "None",
                    "description": "The width of the component, specified in pixels if a number is passed, or in CSS units if a string is passed. This has no effect on the preprocessed video file, but will affect the displayed video.",
                },
                "label": {
                    "type": "str | None",
                    "default": "None",
                    "description": "the label for this component. Appears above the component and is also used as the header if there is a table of examples for this component. If None and used in a `gr.Interface`, the label will be the name of the parameter this component is assigned to.",
                },
                "show_label": {
                    "type": "bool | None",
                    "default": "None",
                    "description": "if True, will display label.",
                },
                "container": {
                    "type": "bool",
                    "default": "True",
                    "description": "if True, will place the component in a container - providing some extra padding around the border.",
                },
                "scale": {
                    "type": "int | None",
                    "default": "None",
                    "description": "relative size compared to adjacent Components. For example if Components A and B are in a Row, and A has scale=2, and B has scale=1, A will be twice as wide as B. Should be an integer. scale applies in Rows, and to top-level Components in Blocks where fill_height=True.",
                },
                "min_width": {
                    "type": "int",
                    "default": "160",
                    "description": "minimum pixel width, will wrap if not sufficient screen space to satisfy this value. If a certain scale value results in this Component being narrower than min_width, the min_width parameter will be respected first.",
                },
                "interactive": {
                    "type": "bool | None",
                    "default": "None",
                    "description": "if True, will allow users to upload a video; if False, can only be used to display videos. If not provided, this is inferred based on whether the component is used as an input or output.",
                },
                "visible": {
                    "type": "bool",
                    "default": "True",
                    "description": "if False, component will be hidden.",
                },
                "elem_id": {
                    "type": "str | None",
                    "default": "None",
                    "description": "an optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles.",
                },
                "elem_classes": {
                    "type": "list[str] | str | None",
                    "default": "None",
                    "description": "an optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles.",
                },
                "render": {
                    "type": "bool",
                    "default": "True",
                    "description": "if False, component will not be rendered in the Blocks context. Should be used if the intention is to assign event listeners now but render the component later.",
                },
                "key": {
                    "type": "int | str | None",
                    "default": "None",
                    "description": "if assigned, will be used to assume identity across a re-render. Components that have the same key across a re-render will have their value preserved.",
                },
                "mirror_webcam": {
                    "type": "bool",
                    "default": "True",
                    "description": "if True webcam will be mirrored. Default is True.",
                },
                "postprocess": {
                    "value": {
                        "type": "typing.Any",
                        "description": "Expects a {str} or {pathlib.Path} filepath to a video which is displayed, or a {Tuple[str | pathlib.Path, str | pathlib.Path | None]} where the first element is a filepath to a video and the second element is an optional filepath to a subtitle file.",
                    }
                },
                "preprocess": {
                    "return": {
                        "type": "str",
                        "description": "Passes the uploaded video as a `str` filepath or URL whose extension can be modified by `format`.",
                    },
                    "value": None,
                },
            },
            "events": {"tick": {"type": None, "default": None, "description": ""}},
        },
        "__meta__": {"additional_interfaces": {}, "user_fn_refs": {"WebRTC": []}},
    }
}
15
demo/echo_audio/README.md
Normal file
@@ -0,0 +1,15 @@
---
title: Echo Audio
emoji: 🪩
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
license: mit
short_description: Simple echo stream - simplest FastRTC demo
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
45
demo/echo_audio/app.py
Normal file
@@ -0,0 +1,45 @@
import numpy as np
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from fastrtc import ReplyOnPause, Stream, get_twilio_turn_credentials
from gradio.utils import get_space


def detection(audio: tuple[int, np.ndarray]):
    # Implement any iterator that yields audio
    # See "LLM Voice Chat" for a more complete example
    yield audio


stream = Stream(
    handler=ReplyOnPause(detection),
    modality="audio",
    mode="send-receive",
    rtc_configuration=get_twilio_turn_credentials() if get_space() else None,
    concurrency_limit=5 if get_space() else None,
    time_limit=90 if get_space() else None,
)

app = FastAPI()

stream.mount(app)


@app.get("/")
async def index():
    return RedirectResponse(
        url="/ui" if not get_space() else "https://fastrtc-echo-audio.hf.space/ui/"
    )


if __name__ == "__main__":
    import os

    if (mode := os.getenv("MODE")) == "UI":
        stream.ui.launch(server_port=7860)
    elif mode == "PHONE":
        stream.fastphone(port=7860)
    else:
        import uvicorn

        uvicorn.run(app, host="0.0.0.0", port=7860)
|
||||
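`ReplyOnPause` accepts any generator that yields audio; the echo above returns the whole utterance in one piece. A sketch of a handler that instead streams the reply back in fixed-size chunks (the chunk size is an assumption for illustration):

```python
import numpy as np


def chunked_echo(audio: tuple[int, np.ndarray]):
    sample_rate, array = audio
    chunk = sample_rate // 2  # roughly half a second per chunk (assumption)
    for start in range(0, array.shape[-1], chunk):
        # Yield (sample_rate, samples) tuples, same convention as detection() above
        yield (sample_rate, array[..., start : start + chunk])
```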
3
demo/echo_audio/requirements.txt
Normal file
3
demo/echo_audio/requirements.txt
Normal file
@@ -0,0 +1,3 @@
fastrtc[vad]
twilio
python-dotenv
@@ -1,61 +0,0 @@
import logging
from queue import Queue

import gradio as gr
import numpy as np
from gradio_webrtc import StreamHandler, WebRTC

# Configure the root logger to WARNING to suppress debug messages from other libraries
logging.basicConfig(level=logging.WARNING)

# Create a console handler
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.DEBUG)

# Create a formatter
formatter = logging.Formatter("%(name)s - %(levelname)s - %(message)s")
console_handler.setFormatter(formatter)

# Configure the logger for your specific library
logger = logging.getLogger("gradio_webrtc")
logger.setLevel(logging.DEBUG)
logger.addHandler(console_handler)


class EchoHandler(StreamHandler):
    def __init__(self) -> None:
        super().__init__()
        self.queue = Queue()

    def receive(self, frame: tuple[int, np.ndarray] | np.ndarray) -> None:
        self.queue.put(frame)

    def emit(self) -> None:
        return self.queue.get()

    def copy(self) -> StreamHandler:
        return EchoHandler()


with gr.Blocks() as demo:
    gr.HTML(
        """
    <h1 style='text-align: center'>
    Conversational AI (Powered by WebRTC ⚡️)
    </h1>
    """
    )
    with gr.Column():
        with gr.Group():
            audio = WebRTC(
                label="Stream",
                rtc_configuration=None,
                mode="send-receive",
                modality="audio",
            )

        audio.stream(fn=EchoHandler(), inputs=[audio], outputs=[audio], time_limit=15)


if __name__ == "__main__":
    demo.launch()
15
demo/gemini_audio_video/README.md
Normal file
15
demo/gemini_audio_video/README.md
Normal file
@@ -0,0 +1,15 @@
---
title: Gemini Audio Video
emoji: ♊️
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
license: mit
short_description: Gemini understands audio and video!
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN, secret|GEMINI_API_KEY]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
183
demo/gemini_audio_video/app.py
Normal file
183
demo/gemini_audio_video/app.py
Normal file
@@ -0,0 +1,183 @@
import asyncio
import base64
import os
import time
from io import BytesIO

import gradio as gr
import numpy as np
from dotenv import load_dotenv
from fastrtc import (
    AsyncAudioVideoStreamHandler,
    Stream,
    WebRTC,
    get_twilio_turn_credentials,
)
from google import genai
from gradio.utils import get_space
from PIL import Image

load_dotenv()


def encode_audio(data: np.ndarray) -> dict:
    """Encode Audio data to send to the server"""
    return {
        "mime_type": "audio/pcm",
        "data": base64.b64encode(data.tobytes()).decode("UTF-8"),
    }


def encode_image(data: np.ndarray) -> dict:
    with BytesIO() as output_bytes:
        pil_image = Image.fromarray(data)
        pil_image.save(output_bytes, "JPEG")
        bytes_data = output_bytes.getvalue()
    base64_str = str(base64.b64encode(bytes_data), "utf-8")
    return {"mime_type": "image/jpeg", "data": base64_str}


class GeminiHandler(AsyncAudioVideoStreamHandler):
    def __init__(
        self,
    ) -> None:
        super().__init__(
            "mono",
            output_sample_rate=24000,
            input_sample_rate=16000,
        )
        self.audio_queue = asyncio.Queue()
        self.video_queue = asyncio.Queue()
        self.session = None
        self.last_frame_time = 0
        self.quit = asyncio.Event()

    def copy(self) -> "GeminiHandler":
        return GeminiHandler()

    async def start_up(self):
        client = genai.Client(
            api_key=os.getenv("GEMINI_API_KEY"), http_options={"api_version": "v1alpha"}
        )
        config = {"response_modalities": ["AUDIO"]}
        async with client.aio.live.connect(
            model="gemini-2.0-flash-exp", config=config
        ) as session:
            self.session = session
            print("set session")
            while not self.quit.is_set():
                turn = self.session.receive()
                async for response in turn:
                    if data := response.data:
                        audio = np.frombuffer(data, dtype=np.int16).reshape(1, -1)
                        self.audio_queue.put_nowait(audio)

    async def video_receive(self, frame: np.ndarray):
        if self.session:
            # send image every 1 second
            print(time.time() - self.last_frame_time)
            if time.time() - self.last_frame_time > 1:
                self.last_frame_time = time.time()
                await self.session.send(input=encode_image(frame))
                if self.latest_args[1] is not None:
                    await self.session.send(input=encode_image(self.latest_args[1]))

        self.video_queue.put_nowait(frame)

    async def video_emit(self):
        return await self.video_queue.get()

    async def receive(self, frame: tuple[int, np.ndarray]) -> None:
        _, array = frame
        array = array.squeeze()
        audio_message = encode_audio(array)
        if self.session:
            await self.session.send(input=audio_message)

    async def emit(self):
        array = await self.audio_queue.get()
        return (self.output_sample_rate, array)

    async def shutdown(self) -> None:
        if self.session:
            self.quit.set()
            await self.session._websocket.close()
            self.quit.clear()


stream = Stream(
    handler=GeminiHandler(),
    modality="audio-video",
    mode="send-receive",
    rtc_configuration=get_twilio_turn_credentials()
    if get_space()
    else None,
    time_limit=90 if get_space() else None,
    additional_inputs=[
        gr.Image(label="Image", type="numpy", sources=["upload", "clipboard"])
    ],
    ui_args={
        "icon": "https://www.gstatic.com/lamda/images/gemini_favicon_f069958c85030456e93de685481c559f160ea06b.png",
        "pulse_color": "rgb(255, 255, 255)",
        "icon_button_color": "rgb(255, 255, 255)",
        "title": "Gemini Audio Video Chat",
    },
)

css = """
#video-source {max-width: 600px !important; max-height: 600px !important;}
"""

with gr.Blocks(css=css) as demo:
    gr.HTML(
        """
    <div style='display: flex; align-items: center; justify-content: center; gap: 20px'>
        <div style="background-color: var(--block-background-fill); border-radius: 8px">
            <img src="https://www.gstatic.com/lamda/images/gemini_favicon_f069958c85030456e93de685481c559f160ea06b.png" style="width: 100px; height: 100px;">
        </div>
        <div>
            <h1>Gen AI SDK Voice Chat</h1>
            <p>Speak with Gemini using real-time audio + video streaming</p>
            <p>Powered by <a href="https://gradio.app/">Gradio</a> and <a href="https://freddyaboulton.github.io/gradio-webrtc/">WebRTC</a>⚡️</p>
            <p>Get an API Key <a href="https://support.google.com/googleapi/answer/6158862?hl=en">here</a></p>
        </div>
    </div>
    """
    )
    with gr.Row() as row:
        with gr.Column():
            webrtc = WebRTC(
                label="Video Chat",
                modality="audio-video",
                mode="send-receive",
                elem_id="video-source",
                rtc_configuration=get_twilio_turn_credentials()
                if get_space()
                else None,
                icon="https://www.gstatic.com/lamda/images/gemini_favicon_f069958c85030456e93de685481c559f160ea06b.png",
                pulse_color="rgb(255, 255, 255)",
                icon_button_color="rgb(255, 255, 255)",
            )
        with gr.Column():
            image_input = gr.Image(
                label="Image", type="numpy", sources=["upload", "clipboard"]
            )

    webrtc.stream(
        GeminiHandler(),
        inputs=[webrtc, image_input],
        outputs=[webrtc],
        time_limit=60 if get_space() else None,
        concurrency_limit=2 if get_space() else None,
    )

stream.ui = demo


if __name__ == "__main__":
    if (mode := os.getenv("MODE")) == "UI":
        stream.ui.launch(server_port=7860)
    elif mode == "PHONE":
        raise ValueError("Phone mode not supported for this demo")
    else:
        stream.ui.launch(server_port=7860)
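The `encode_audio` helper above base64-encodes raw int16 PCM, and the receive loop reverses it with `np.frombuffer`. A round-trip sketch of that encoding, using only the standard library and numpy:

```python
import base64

import numpy as np

pcm = np.array([0, 1024, -1024], dtype=np.int16)
encoded = base64.b64encode(pcm.tobytes()).decode("UTF-8")  # as in encode_audio
decoded = np.frombuffer(base64.b64decode(encoded), dtype=np.int16)
assert np.array_equal(pcm, decoded)
```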
4
demo/gemini_audio_video/requirements.txt
Normal file
4
demo/gemini_audio_video/requirements.txt
Normal file
@@ -0,0 +1,4 @@
fastrtc
python-dotenv
google-genai
twilio
15
demo/gemini_conversation/README.md
Normal file
15
demo/gemini_conversation/README.md
Normal file
@@ -0,0 +1,15 @@
---
title: Gemini Talking to Gemini
emoji: ♊️
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.17.0
app_file: app.py
pinned: false
license: mit
short_description: Have two Gemini agents talk to each other
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN, secret|GEMINI_API_KEY]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
231
demo/gemini_conversation/app.py
Normal file
231
demo/gemini_conversation/app.py
Normal file
@@ -0,0 +1,231 @@
import asyncio
import base64
import os
from pathlib import Path
from typing import AsyncGenerator

import librosa
import numpy as np
from dotenv import load_dotenv
from fastrtc import (
    AsyncStreamHandler,
    Stream,
    get_tts_model,
    wait_for_item,
)
from fastrtc.utils import audio_to_int16
from google import genai
from google.genai.types import (
    Content,
    LiveConnectConfig,
    Part,
    PrebuiltVoiceConfig,
    SpeechConfig,
    VoiceConfig,
)

load_dotenv()

cur_dir = Path(__file__).parent

SAMPLE_RATE = 24000

tts_model = get_tts_model()


class GeminiHandler(AsyncStreamHandler):
    """Handler for the Gemini API"""

    def __init__(
        self,
    ) -> None:
        super().__init__(
            expected_layout="mono",
            output_sample_rate=24000,
            input_sample_rate=24000,
        )
        self.input_queue: asyncio.Queue = asyncio.Queue()
        self.output_queue: asyncio.Queue = asyncio.Queue()
        self.quit: asyncio.Event = asyncio.Event()

    def copy(self) -> "GeminiHandler":
        return GeminiHandler()

    async def start_up(self):
        voice_name = "Charon"
        client = genai.Client(
            api_key=os.getenv("GEMINI_API_KEY"),
            http_options={"api_version": "v1alpha"},
        )

        config = LiveConnectConfig(
            response_modalities=["AUDIO"],  # type: ignore
            speech_config=SpeechConfig(
                voice_config=VoiceConfig(
                    prebuilt_voice_config=PrebuiltVoiceConfig(
                        voice_name=voice_name,
                    )
                )
            ),
            system_instruction=Content(
                parts=[Part(text="You are a helpful assistant.")],
                role="system",
            ),
        )
        async with client.aio.live.connect(
            model="gemini-2.0-flash-exp", config=config
        ) as session:
            async for audio in session.start_stream(
                stream=self.stream(), mime_type="audio/pcm"
            ):
                if audio.data:
                    array = np.frombuffer(audio.data, dtype=np.int16)
                    self.output_queue.put_nowait((self.output_sample_rate, array))

    async def stream(self) -> AsyncGenerator[bytes, None]:
        while not self.quit.is_set():
            try:
                audio = await asyncio.wait_for(self.input_queue.get(), 0.1)
                yield audio
            except (asyncio.TimeoutError, TimeoutError):
                pass

    async def receive(self, frame: tuple[int, np.ndarray]) -> None:
        _, array = frame
        array = array.squeeze()
        audio_message = base64.b64encode(array.tobytes()).decode("UTF-8")
        self.input_queue.put_nowait(audio_message)

    async def emit(self) -> tuple[int, np.ndarray] | None:
        return await wait_for_item(self.output_queue)

    def shutdown(self) -> None:
        self.quit.set()


class GeminiHandler2(GeminiHandler):
    async def start_up(self):
        starting_message = tts_model.tts("Can you help me make an omelette?")
        starting_message = librosa.resample(
            starting_message[1],
            orig_sr=starting_message[0],
            target_sr=self.output_sample_rate,
        )
        starting_message = audio_to_int16((self.output_sample_rate, starting_message))
        await self.output_queue.put((self.output_sample_rate, starting_message))
        voice_name = "Puck"
        client = genai.Client(
            api_key=os.getenv("GEMINI_API_KEY"),
            http_options={"api_version": "v1alpha"},
        )

        config = LiveConnectConfig(
            response_modalities=["AUDIO"],  # type: ignore
            speech_config=SpeechConfig(
                voice_config=VoiceConfig(
                    prebuilt_voice_config=PrebuiltVoiceConfig(
                        voice_name=voice_name,
                    )
                )
            ),
            system_instruction=Content(
                parts=[
                    Part(
                        text="You are a cooking student who wants to learn how to make an omelette."
                    ),
                    Part(
                        text="You are currently in the kitchen with a teacher who is helping you make an omelette."
                    ),
                    Part(
                        text="Please wait for the teacher to tell you what to do next. Follow the teacher's instructions carefully."
                    ),
                ],
                role="system",
            ),
        )
        async with client.aio.live.connect(
            model="gemini-2.0-flash-exp", config=config
        ) as session:
            async for audio in session.start_stream(
                stream=self.stream(), mime_type="audio/pcm"
            ):
                if audio.data:
                    array = np.frombuffer(audio.data, dtype=np.int16)
                    self.output_queue.put_nowait((self.output_sample_rate, array))

    def copy(self) -> "GeminiHandler2":
        return GeminiHandler2()


gemini_stream = Stream(
    GeminiHandler(),
    modality="audio",
    mode="send-receive",
    ui_args={
        "title": "Gemini Teacher",
        "icon": "https://www.gstatic.com/lamda/images/gemini_favicon_f069958c85030456e93de685481c559f160ea06b.png",
        "pulse_color": "rgb(74, 138, 213)",
        "icon_button_color": "rgb(255, 255, 255)",
    },
)

gemini_stream_2 = Stream(
    GeminiHandler2(),
    modality="audio",
    mode="send-receive",
    ui_args={
        "title": "Gemini Student",
        "icon": "https://www.gstatic.com/lamda/images/gemini_favicon_f069958c85030456e93de685481c559f160ea06b.png",
        "pulse_color": "rgb(132, 112, 196)",
        "icon_button_color": "rgb(255, 255, 255)",
    },
)

if __name__ == "__main__":
    import gradio as gr
    from gradio.utils import get_space

    if not get_space():
        with gr.Blocks() as demo:
            gr.HTML(
                """
            <div style="display: flex; justify-content: center; align-items: center;">
                <h1>Gemini Conversation</h1>
            </div>
            """
            )
            gr.Markdown(
                """# How to run this demo

- Clone the repo - top right of the page click the vertical three dots and select "Clone repository"
- Open the repo in a terminal and install the dependencies
- Get a Gemini API key [here](https://ai.google.dev/gemini-api/docs/api-key)
- Create a `.env` file in the root of the repo and add the following:
```
GEMINI_API_KEY=<your_gemini_api_key>
```
- Run the app with `python app.py`
- This will print the two URLs of the agents running locally
- Use ngrok to expose one agent to the internet so that you can access it from your phone
- Use the ngrok URL to access the agent from your phone
- Now, start the "teacher" Gemini agent first. Then, start the "student" Gemini agent. The student will start talking to the teacher, and the teacher will respond!

Important:
- Make sure the audio sources are not too close to each other or too loud. Sometimes that causes them to talk over each other.
- Feel free to modify the `system_instruction` to change the behavior of the agents.
- You can also modify the `voice_name` to change the voice of the agents.
- Have fun!
"""
            )
        demo.launch()

    import time

    _ = gemini_stream.ui.launch(server_port=7860, prevent_thread_lock=True)
    _ = gemini_stream_2.ui.launch(server_port=7861, prevent_thread_lock=True)
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        gemini_stream.ui.close()
        gemini_stream_2.ui.close()
15
demo/hello_computer/README.md
Normal file
15
demo/hello_computer/README.md
Normal file
@@ -0,0 +1,15 @@
---
title: Hello Computer
emoji: 💻
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
license: mit
short_description: Say computer before asking your question
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN, secret|SAMBANOVA_API_KEY]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
15
demo/hello_computer/README_gradio.md
Normal file
15
demo/hello_computer/README_gradio.md
Normal file
@@ -0,0 +1,15 @@
---
title: Hello Computer (Gradio)
emoji: 💻
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
license: mit
short_description: Say computer (Gradio)
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN, secret|SAMBANOVA_API_KEY]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
145
demo/hello_computer/app.py
Normal file
145
demo/hello_computer/app.py
Normal file
@@ -0,0 +1,145 @@
import base64
import json
import os
from pathlib import Path

import gradio as gr
import huggingface_hub
import numpy as np
from dotenv import load_dotenv
from fastapi import FastAPI
from fastapi.responses import HTMLResponse, StreamingResponse
from fastrtc import (
    AdditionalOutputs,
    ReplyOnStopWords,
    Stream,
    get_stt_model,
    get_twilio_turn_credentials,
)
from gradio.utils import get_space
from pydantic import BaseModel

load_dotenv()

curr_dir = Path(__file__).parent


client = huggingface_hub.InferenceClient(
    api_key=os.environ.get("SAMBANOVA_API_KEY"),
    provider="sambanova",
)
model = get_stt_model()


def response(
    audio: tuple[int, np.ndarray],
    gradio_chatbot: list[dict] | None = None,
    conversation_state: list[dict] | None = None,
):
    gradio_chatbot = gradio_chatbot or []
    conversation_state = conversation_state or []
    text = model.stt(audio)
    print("STT in handler", text)
    sample_rate, array = audio
    gradio_chatbot.append(
        {"role": "user", "content": gr.Audio((sample_rate, array.squeeze()))}
    )
    yield AdditionalOutputs(gradio_chatbot, conversation_state)

    conversation_state.append({"role": "user", "content": text})

    request = client.chat.completions.create(
        model="meta-llama/Llama-3.2-3B-Instruct",
        messages=conversation_state,  # type: ignore
        temperature=0.1,
        top_p=0.1,
    )
    response = {"role": "assistant", "content": request.choices[0].message.content}

    conversation_state.append(response)
    gradio_chatbot.append(response)

    yield AdditionalOutputs(gradio_chatbot, conversation_state)


chatbot = gr.Chatbot(type="messages", value=[])
state = gr.State(value=[])
stream = Stream(
    ReplyOnStopWords(
        response,  # type: ignore
        stop_words=["computer"],
        input_sample_rate=16000,
    ),
    mode="send",
    modality="audio",
    additional_inputs=[chatbot, state],
    additional_outputs=[chatbot, state],
    additional_outputs_handler=lambda *a: (a[2], a[3]),
    concurrency_limit=5 if get_space() else None,
    time_limit=90 if get_space() else None,
    rtc_configuration=get_twilio_turn_credentials() if get_space() else None,
)
app = FastAPI()
stream.mount(app)


class Message(BaseModel):
    role: str
    content: str


class InputData(BaseModel):
    webrtc_id: str
    chatbot: list[Message]
    state: list[Message]


@app.get("/")
async def _():
    rtc_config = get_twilio_turn_credentials() if get_space() else None
    html_content = (curr_dir / "index.html").read_text()
    html_content = html_content.replace("__RTC_CONFIGURATION__", json.dumps(rtc_config))
    return HTMLResponse(content=html_content)


@app.post("/input_hook")
async def _(data: InputData):
    body = data.model_dump()
    stream.set_input(data.webrtc_id, body["chatbot"], body["state"])


def audio_to_base64(file_path):
    audio_format = "wav"
    with open(file_path, "rb") as audio_file:
        encoded_audio = base64.b64encode(audio_file.read()).decode("utf-8")
    return f"data:audio/{audio_format};base64,{encoded_audio}"


@app.get("/outputs")
async def _(webrtc_id: str):
    async def output_stream():
        async for output in stream.output_stream(webrtc_id):
            chatbot = output.args[0]
            state = output.args[1]
            data = {
                "message": state[-1],
                "audio": audio_to_base64(chatbot[-1]["content"].value["path"])
                if chatbot[-1]["role"] == "user"
                else None,
            }
            yield f"event: output\ndata: {json.dumps(data)}\n\n"

    return StreamingResponse(output_stream(), media_type="text/event-stream")


if __name__ == "__main__":
    import os

    if (mode := os.getenv("MODE")) == "UI":
        stream.ui.launch(server_port=7860)
    elif mode == "PHONE":
        raise ValueError("Phone mode not supported")
    else:
        import uvicorn

        uvicorn.run(app, host="0.0.0.0", port=7860)
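The `/outputs` route above emits server-sent events. A minimal Python client sketch (assuming the server runs on localhost:7860, the `httpx` library is installed, and `webrtc_id` matches an active connection):

```python
import json

import httpx

with httpx.stream(
    "GET",
    "http://localhost:7860/outputs",
    params={"webrtc_id": "abc123"},  # hypothetical id from an active session
    timeout=None,
) as resp:
    for line in resp.iter_lines():
        if line.startswith("data: "):
            payload = json.loads(line[len("data: "):])
            print(payload["message"])  # {"role": ..., "content": ...}
```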
486
demo/hello_computer/index.html
Normal file
486
demo/hello_computer/index.html
Normal file
@@ -0,0 +1,486 @@
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Hello Computer 💻</title>
    <style>
        body {
            font-family: -apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, sans-serif;
            background-color: #f8f9fa;
            color: #1a1a1a;
            margin: 0;
            padding: 20px;
            height: 100vh;
            box-sizing: border-box;
        }

        .container {
            max-width: 800px;
            margin: 0 auto;
            height: calc(100% - 100px);
        }

        .logo {
            text-align: center;
            margin-bottom: 40px;
        }

        .chat-container {
            background: white;
            border-radius: 8px;
            box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
            padding: 20px;
            height: 90%;
            box-sizing: border-box;
            display: flex;
            flex-direction: column;
        }

        .chat-messages {
            flex-grow: 1;
            overflow-y: auto;
            margin-bottom: 20px;
            padding: 10px;
        }

        .message {
            margin-bottom: 20px;
            padding: 12px;
            border-radius: 8px;
            font-size: 14px;
            line-height: 1.5;
        }

        .message.user {
            background-color: #e9ecef;
            margin-left: 20%;
        }

        .message.assistant {
            background-color: #f1f3f5;
            margin-right: 20%;
        }

        .controls {
            text-align: center;
            margin-top: 20px;
        }

        button {
            background-color: #0066cc;
            color: white;
            border: none;
            padding: 12px 24px;
            font-family: inherit;
            font-size: 14px;
            cursor: pointer;
            transition: all 0.3s;
            border-radius: 4px;
            font-weight: 500;
        }

        button:hover {
            background-color: #0052a3;
        }

        #audio-output {
            display: none;
        }

        .icon-with-spinner {
            display: flex;
            align-items: center;
            justify-content: center;
            gap: 12px;
            min-width: 180px;
        }

        .spinner {
            width: 20px;
            height: 20px;
            border: 2px solid #ffffff;
            border-top-color: transparent;
            border-radius: 50%;
            animation: spin 1s linear infinite;
            flex-shrink: 0;
        }

        @keyframes spin {
            to {
                transform: rotate(360deg);
            }
        }

        .pulse-container {
            display: flex;
            align-items: center;
            justify-content: center;
            gap: 12px;
            min-width: 180px;
        }

        .pulse-circle {
            width: 20px;
            height: 20px;
            border-radius: 50%;
            background-color: #ffffff;
            opacity: 0.2;
            flex-shrink: 0;
            transform: translateX(-0%) scale(var(--audio-level, 1));
            transition: transform 0.1s ease;
        }

        /* Add styles for typing indicator */
        .typing-indicator {
            padding: 8px;
            background-color: #f1f3f5;
            border-radius: 8px;
            margin-bottom: 10px;
            display: none;
        }

        .dots {
            display: inline-flex;
            gap: 4px;
        }

        .dot {
            width: 8px;
            height: 8px;
            background-color: #0066cc;
            border-radius: 50%;
            animation: pulse 1.5s infinite;
            opacity: 0.5;
        }

        .dot:nth-child(2) {
            animation-delay: 0.5s;
        }

        .dot:nth-child(3) {
            animation-delay: 1s;
        }

        @keyframes pulse {

            0%,
            100% {
                opacity: 0.5;
                transform: scale(1);
            }

            50% {
                opacity: 1;
                transform: scale(1.2);
            }
        }

        /* Add styles for toast notifications */
        .toast {
            position: fixed;
            top: 20px;
            left: 50%;
            transform: translateX(-50%);
            padding: 16px 24px;
            border-radius: 4px;
            font-size: 14px;
            z-index: 1000;
            display: none;
            box-shadow: 0 2px 5px rgba(0, 0, 0, 0.2);
        }

        .toast.error {
            background-color: #f44336;
            color: white;
        }

        .toast.warning {
            background-color: #ffd700;
            color: black;
        }
    </style>
</head>

<body>
    <!-- Add toast element after body opening tag -->
    <div id="error-toast" class="toast"></div>
    <div class="container">
        <div class="logo">
            <h1>Hello Computer 💻</h1>
            <h2 style="font-size: 1.2em; color: #666; margin-top: 10px;">Say 'Computer' before asking your question</h2>
        </div>
        <div class="chat-container">
            <div class="chat-messages" id="chat-messages"></div>
            <div class="typing-indicator" id="typing-indicator">
                <div class="dots">
                    <div class="dot"></div>
                    <div class="dot"></div>
                    <div class="dot"></div>
                </div>
            </div>
        </div>
        <div class="controls">
            <button id="start-button">Start Conversation</button>
        </div>
    </div>
    <audio id="audio-output"></audio>

    <script>
        let peerConnection;
        let webrtc_id;
        const startButton = document.getElementById('start-button');
        const chatMessages = document.getElementById('chat-messages');

        let audioLevel = 0;
        let animationFrame;
        let audioContext, analyser, audioSource;
        let messages = [];
        let eventSource;

        function updateButtonState() {
            const button = document.getElementById('start-button');
            if (peerConnection && (peerConnection.connectionState === 'connecting' || peerConnection.connectionState === 'new')) {
                button.innerHTML = `
                    <div class="icon-with-spinner">
                        <div class="spinner"></div>
                        <span>Connecting...</span>
                    </div>
                `;
            } else if (peerConnection && peerConnection.connectionState === 'connected') {
                button.innerHTML = `
                    <div class="pulse-container">
                        <div class="pulse-circle"></div>
                        <span>Stop Conversation</span>
                    </div>
                `;
            } else {
                button.innerHTML = 'Start Conversation';
            }
        }

        function setupAudioVisualization(stream) {
            audioContext = new (window.AudioContext || window.webkitAudioContext)();
            analyser = audioContext.createAnalyser();
            audioSource = audioContext.createMediaStreamSource(stream);
            audioSource.connect(analyser);
            analyser.fftSize = 64;
            const dataArray = new Uint8Array(analyser.frequencyBinCount);

            function updateAudioLevel() {
                analyser.getByteFrequencyData(dataArray);
                const average = Array.from(dataArray).reduce((a, b) => a + b, 0) / dataArray.length;
                audioLevel = average / 255;

                const pulseCircle = document.querySelector('.pulse-circle');
                if (pulseCircle) {
                    pulseCircle.style.setProperty('--audio-level', 1 + audioLevel);
                }

                animationFrame = requestAnimationFrame(updateAudioLevel);
            }
            updateAudioLevel();
        }

        function showError(message) {
            const toast = document.getElementById('error-toast');
            toast.textContent = message;
            toast.className = 'toast error';
            toast.style.display = 'block';

            // Hide toast after 5 seconds
            setTimeout(() => {
                toast.style.display = 'none';
            }, 5000);
        }

        function handleMessage(event) {
            const eventJson = JSON.parse(event.data);
            const typingIndicator = document.getElementById('typing-indicator');

            if (eventJson.type === "error") {
                showError(eventJson.message);
            } else if (eventJson.type === "send_input") {
                fetch('/input_hook', {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json',
                    },
                    body: JSON.stringify({
                        webrtc_id: webrtc_id,
                        chatbot: messages,
                        state: messages
                    })
                });
            } else if (eventJson.type === "log") {
                if (eventJson.data === "pause_detected") {
                    typingIndicator.style.display = 'block';
                    chatMessages.scrollTop = chatMessages.scrollHeight;
                } else if (eventJson.data === "response_starting") {
                    typingIndicator.style.display = 'none';
                }
            }
        }

        async function setupWebRTC() {
            const config = __RTC_CONFIGURATION__;
            peerConnection = new RTCPeerConnection(config);

            const timeoutId = setTimeout(() => {
                const toast = document.getElementById('error-toast');
                toast.textContent = "Connection is taking longer than usual. Are you on a VPN?";
                toast.className = 'toast warning';
                toast.style.display = 'block';

                // Hide warning after 5 seconds
                setTimeout(() => {
                    toast.style.display = 'none';
                }, 5000);
            }, 5000);

            try {
                const stream = await navigator.mediaDevices.getUserMedia({
                    audio: true
                });

                setupAudioVisualization(stream);

                stream.getTracks().forEach(track => {
                    peerConnection.addTrack(track, stream);
                });

                const dataChannel = peerConnection.createDataChannel('text');
                dataChannel.onmessage = handleMessage;

                const offer = await peerConnection.createOffer();
                await peerConnection.setLocalDescription(offer);

                await new Promise((resolve) => {
                    if (peerConnection.iceGatheringState === "complete") {
                        resolve();
                    } else {
                        const checkState = () => {
                            if (peerConnection.iceGatheringState === "complete") {
                                peerConnection.removeEventListener("icegatheringstatechange", checkState);
                                resolve();
                            }
                        };
                        peerConnection.addEventListener("icegatheringstatechange", checkState);
                    }
                });

                peerConnection.addEventListener('connectionstatechange', () => {
                    console.log('connectionstatechange', peerConnection.connectionState);
                    if (peerConnection.connectionState === 'connected') {
                        clearTimeout(timeoutId);
                        const toast = document.getElementById('error-toast');
                        toast.style.display = 'none';
                    }
                    updateButtonState();
                });

                webrtc_id = Math.random().toString(36).substring(7);

                const response = await fetch('/webrtc/offer', {
                    method: 'POST',
                    headers: { 'Content-Type': 'application/json' },
                    body: JSON.stringify({
                        sdp: peerConnection.localDescription.sdp,
                        type: peerConnection.localDescription.type,
                        webrtc_id: webrtc_id
                    })
                });

                const serverResponse = await response.json();

                if (serverResponse.status === 'failed') {
                    showError(serverResponse.meta.error === 'concurrency_limit_reached'
                        ? `Too many connections. Maximum limit is ${serverResponse.meta.limit}`
                        : serverResponse.meta.error);
                    stop();
                    return;
                }

                await peerConnection.setRemoteDescription(serverResponse);

                eventSource = new EventSource('/outputs?webrtc_id=' + webrtc_id);
                eventSource.addEventListener("output", (event) => {
                    const eventJson = JSON.parse(event.data);
                    console.log(eventJson);
                    messages.push(eventJson.message);
                    addMessage(eventJson.message.role, eventJson.audio ?? eventJson.message.content);
                });
            } catch (err) {
                clearTimeout(timeoutId);
                console.error('Error setting up WebRTC:', err);
                showError('Failed to establish connection. Please try again.');
                stop();
            }
        }

        function addMessage(role, content) {
            const messageDiv = document.createElement('div');
            messageDiv.classList.add('message', role);

            if (role === 'user') {
                // Create audio element for user messages
                const audio = document.createElement('audio');
                audio.controls = true;
                audio.src = content;
                messageDiv.appendChild(audio);
            } else {
                // Text content for assistant messages
                messageDiv.textContent = content;
            }

            chatMessages.appendChild(messageDiv);
            chatMessages.scrollTop = chatMessages.scrollHeight;
        }

        function stop() {
            if (eventSource) {
                eventSource.close();
                eventSource = null;
            }

            if (animationFrame) {
                cancelAnimationFrame(animationFrame);
            }
            if (audioContext) {
                audioContext.close();
                audioContext = null;
                analyser = null;
                audioSource = null;
            }
            if (peerConnection) {
                if (peerConnection.getTransceivers) {
                    peerConnection.getTransceivers().forEach(transceiver => {
                        if (transceiver.stop) {
                            transceiver.stop();
                        }
                    });
                }

                if (peerConnection.getSenders) {
                    peerConnection.getSenders().forEach(sender => {
                        if (sender.track && sender.track.stop) sender.track.stop();
                    });
                }
                peerConnection.close();
            }
            updateButtonState();
            audioLevel = 0;
        }

        startButton.addEventListener('click', () => {
            if (!peerConnection || peerConnection.connectionState !== 'connected') {
                setupWebRTC();
            } else {
                stop();
            }
        });
    </script>
</body>

</html>
4
demo/hello_computer/requirements.txt
Normal file
4
demo/hello_computer/requirements.txt
Normal file
@@ -0,0 +1,4 @@
fastrtc[stopword]
python-dotenv
huggingface_hub>=0.29.0
twilio
16
demo/llama_code_editor/README.md
Normal file
16
demo/llama_code_editor/README.md
Normal file
@@ -0,0 +1,16 @@
---
title: Llama Code Editor
emoji: 🦙
colorFrom: indigo
colorTo: pink
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
license: mit
short_description: Create interactive HTML web pages with your voice
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN,
  secret|SAMBANOVA_API_KEY, secret|GROQ_API_KEY]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
45
demo/llama_code_editor/app.py
Normal file
45
demo/llama_code_editor/app.py
Normal file
@@ -0,0 +1,45 @@
from fastapi import FastAPI
from fastapi.responses import RedirectResponse
from fastrtc import Stream
from gradio.utils import get_space

try:
    from demo.llama_code_editor.handler import (
        CodeHandler,
    )
    from demo.llama_code_editor.ui import demo as ui
except (ImportError, ModuleNotFoundError):
    from handler import CodeHandler
    from ui import demo as ui


stream = Stream(
    handler=CodeHandler,
    modality="audio",
    mode="send-receive",
    concurrency_limit=10 if get_space() else None,
    time_limit=90 if get_space() else None,
)

stream.ui = ui

app = FastAPI()


@app.get("/")
async def _():
    url = "/ui" if not get_space() else "https://fastrtc-llama-code-editor.hf.space/ui/"
    return RedirectResponse(url)


if __name__ == "__main__":
    import os

    if (mode := os.getenv("MODE")) == "UI":
        stream.ui.launch(server_port=7860, server_name="0.0.0.0")
    elif mode == "PHONE":
        stream.fastphone(host="0.0.0.0", port=7860)
    else:
        import uvicorn

        uvicorn.run(app, host="0.0.0.0", port=7860)
||||
37
demo/llama_code_editor/assets/sandbox.html
Normal file
37
demo/llama_code_editor/assets/sandbox.html
Normal file
@@ -0,0 +1,37 @@
<div style="
    display: flex;
    flex-direction: column;
    align-items: center;
    justify-content: center;
    min-height: 400px;
    background: linear-gradient(135deg, #f5f7fa 0%, #e4e8ec 100%);
    border-radius: 8px;
    border: 2px dashed #cbd5e1;
    padding: 2rem;
    text-align: center;
    color: #64748b;
    font-family: system-ui, -apple-system, sans-serif;
">
    <div style="
        width: 80px;
        height: 80px;
        margin-bottom: 1.5rem;
        border: 3px solid #cbd5e1;
        border-radius: 12px;
        position: relative;
    ">
        <div style="
            position: absolute;
            top: 50%;
            left: 50%;
            transform: translate(-50%, -50%);
            font-size: 2rem;
        ">📦</div>
    </div>
    <h2 style="
        margin: 0 0 0.5rem 0;
        font-size: 1.5rem;
        font-weight: 600;
        color: #475569;
    ">No Application Created</h2>
</div>
60
demo/llama_code_editor/assets/spinner.html
Normal file
60
demo/llama_code_editor/assets/spinner.html
Normal file
@@ -0,0 +1,60 @@
<div style="
    display: flex;
    flex-direction: column;
    align-items: center;
    justify-content: center;
    min-height: 400px;
    background: linear-gradient(135deg, #f8fafc 0%, #f1f5f9 100%);
    border-radius: 8px;
    padding: 2rem;
    text-align: center;
    font-family: system-ui, -apple-system, sans-serif;
">
    <!-- Spinner container -->
    <div style="
        position: relative;
        width: 64px;
        height: 64px;
        margin-bottom: 1.5rem;
    ">
        <!-- Static ring -->
        <div style="
            position: absolute;
            width: 100%;
            height: 100%;
            border: 4px solid #e2e8f0;
            border-radius: 50%;
        "></div>
        <!-- Animated spinner -->
        <div style="
            position: absolute;
            width: 100%;
            height: 100%;
            border: 4px solid transparent;
            border-top-color: #3b82f6;
            border-radius: 50%;
            animation: spin 1s linear infinite;
        "></div>
    </div>

    <!-- Text content -->
    <h2 style="
        margin: 0 0 0.5rem 0;
        font-size: 1.25rem;
        font-weight: 600;
        color: #475569;
    ">Generating your application...</h2>

    <p style="
        margin: 0;
        font-size: 0.875rem;
        color: #64748b;
    ">This may take a few moments</p>

    <style>
        @keyframes spin {
            0% { transform: rotate(0deg); }
            100% { transform: rotate(360deg); }
        }
    </style>
</div>
73
demo/llama_code_editor/handler.py
Normal file
73
demo/llama_code_editor/handler.py
Normal file
@@ -0,0 +1,73 @@
import base64
import os
import re
from pathlib import Path

import numpy as np
import openai
from dotenv import load_dotenv
from fastrtc import (
    AdditionalOutputs,
    ReplyOnPause,
    audio_to_bytes,
)
from groq import Groq

load_dotenv()

groq_client = Groq(api_key=os.environ.get("GROQ_API_KEY"))

client = openai.OpenAI(
    api_key=os.environ.get("SAMBANOVA_API_KEY"),
    base_url="https://api.sambanova.ai/v1",
)

path = Path(__file__).parent / "assets"

spinner_html = open(path / "spinner.html").read()


system_prompt = "You are an AI coding assistant. Your task is to write single-file HTML applications based on a user's request. Only return the necessary code. Include all necessary imports and styles. You may also be asked to edit your original response."
user_prompt = "Please write a single-file HTML application to fulfill the following request.\nThe message:{user_message}\nCurrent code you have written:{code}"


def extract_html_content(text):
    """
    Extract content including HTML tags.
    """
    match = re.search(r"<!DOCTYPE html>.*?</html>", text, re.DOTALL)
    return match.group(0) if match else None


def display_in_sandbox(code):
    encoded_html = base64.b64encode(code.encode("utf-8")).decode("utf-8")
    data_uri = f"data:text/html;charset=utf-8;base64,{encoded_html}"
    return f'<iframe src="{data_uri}" width="100%" height="600px"></iframe>'


def generate(user_message: tuple[int, np.ndarray], history: list[dict], code: str):
    yield AdditionalOutputs(history, spinner_html)

    text = groq_client.audio.transcriptions.create(
        file=("audio-file.mp3", audio_to_bytes(user_message)),
        model="whisper-large-v3-turbo",
        response_format="verbose_json",
    ).text

    user_msg_formatted = user_prompt.format(user_message=text, code=code)
    history.append({"role": "user", "content": user_msg_formatted})

    response = client.chat.completions.create(
        model="Meta-Llama-3.1-70B-Instruct",
        messages=history,  # type: ignore
        temperature=0.1,
        top_p=0.1,
    )

    output = response.choices[0].message.content
    html_code = extract_html_content(output)
    history.append({"role": "assistant", "content": output})
    yield AdditionalOutputs(history, html_code)


CodeHandler = ReplyOnPause(generate)  # type: ignore
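`extract_html_content` above pulls a complete document out of a chatty LLM reply. A quick illustrative check of that regex (the reply text is made up):

```python
reply = "Sure! <!DOCTYPE html><html><body>Hello</body></html> Anything else?"
assert extract_html_content(reply) == "<!DOCTYPE html><html><body>Hello</body></html>"
```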
5
demo/llama_code_editor/requirements.in
Normal file
5
demo/llama_code_editor/requirements.in
Normal file
@@ -0,0 +1,5 @@
fastrtc[vad]
groq
openai
python-dotenv
twilio
295
demo/llama_code_editor/requirements.txt
Normal file
295
demo/llama_code_editor/requirements.txt
Normal file
@@ -0,0 +1,295 @@
# This file was autogenerated by uv via the following command:
#    uv pip compile demo/llama_code_editor/requirements.in -o demo/llama_code_editor/requirements.txt
aiofiles==23.2.1
    # via gradio
aiohappyeyeballs==2.4.6
    # via aiohttp
aiohttp==3.11.12
    # via
    #   aiohttp-retry
    #   twilio
aiohttp-retry==2.9.1
    # via twilio
aioice==0.9.0
    # via aiortc
aiortc==1.10.1
    # via fastrtc
aiosignal==1.3.2
    # via aiohttp
annotated-types==0.7.0
    # via pydantic
anyio==4.6.2.post1
    # via
    #   gradio
    #   groq
    #   httpx
    #   openai
    #   starlette
attrs==25.1.0
    # via aiohttp
audioread==3.0.1
    # via librosa
av==12.3.0
    # via aiortc
certifi==2024.8.30
    # via
    #   httpcore
    #   httpx
    #   requests
cffi==1.17.1
    # via
    #   aiortc
    #   cryptography
    #   pylibsrtp
    #   soundfile
charset-normalizer==3.4.0
    # via requests
click==8.1.7
    # via
    #   typer
    #   uvicorn
coloredlogs==15.0.1
    # via onnxruntime
cryptography==43.0.3
    # via
    #   aiortc
    #   pyopenssl
decorator==5.1.1
    # via librosa
distro==1.9.0
    # via
    #   groq
    #   openai
dnspython==2.7.0
    # via aioice
fastapi==0.115.5
    # via gradio
fastrtc==0.0.2.post4
    # via -r demo/llama_code_editor/requirements.in
ffmpy==0.4.0
    # via gradio
filelock==3.16.1
    # via huggingface-hub
flatbuffers==24.3.25
    # via onnxruntime
frozenlist==1.5.0
    # via
    #   aiohttp
    #   aiosignal
fsspec==2024.10.0
    # via
    #   gradio-client
    #   huggingface-hub
google-crc32c==1.6.0
    # via aiortc
gradio==5.16.0
    # via fastrtc
gradio-client==1.7.0
    # via gradio
groq==0.18.0
    # via -r demo/llama_code_editor/requirements.in
h11==0.14.0
    # via
    #   httpcore
    #   uvicorn
httpcore==1.0.7
    # via httpx
httpx==0.27.2
    # via
    #   gradio
    #   gradio-client
    #   groq
    #   openai
    #   safehttpx
huggingface-hub==0.28.1
    # via
    #   gradio
    #   gradio-client
humanfriendly==10.0
    # via coloredlogs
idna==3.10
    # via
    #   anyio
    #   httpx
    #   requests
    #   yarl
ifaddr==0.2.0
    # via aioice
jinja2==3.1.4
    # via gradio
jiter==0.7.1
    # via openai
joblib==1.4.2
    # via
    #   librosa
    #   scikit-learn
lazy-loader==0.4
    # via librosa
librosa==0.10.2.post1
    # via fastrtc
llvmlite==0.43.0
    # via numba
markdown-it-py==3.0.0
    # via rich
markupsafe==2.1.5
    # via
    #   gradio
    #   jinja2
mdurl==0.1.2
    # via markdown-it-py
mpmath==1.3.0
    # via sympy
msgpack==1.1.0
    # via librosa
multidict==6.1.0
    # via
    #   aiohttp
    #   yarl
numba==0.60.0
    # via librosa
numpy==2.0.2
    # via
    #   gradio
    #   librosa
    #   numba
    #   onnxruntime
    #   pandas
    #   scikit-learn
    #   scipy
    #   soxr
onnxruntime==1.20.1
    # via fastrtc
openai==1.54.4
    # via -r demo/llama_code_editor/requirements.in
orjson==3.10.11
    # via gradio
packaging==24.2
    # via
    #   gradio
    #   gradio-client
    #   huggingface-hub
    #   lazy-loader
    #   onnxruntime
    #   pooch
pandas==2.2.3
    # via gradio
pillow==11.0.0
    # via gradio
platformdirs==4.3.6
    # via pooch
pooch==1.8.2
    # via librosa
propcache==0.2.1
    # via
    #   aiohttp
    #   yarl
protobuf==5.28.3
    # via onnxruntime
pycparser==2.22
    # via cffi
pydantic==2.9.2
    # via
    #   fastapi
    #   gradio
    #   groq
    #   openai
pydantic-core==2.23.4
    # via pydantic
pydub==0.25.1
    # via gradio
pyee==12.1.1
    # via aiortc
pygments==2.18.0
    # via rich
pyjwt==2.10.1
    # via twilio
pylibsrtp==0.10.0
    # via aiortc
pyopenssl==24.2.1
    # via aiortc
python-dateutil==2.9.0.post0
    # via pandas
python-dotenv==1.0.1
    # via -r demo/llama_code_editor/requirements.in
python-multipart==0.0.20
    # via gradio
pytz==2024.2
    # via pandas
pyyaml==6.0.2
    # via
    #   gradio
    #   huggingface-hub
requests==2.32.3
    # via
    #   huggingface-hub
    #   pooch
    #   twilio
rich==13.9.4
    # via typer
ruff==0.9.6
    # via gradio
safehttpx==0.1.6
    # via gradio
scikit-learn==1.5.2
    # via librosa
scipy==1.14.1
    # via
    #   librosa
    #   scikit-learn
semantic-version==2.10.0
    # via gradio
shellingham==1.5.4
    # via typer
six==1.16.0
    # via python-dateutil
sniffio==1.3.1
    # via
    #   anyio
    #   groq
    #   httpx
    #   openai
soundfile==0.12.1
    # via librosa
soxr==0.5.0.post1
    # via librosa
starlette==0.41.3
    # via
    #   fastapi
    #   gradio
sympy==1.13.3
    # via onnxruntime
threadpoolctl==3.5.0
    # via scikit-learn
tomlkit==0.12.0
    # via gradio
tqdm==4.67.0
    # via
    #   huggingface-hub
    #   openai
twilio==9.4.5
    # via -r demo/llama_code_editor/requirements.in
typer==0.13.1
    # via gradio
typing-extensions==4.12.2
    # via
    #   fastapi
    #   gradio
    #   gradio-client
    #   groq
    #   huggingface-hub
    #   librosa
    #   openai
    #   pydantic
    #   pydantic-core
    #   pyee
    #   typer
tzdata==2024.2
    # via pandas
urllib3==2.2.3
    # via requests
uvicorn==0.32.0
    # via gradio
websockets==12.0
    # via gradio-client
yarl==1.18.3
    # via aiohttp
75
demo/llama_code_editor/ui.py
Normal file
75
demo/llama_code_editor/ui.py
Normal file
@@ -0,0 +1,75 @@
from pathlib import Path

import gradio as gr
from dotenv import load_dotenv
from fastrtc import WebRTC, get_twilio_turn_credentials
from gradio.utils import get_space

try:
    from demo.llama_code_editor.handler import (
        CodeHandler,
        display_in_sandbox,
        system_prompt,
    )
except (ImportError, ModuleNotFoundError):
    from handler import CodeHandler, display_in_sandbox, system_prompt

load_dotenv()

path = Path(__file__).parent / "assets"

with gr.Blocks(css=".code-component {max-height: 500px !important}") as demo:
    history = gr.State([{"role": "system", "content": system_prompt}])
    with gr.Row():
        with gr.Column(scale=1):
            gr.HTML(
                """
            <h1 style='text-align: center'>
            Llama Code Editor
            </h1>
            <h2 style='text-align: center'>
            Powered by SambaNova and Gradio-WebRTC ⚡️
            </h2>
            <p style='text-align: center'>
            Create and edit single-file HTML applications with just your voice!
            </p>
            <p style='text-align: center'>
            Each conversation is limited to 90 seconds. Once the time limit is up you can rejoin the conversation.
            </p>
            """
            )
            webrtc = WebRTC(
                rtc_configuration=get_twilio_turn_credentials()
                if get_space()
                else None,
                mode="send",
                modality="audio",
            )
        with gr.Column(scale=10):
            with gr.Tabs():
                with gr.Tab("Sandbox"):
                    sandbox = gr.HTML(value=open(path / "sandbox.html").read())
                with gr.Tab("Code"):
                    code = gr.Code(
                        language="html",
                        max_lines=50,
                        interactive=False,
                        elem_classes="code-component",
                    )
                with gr.Tab("Chat"):
                    cb = gr.Chatbot(type="messages")

    webrtc.stream(
        CodeHandler,
        inputs=[webrtc, history, code],
        outputs=[webrtc],
        time_limit=90 if get_space() else None,
        concurrency_limit=10 if get_space() else None,
    )
    webrtc.on_additional_outputs(
        lambda history, code: (history, code, history), outputs=[history, code, cb]
    )
    code.change(display_in_sandbox, code, sandbox, queue=False)

if __name__ == "__main__":
    demo.launch()
15
demo/llm_voice_chat/README.md
Normal file
15
demo/llm_voice_chat/README.md
Normal file
@@ -0,0 +1,15 @@
---
title: LLM Voice Chat
emoji: 💻
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
license: mit
short_description: Talk to an LLM with ElevenLabs
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN, secret|GROQ_API_KEY, secret|ELEVENLABS_API_KEY]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
15
demo/llm_voice_chat/README_gradio.md
Normal file
15
demo/llm_voice_chat/README_gradio.md
Normal file
@@ -0,0 +1,15 @@
---
title: LLM Voice Chat (Gradio)
emoji: 💻
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.16.0
app_file: app.py
pinned: false
license: mit
short_description: LLM Voice by ElevenLabs (Gradio)
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN, secret|GROQ_API_KEY, secret|ELEVENLABS_API_KEY]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
97
demo/llm_voice_chat/app.py
Normal file
97
demo/llm_voice_chat/app.py
Normal file
@@ -0,0 +1,97 @@
import os
import time

import gradio as gr
import numpy as np
from dotenv import load_dotenv
from elevenlabs import ElevenLabs
from fastapi import FastAPI
from fastrtc import (
    AdditionalOutputs,
    ReplyOnPause,
    Stream,
    get_stt_model,
    get_twilio_turn_credentials,
)
from gradio.utils import get_space
from groq import Groq
from numpy.typing import NDArray

load_dotenv()
groq_client = Groq()
tts_client = ElevenLabs(api_key=os.getenv("ELEVENLABS_API_KEY"))
stt_model = get_stt_model()


# See "Talk to Claude" in Cookbook for an example of how to keep
# track of the chat history.
def response(
    audio: tuple[int, NDArray[np.int16 | np.float32]],
    chatbot: list[dict] | None = None,
):
    chatbot = chatbot or []
    messages = [{"role": d["role"], "content": d["content"]} for d in chatbot]
    start = time.time()
    text = stt_model.stt(audio)
    print("transcription", time.time() - start)
    print("prompt", text)
    chatbot.append({"role": "user", "content": text})
    yield AdditionalOutputs(chatbot)
    messages.append({"role": "user", "content": text})
    response_text = (
        groq_client.chat.completions.create(
            model="llama-3.1-8b-instant",
            max_tokens=200,
            messages=messages,  # type: ignore
        )
        .choices[0]
        .message.content
    )

    chatbot.append({"role": "assistant", "content": response_text})

    for i, chunk in enumerate(
        tts_client.text_to_speech.convert_as_stream(
            text=response_text,  # type: ignore
            voice_id="JBFqnCBsd6RMkjVDRZzb",
            model_id="eleven_multilingual_v2",
            output_format="pcm_24000",
        )
    ):
        if i == 0:
            yield AdditionalOutputs(chatbot)
        audio_array = np.frombuffer(chunk, dtype=np.int16).reshape(1, -1)
        yield (24000, audio_array)


chatbot = gr.Chatbot(type="messages")
stream = Stream(
    modality="audio",
    mode="send-receive",
    handler=ReplyOnPause(response, input_sample_rate=16000),
    additional_outputs_handler=lambda a, b: b,
    additional_inputs=[chatbot],
    additional_outputs=[chatbot],
    rtc_configuration=get_twilio_turn_credentials() if get_space() else None,
    concurrency_limit=5 if get_space() else None,
    time_limit=90 if get_space() else None,
    ui_args={"title": "LLM Voice Chat (Powered by Groq, ElevenLabs, and WebRTC ⚡️)"},
)

# Mount the stream's auto-generated UI on the FastAPI app
# so the UI doesn't have to be built manually
app = FastAPI()
app = gr.mount_gradio_app(app, stream.ui, path="/")


if __name__ == "__main__":
    import os

    os.environ["GRADIO_SSR_MODE"] = "false"

    if (mode := os.getenv("MODE")) == "UI":
        stream.ui.launch(server_port=7860)
    elif mode == "PHONE":
        stream.fastphone(host="0.0.0.0", port=7860)
    else:
        stream.ui.launch(server_port=7860)
6
demo/llm_voice_chat/requirements.txt
Normal file
@@ -0,0 +1,6 @@
fastrtc[stopword]
python-dotenv
openai
twilio
groq
elevenlabs
16
demo/moonshine_live/README.md
Normal file
@@ -0,0 +1,16 @@
---
title: Moonshine Live Transcription
emoji: 🌕
colorFrom: purple
colorTo: red
sdk: gradio
sdk_version: 5.17.0
app_file: app.py
pinned: false
license: mit
short_description: Real-time captions with Moonshine ONNX
tags: [webrtc, websocket, gradio, secret|TWILIO_ACCOUNT_SID, secret|TWILIO_AUTH_TOKEN]
models: [onnx-community/moonshine-base-ONNX, UsefulSensors/moonshine-base]
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
73
demo/moonshine_live/app.py
Normal file
@@ -0,0 +1,73 @@
from functools import lru_cache
from typing import Generator, Literal

import gradio as gr
import numpy as np
from dotenv import load_dotenv
from fastrtc import (
    AdditionalOutputs,
    ReplyOnPause,
    Stream,
    audio_to_float32,
    get_twilio_turn_credentials,
)
from moonshine_onnx import MoonshineOnnxModel, load_tokenizer
from numpy.typing import NDArray

load_dotenv()


@lru_cache(maxsize=None)
def load_moonshine(
    model_name: Literal["moonshine/base", "moonshine/tiny"],
) -> MoonshineOnnxModel:
    return MoonshineOnnxModel(model_name=model_name)


tokenizer = load_tokenizer()


def stt(
    audio: tuple[int, NDArray[np.int16 | np.float32]],
    model_name: Literal["moonshine/base", "moonshine/tiny"],
    captions: str,
) -> Generator[AdditionalOutputs, None, None]:
    moonshine = load_moonshine(model_name)
    sr, audio_np = audio  # type: ignore
    if audio_np.dtype == np.int16:
        audio_np = audio_to_float32(audio)
    if audio_np.ndim == 1:
        audio_np = audio_np.reshape(1, -1)
    tokens = moonshine.generate(audio_np)
    yield AdditionalOutputs(
        (captions + "\n" + tokenizer.decode_batch(tokens)[0]).strip()
    )


captions = gr.Textbox(label="Captions")
stream = Stream(
    ReplyOnPause(stt, input_sample_rate=16000),
    modality="audio",
    mode="send",
    ui_args={
        "title": "Live Captions by Moonshine",
        "icon": "default-favicon.ico",
        "icon_button_color": "#5c5c5c",
        "pulse_color": "#a7c6fc",
        "icon_radius": 0,
    },
    rtc_configuration=get_twilio_turn_credentials(),
    additional_inputs=[
        gr.Radio(
            choices=["moonshine/base", "moonshine/tiny"],
            value="moonshine/base",
            label="Model",
        ),
        captions,
    ],
    additional_outputs=[captions],
    additional_outputs_handler=lambda prev, current: (prev + "\n" + current).strip(),
)

if __name__ == "__main__":
    stream.ui.launch()
BIN
demo/moonshine_live/default-favicon.ico
Normal file
Binary file not shown.
After Width: | Height: | Size: 6.4 KiB
3
demo/moonshine_live/requirements.txt
Normal file
@@ -0,0 +1,3 @@
fastrtc[vad]
useful-moonshine-onnx@git+https://git@github.com/usefulsensors/moonshine.git#subdirectory=moonshine-onnx
twilio
74
demo/nextjs_voice_chat/README.md
Normal file
@@ -0,0 +1,74 @@
# FastRTC POC

A simple proof of concept for a fast, real-time voice chat application using FastAPI and FastRTC, by [rohanprichard](https://github.com/rohanprichard). I wanted an example built with more production-oriented tooling, rather than just Gradio.

## Setup

1. Set your API keys in an `.env` file, based on the `.env.example` file
2. Create a virtual environment and install the dependencies

```bash
python3 -m venv env
source env/bin/activate
pip install -r requirements.txt
```

3. Run the server

```bash
./run.sh
```

4. Navigate into the frontend directory in another terminal

```bash
cd frontend/fastrtc-demo
```

5. Run the frontend

```bash
npm install
npm run dev
```

6. Go to the URL and click the microphone icon to start chatting!

7. Reset the chat by clicking the trash button on the bottom right

## Notes

You can skip installing the TTS and STT requirements by removing `[tts, stt]` from the specifier in the `requirements.txt` file.

- The STT currently uses the ElevenLabs API.
- The LLM currently uses the OpenAI API.
- The TTS currently uses the ElevenLabs API.
- The VAD currently uses the Silero VAD model.
- You may need to install ffmpeg if STT raises errors.

The prompt can be changed in the `backend/server.py` file and modified as you like.

### Audio Parameters

#### AlgoOptions

- **audio_chunk_duration**: Length of audio chunks in seconds. Smaller values allow for faster processing but may be less accurate.
- **started_talking_threshold**: If a chunk contains more than this many seconds of speech, the system considers that the user has started talking.
- **speech_threshold**: After the user has started speaking, if a chunk contains less than this many seconds of speech, the system considers that the user has paused.

#### SileroVadOptions

- **threshold**: Speech probability threshold (0.0-1.0). Values above this are considered speech. Higher values are stricter.
- **min_speech_duration_ms**: Speech segments shorter than this (in milliseconds) are filtered out.
- **min_silence_duration_ms**: The system waits for this duration of silence (in milliseconds) before considering speech to be finished.
- **speech_pad_ms**: Padding added to both ends of detected speech segments to prevent cutting off words.
- **max_speech_duration_s**: Maximum allowed duration for a speech segment in seconds. Prevents indefinite listening.

### Tuning Recommendations

See the configuration sketch after this list for how these options fit together.

- If the AI interrupts you too early:
  - Increase `min_silence_duration_ms`
  - Increase `speech_threshold`
  - Increase `speech_pad_ms`

- If the AI is slow to respond after you finish speaking:
  - Decrease `min_silence_duration_ms`
  - Decrease `speech_threshold`

- If the system fails to detect some speech:
  - Lower the `threshold` value
  - Decrease `started_talking_threshold`

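For orientation, here is a minimal sketch of how these options are wired into `ReplyOnPause` (the `echo` handler is a hypothetical placeholder; the values mirror what `backend/server.py` in this demo actually ships, so treat them as a starting point rather than recommendations):

```python
from fastrtc import AlgoOptions, ReplyOnPause, SileroVadOptions, Stream


def echo(audio):
    # Hypothetical handler: a real one would run STT/LLM/TTS and yield
    # (sample_rate, ndarray) audio chunks back to the caller.
    yield audio


stream = Stream(
    ReplyOnPause(
        echo,
        algo_options=AlgoOptions(
            audio_chunk_duration=0.5,       # seconds of audio per analyzed chunk
            started_talking_threshold=0.1,  # speech needed to count as "started talking"
            speech_threshold=0.03,          # less speech than this per chunk => pause
        ),
        model_options=SileroVadOptions(
            threshold=0.75,                 # raise to be stricter about what counts as speech
            min_silence_duration_ms=1500,   # raise if the AI cuts you off mid-sentence
            speech_pad_ms=400,              # padding so words are not clipped
        ),
    ),
    modality="audio",
    mode="send-receive",
)
```

Adjusting the values above per the tuning recommendations is usually enough; the handler itself does not need to change.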
## Credits

Credit for the UI components goes to Shadcn, Aceternity UI, and Kokonut UI.
7
demo/nextjs_voice_chat/backend/env.py
Normal file
@@ -0,0 +1,7 @@
import os

from dotenv import load_dotenv

load_dotenv()

LLM_API_KEY = os.getenv("LLM_API_KEY")
ELEVENLABS_API_KEY = os.getenv("ELEVENLABS_API_KEY")
129
demo/nextjs_voice_chat/backend/server.py
Normal file
@@ -0,0 +1,129 @@
import logging
import time

import fastapi
import numpy as np
from elevenlabs import VoiceSettings
from elevenlabs.client import ElevenLabs
from fastapi.middleware.cors import CORSMiddleware
from fastrtc import AlgoOptions, ReplyOnPause, SileroVadOptions, Stream
from fastrtc.utils import audio_to_bytes
from openai import OpenAI

from .env import ELEVENLABS_API_KEY, LLM_API_KEY


sys_prompt = """
You are a helpful assistant. You are witty, engaging and fun. You love being interactive with the user.
You can also add minimal utterances like 'uh-huh' or 'mm-hmm' to make the conversation more natural. However, only vocalizations are allowed, no actions or other non-vocal sounds.
Begin the conversation with a self-deprecating joke like 'I'm not sure if I'm ready for this...' or 'I bet you already regret clicking that button...'
"""

messages = [{"role": "system", "content": sys_prompt}]

openai_client = OpenAI(api_key=LLM_API_KEY)

elevenlabs_client = ElevenLabs(api_key=ELEVENLABS_API_KEY)

logging.basicConfig(level=logging.INFO)


def echo(audio):
    stt_time = time.time()

    logging.info("Performing STT")

    transcription = elevenlabs_client.speech_to_text.convert(
        file=audio_to_bytes(audio),
        model_id="scribe_v1",
        tag_audio_events=False,
        language_code="eng",
        diarize=False,
    )
    prompt = transcription.text
    if prompt == "":
        logging.info("STT returned empty string")
        return
    logging.info(f"STT response: {prompt}")

    messages.append({"role": "user", "content": prompt})

    logging.info(f"STT took {time.time() - stt_time} seconds")

    llm_time = time.time()

    def text_stream():
        global full_response
        full_response = ""

        response = openai_client.chat.completions.create(
            model="gpt-3.5-turbo", messages=messages, max_tokens=200, stream=True
        )

        for chunk in response:
            if chunk.choices[0].finish_reason == "stop":
                break
            if chunk.choices[0].delta.content:
                full_response += chunk.choices[0].delta.content
                yield chunk.choices[0].delta.content

    audio_stream = elevenlabs_client.generate(
        text=text_stream(),
        voice="Rachel",  # Cassidy is also really good
        voice_settings=VoiceSettings(
            similarity_boost=0.9, stability=0.6, style=0.4, speed=1
        ),
        model="eleven_multilingual_v2",
        output_format="pcm_24000",
        stream=True,
    )

    for audio_chunk in audio_stream:
        audio_array = (
            np.frombuffer(audio_chunk, dtype=np.int16).astype(np.float32) / 32768.0
        )
        yield (24000, audio_array)

    messages.append({"role": "assistant", "content": full_response + " "})
    logging.info(f"LLM response: {full_response}")
    logging.info(f"LLM took {time.time() - llm_time} seconds")


stream = Stream(
    ReplyOnPause(
        echo,
        algo_options=AlgoOptions(
            audio_chunk_duration=0.5,
            started_talking_threshold=0.1,
            speech_threshold=0.03,
        ),
        model_options=SileroVadOptions(
            threshold=0.75,
            min_speech_duration_ms=250,
            min_silence_duration_ms=1500,
            speech_pad_ms=400,
            max_speech_duration_s=15,
        ),
    ),
    modality="audio",
    mode="send-receive",
)

app = fastapi.FastAPI()

app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

stream.mount(app)


@app.get("/reset")
async def reset():
    global messages
    logging.info("Resetting chat")
    messages = [{"role": "system", "content": sys_prompt}]
    return {"status": "success"}
41
demo/nextjs_voice_chat/frontend/fastrtc-demo/.gitignore
vendored
Normal file
@@ -0,0 +1,41 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.*
.yarn/*
!.yarn/patches
!.yarn/plugins
!.yarn/releases
!.yarn/versions

# testing
/coverage

# next.js
/.next/
/out/

# production
/build

# misc
.DS_Store
*.pem

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*

# env files (can opt-in for committing if needed)
.env*

# vercel
.vercel

# typescript
*.tsbuildinfo
next-env.d.ts
36
demo/nextjs_voice_chat/frontend/fastrtc-demo/README.md
Normal file
@@ -0,0 +1,36 @@
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/app/api-reference/cli/create-next-app).

## Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `app/page.tsx`. The page auto-updates as you edit the file.

This project uses [`next/font`](https://nextjs.org/docs/app/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/app/building-your-application/deploying) for more details.
BIN
demo/nextjs_voice_chat/frontend/fastrtc-demo/app/favicon.ico
Normal file
Binary file not shown.
After Width: | Height: | Size: 25 KiB
130
demo/nextjs_voice_chat/frontend/fastrtc-demo/app/globals.css
Normal file
@@ -0,0 +1,130 @@
@import "tailwindcss";

@plugin "tailwindcss-animate";

@custom-variant dark (&:is(.dark *));

@theme inline {
  --color-background: var(--background);
  --color-foreground: var(--foreground);
  --font-sans: var(--font-geist-sans);
  --font-mono: var(--font-geist-mono);
  --color-sidebar-ring: var(--sidebar-ring);
  --color-sidebar-border: var(--sidebar-border);
  --color-sidebar-accent-foreground: var(--sidebar-accent-foreground);
  --color-sidebar-accent: var(--sidebar-accent);
  --color-sidebar-primary-foreground: var(--sidebar-primary-foreground);
  --color-sidebar-primary: var(--sidebar-primary);
  --color-sidebar-foreground: var(--sidebar-foreground);
  --color-sidebar: var(--sidebar);
  --color-chart-5: var(--chart-5);
  --color-chart-4: var(--chart-4);
  --color-chart-3: var(--chart-3);
  --color-chart-2: var(--chart-2);
  --color-chart-1: var(--chart-1);
  --color-ring: var(--ring);
  --color-input: var(--input);
  --color-border: var(--border);
  --color-destructive-foreground: var(--destructive-foreground);
  --color-destructive: var(--destructive);
  --color-accent-foreground: var(--accent-foreground);
  --color-accent: var(--accent);
  --color-muted-foreground: var(--muted-foreground);
  --color-muted: var(--muted);
  --color-secondary-foreground: var(--secondary-foreground);
  --color-secondary: var(--secondary);
  --color-primary-foreground: var(--primary-foreground);
  --color-primary: var(--primary);
  --color-popover-foreground: var(--popover-foreground);
  --color-popover: var(--popover);
  --color-card-foreground: var(--card-foreground);
  --color-card: var(--card);
  --radius-sm: calc(var(--radius) - 4px);
  --radius-md: calc(var(--radius) - 2px);
  --radius-lg: var(--radius);
  --radius-xl: calc(var(--radius) + 4px);
}

:root {
  --background: oklch(1 0 0);
  --foreground: oklch(0.129 0.042 264.695);
  --card: oklch(1 0 0);
  --card-foreground: oklch(0.129 0.042 264.695);
  --popover: oklch(1 0 0);
  --popover-foreground: oklch(0.129 0.042 264.695);
  --primary: oklch(0.208 0.042 265.755);
  --primary-foreground: oklch(0.984 0.003 247.858);
  --secondary: oklch(0.968 0.007 247.896);
  --secondary-foreground: oklch(0.208 0.042 265.755);
  --muted: oklch(0.968 0.007 247.896);
  --muted-foreground: oklch(0.554 0.046 257.417);
  --accent: oklch(0.968 0.007 247.896);
  --accent-foreground: oklch(0.208 0.042 265.755);
  --destructive: oklch(0.577 0.245 27.325);
  --destructive-foreground: oklch(0.577 0.245 27.325);
  --border: oklch(0.929 0.013 255.508);
  --input: oklch(0.929 0.013 255.508);
  --ring: oklch(0.704 0.04 256.788);
  --chart-1: oklch(0.646 0.222 41.116);
  --chart-2: oklch(0.6 0.118 184.704);
  --chart-3: oklch(0.398 0.07 227.392);
  --chart-4: oklch(0.828 0.189 84.429);
  --chart-5: oklch(0.769 0.188 70.08);
  --radius: 0.625rem;
  --sidebar: oklch(0.984 0.003 247.858);
  --sidebar-foreground: oklch(0.129 0.042 264.695);
  --sidebar-primary: oklch(0.208 0.042 265.755);
  --sidebar-primary-foreground: oklch(0.984 0.003 247.858);
  --sidebar-accent: oklch(0.968 0.007 247.896);
  --sidebar-accent-foreground: oklch(0.208 0.042 265.755);
  --sidebar-border: oklch(0.929 0.013 255.508);
  --sidebar-ring: oklch(0.704 0.04 256.788);
}

.dark {
  --background: oklch(0.129 0.042 264.695);
  --foreground: oklch(0.984 0.003 247.858);
  --card: oklch(0.129 0.042 264.695);
  --card-foreground: oklch(0.984 0.003 247.858);
  --popover: oklch(0.129 0.042 264.695);
  --popover-foreground: oklch(0.984 0.003 247.858);
  --primary: oklch(0.984 0.003 247.858);
  --primary-foreground: oklch(0.208 0.042 265.755);
  --secondary: oklch(0.279 0.041 260.031);
  --secondary-foreground: oklch(0.984 0.003 247.858);
  --muted: oklch(0.279 0.041 260.031);
  --muted-foreground: oklch(0.704 0.04 256.788);
  --accent: oklch(0.279 0.041 260.031);
  --accent-foreground: oklch(0.984 0.003 247.858);
  --destructive: oklch(0.396 0.141 25.723);
  --destructive-foreground: oklch(0.637 0.237 25.331);
  --border: oklch(0.279 0.041 260.031);
  --input: oklch(0.279 0.041 260.031);
  --ring: oklch(0.446 0.043 257.281);
  --chart-1: oklch(0.488 0.243 264.376);
  --chart-2: oklch(0.696 0.17 162.48);
  --chart-3: oklch(0.769 0.188 70.08);
  --chart-4: oklch(0.627 0.265 303.9);
  --chart-5: oklch(0.645 0.246 16.439);
  --sidebar: oklch(0.208 0.042 265.755);
  --sidebar-foreground: oklch(0.984 0.003 247.858);
  --sidebar-primary: oklch(0.488 0.243 264.376);
  --sidebar-primary-foreground: oklch(0.984 0.003 247.858);
  --sidebar-accent: oklch(0.279 0.041 260.031);
  --sidebar-accent-foreground: oklch(0.984 0.003 247.858);
  --sidebar-border: oklch(0.279 0.041 260.031);
  --sidebar-ring: oklch(0.446 0.043 257.281);
}

@layer base {
  * {
    @apply border-border outline-ring/50;
  }
  body {
    @apply bg-background text-foreground;
  }
}

.no-transitions * {
  transition: none !important;
}
44
demo/nextjs_voice_chat/frontend/fastrtc-demo/app/layout.tsx
Normal file
@@ -0,0 +1,44 @@
import type { Metadata } from "next";
import { Geist, Geist_Mono } from "next/font/google";
import "./globals.css";
import { ThemeProvider } from "@/components/theme-provider";
import { ThemeTransition } from "@/components/ui/theme-transition";

const geistSans = Geist({
  variable: "--font-geist-sans",
  subsets: ["latin"],
});

const geistMono = Geist_Mono({
  variable: "--font-geist-mono",
  subsets: ["latin"],
});

export const metadata: Metadata = {
  title: "FastRTC Demo",
  description: "Interactive WebRTC demo with audio visualization",
};

export default function RootLayout({
  children,
}: Readonly<{
  children: React.ReactNode;
}>) {
  return (
    <html lang="en" suppressHydrationWarning>
      <body
        className={`${geistSans.variable} ${geistMono.variable} antialiased`}
      >
        <ThemeProvider
          attribute="class"
          defaultTheme="dark"
          enableSystem
          disableTransitionOnChange
        >
          {children}
          <ThemeTransition />
        </ThemeProvider>
      </body>
    </html>
  );
}
16
demo/nextjs_voice_chat/frontend/fastrtc-demo/app/page.tsx
Normal file
@@ -0,0 +1,16 @@
import { BackgroundCircleProvider } from "@/components/background-circle-provider";
import { ThemeToggle } from "@/components/ui/theme-toggle";
import { ResetChat } from "@/components/ui/reset-chat";

export default function Home() {
  return (
    <div className="flex flex-col items-center justify-center h-screen">
      <BackgroundCircleProvider />
      <div className="absolute top-4 right-4 z-10">
        <ThemeToggle />
      </div>
      <div className="absolute bottom-4 right-4 z-10">
        <ResetChat />
      </div>
    </div>
  );
}
21
demo/nextjs_voice_chat/frontend/fastrtc-demo/components.json
Normal file
@@ -0,0 +1,21 @@
{
  "$schema": "https://ui.shadcn.com/schema.json",
  "style": "new-york",
  "rsc": true,
  "tsx": true,
  "tailwind": {
    "config": "",
    "css": "app/globals.css",
    "baseColor": "slate",
    "cssVariables": true,
    "prefix": ""
  },
  "aliases": {
    "components": "@/components",
    "utils": "@/lib/utils",
    "ui": "@/components/ui",
    "lib": "@/lib",
    "hooks": "@/hooks"
  },
  "iconLibrary": "lucide"
}
123
demo/nextjs_voice_chat/frontend/fastrtc-demo/components/background-circle-provider.tsx
Normal file
@@ -0,0 +1,123 @@
"use client"

import { useState, useEffect, useRef, useCallback } from "react";
import { BackgroundCircles } from "@/components/ui/background-circles";
import { AIVoiceInput } from "@/components/ui/ai-voice-input";
import { WebRTCClient } from "@/lib/webrtc-client";

export function BackgroundCircleProvider() {
  const [currentVariant, setCurrentVariant] =
    useState<keyof typeof COLOR_VARIANTS>("octonary");
  const [isConnected, setIsConnected] = useState(false);
  const [webrtcClient, setWebrtcClient] = useState<WebRTCClient | null>(null);
  const [audioLevel, setAudioLevel] = useState(0);
  const audioRef = useRef<HTMLAudioElement>(null);

  // Memoize callbacks to prevent recreation on each render
  const handleConnected = useCallback(() => setIsConnected(true), []);
  const handleDisconnected = useCallback(() => setIsConnected(false), []);

  const handleAudioStream = useCallback((stream: MediaStream) => {
    if (audioRef.current) {
      audioRef.current.srcObject = stream;
    }
  }, []);

  const handleAudioLevel = useCallback((level: number) => {
    // Apply some smoothing to the audio level
    setAudioLevel(prev => prev * 0.7 + level * 0.3);
  }, []);

  // Get all available variants
  const variants = Object.keys(
    COLOR_VARIANTS
  ) as (keyof typeof COLOR_VARIANTS)[];

  // Function to change to the next color variant
  const changeVariant = () => {
    const currentIndex = variants.indexOf(currentVariant);
    const nextVariant = variants[(currentIndex + 1) % variants.length];
    setCurrentVariant(nextVariant);
  };

  useEffect(() => {
    // Initialize WebRTC client with memoized callbacks
    const client = new WebRTCClient({
      onConnected: handleConnected,
      onDisconnected: handleDisconnected,
      onAudioStream: handleAudioStream,
      onAudioLevel: handleAudioLevel
    });
    setWebrtcClient(client);

    return () => {
      client.disconnect();
    };
  }, [handleConnected, handleDisconnected, handleAudioStream, handleAudioLevel]);

  const handleStart = () => {
    webrtcClient?.connect();
  };

  const handleStop = () => {
    webrtcClient?.disconnect();
  };

  return (
    <div
      className="relative w-full h-full"
      onClick={changeVariant} // Add click handler to change color
    >
      <BackgroundCircles
        variant={currentVariant}
        audioLevel={audioLevel}
        isActive={isConnected}
      />
      <div className="absolute inset-0 flex items-center justify-center">
        <AIVoiceInput
          onStart={handleStart}
          onStop={handleStop}
          isConnected={isConnected}
        />
      </div>
      <audio ref={audioRef} autoPlay hidden />
    </div>
  );
}

export default { BackgroundCircleProvider }

const COLOR_VARIANTS = {
  primary: {
    border: [
      "border-emerald-500/60",
      "border-cyan-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-emerald-500/30",
  },
  secondary: {
    border: [
      "border-violet-500/60",
      "border-fuchsia-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-violet-500/30",
  },
  senary: {
    border: [
      "border-blue-500/60",
      "border-sky-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-blue-500/30",
  }, // blue
  octonary: {
    border: [
      "border-red-500/60",
      "border-rose-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-red-500/30",
  },
} as const;
101
demo/nextjs_voice_chat/frontend/fastrtc-demo/components/theme-provider.tsx
Normal file
@@ -0,0 +1,101 @@
"use client";

import { createContext, useContext, useEffect, useState } from "react";

type Theme = "light" | "dark" | "system";

type ThemeProviderProps = {
  children: React.ReactNode;
  defaultTheme?: Theme;
  storageKey?: string;
  attribute?: string;
  enableSystem?: boolean;
  disableTransitionOnChange?: boolean;
};

type ThemeProviderState = {
  theme: Theme;
  setTheme: (theme: Theme) => void;
};

const initialState: ThemeProviderState = {
  theme: "system",
  setTheme: () => null,
};

const ThemeProviderContext = createContext<ThemeProviderState>(initialState);

export function ThemeProvider({
  children,
  defaultTheme = "system",
  storageKey = "theme",
  attribute = "class",
  enableSystem = true,
  disableTransitionOnChange = false,
  ...props
}: ThemeProviderProps) {
  const [theme, setTheme] = useState<Theme>(defaultTheme);

  useEffect(() => {
    const savedTheme = localStorage.getItem(storageKey) as Theme | null;

    if (savedTheme) {
      setTheme(savedTheme);
    } else if (defaultTheme === "system" && enableSystem) {
      const systemTheme = window.matchMedia("(prefers-color-scheme: dark)").matches
        ? "dark"
        : "light";
      setTheme(systemTheme);
    }
  }, [defaultTheme, storageKey, enableSystem]);

  useEffect(() => {
    const root = window.document.documentElement;

    if (disableTransitionOnChange) {
      root.classList.add("no-transitions");

      // Force a reflow
      window.getComputedStyle(root).getPropertyValue("opacity");

      setTimeout(() => {
        root.classList.remove("no-transitions");
      }, 0);
    }

    root.classList.remove("light", "dark");

    if (theme === "system" && enableSystem) {
      const systemTheme = window.matchMedia("(prefers-color-scheme: dark)").matches
        ? "dark"
        : "light";
      root.classList.add(systemTheme);
    } else {
      root.classList.add(theme);
    }

    localStorage.setItem(storageKey, theme);
  }, [theme, storageKey, enableSystem, disableTransitionOnChange]);

  const value = {
    theme,
    setTheme: (theme: Theme) => {
      setTheme(theme);
    },
  };

  return (
    <ThemeProviderContext.Provider {...props} value={value}>
      {children}
    </ThemeProviderContext.Provider>
  );
}

export const useTheme = () => {
  const context = useContext(ThemeProviderContext);

  if (context === undefined)
    throw new Error("useTheme must be used within a ThemeProvider");

  return context;
};
114
demo/nextjs_voice_chat/frontend/fastrtc-demo/components/ui/ai-voice-input.tsx
Normal file
@@ -0,0 +1,114 @@
"use client";

import { Mic, Square } from "lucide-react";
import { useState, useEffect } from "react";
import { cn } from "@/lib/utils";

interface AIVoiceInputProps {
  onStart?: () => void;
  onStop?: (duration: number) => void;
  isConnected?: boolean;
  className?: string;
}

export function AIVoiceInput({
  onStart,
  onStop,
  isConnected = false,
  className
}: AIVoiceInputProps) {
  const [active, setActive] = useState(false);
  const [time, setTime] = useState(0);
  const [isClient, setIsClient] = useState(false);
  const [status, setStatus] = useState<'disconnected' | 'connecting' | 'connected'>('disconnected');

  useEffect(() => {
    setIsClient(true);
  }, []);

  useEffect(() => {
    let intervalId: NodeJS.Timeout;

    if (active) {
      intervalId = setInterval(() => {
        setTime((t) => t + 1);
      }, 1000);
    } else {
      setTime(0);
    }

    return () => clearInterval(intervalId);
  }, [active]);

  useEffect(() => {
    if (isConnected) {
      setStatus('connected');
      setActive(true);
    } else {
      setStatus('disconnected');
      setActive(false);
    }
  }, [isConnected]);

  const formatTime = (seconds: number) => {
    const mins = Math.floor(seconds / 60);
    const secs = seconds % 60;
    return `${mins.toString().padStart(2, "0")}:${secs.toString().padStart(2, "0")}`;
  };

  const handleStart = () => {
    setStatus('connecting');
    onStart?.();
  };

  const handleStop = () => {
    onStop?.(time);
    setStatus('disconnected');
  };

  return (
    <div className={cn("w-full py-4", className)}>
      <div className="relative max-w-xl w-full mx-auto flex items-center flex-col gap-4">
        <div className={cn(
          "px-2 py-1 rounded-md text-xs font-medium bg-black/10 dark:bg-white/10 text-gray-700 dark:text-white"
        )}>
          {status === 'connected' ? 'Connected' : status === 'connecting' ? 'Connecting...' : 'Disconnected'}
        </div>

        <button
          className={cn(
            "group w-16 h-16 rounded-xl flex items-center justify-center transition-colors",
            active
              ? "bg-red-500/20 hover:bg-red-500/30"
              : "bg-black/10 hover:bg-black/20 dark:bg-white/10 dark:hover:bg-white/20"
          )}
          type="button"
          onClick={active ? handleStop : handleStart}
          disabled={status === 'connecting'}
        >
          {status === 'connecting' ? (
            <div
              className="w-6 h-6 rounded-sm animate-spin bg-black dark:bg-white cursor-pointer pointer-events-auto"
              style={{ animationDuration: "3s" }}
            />
          ) : active ? (
            <Square className="w-6 h-6 text-red-500" />
          ) : (
            <Mic className="w-6 h-6 text-black/70 dark:text-white/70" />
          )}
        </button>

        <span
          className={cn(
            "font-mono text-sm transition-opacity duration-300",
            active
              ? "text-black/70 dark:text-white/70"
              : "text-black/30 dark:text-white/30"
          )}
        >
          {formatTime(time)}
        </span>
      </div>
    </div>
  );
}
309
demo/nextjs_voice_chat/frontend/fastrtc-demo/components/ui/background-circles.tsx
Normal file
@@ -0,0 +1,309 @@
"use client";

import { motion } from "framer-motion";
import clsx from "clsx";
import { useState, useEffect } from "react";

interface BackgroundCirclesProps {
  title?: string;
  description?: string;
  className?: string;
  variant?: keyof typeof COLOR_VARIANTS;
  audioLevel?: number;
  isActive?: boolean;
}

const COLOR_VARIANTS = {
  primary: {
    border: [
      "border-emerald-500/60",
      "border-cyan-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-emerald-500/30",
  },
  secondary: {
    border: [
      "border-violet-500/60",
      "border-fuchsia-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-violet-500/30",
  },
  tertiary: {
    border: [
      "border-orange-500/60",
      "border-yellow-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-orange-500/30",
  },
  quaternary: {
    border: [
      "border-purple-500/60",
      "border-pink-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-purple-500/30",
  },
  quinary: {
    border: [
      "border-red-500/60",
      "border-rose-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-red-500/30",
  }, // red
  senary: {
    border: [
      "border-blue-500/60",
      "border-sky-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-blue-500/30",
  }, // blue
  septenary: {
    border: [
      "border-gray-500/60",
      "border-gray-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-gray-500/30",
  },
  octonary: {
    border: [
      "border-red-500/60",
      "border-rose-400/50",
      "border-slate-600/30",
    ],
    gradient: "from-red-500/30",
  },
} as const;

const AnimatedGrid = () => (
  <motion.div
    className="absolute inset-0 [mask-image:radial-gradient(ellipse_at_center,transparent_30%,black)]"
    animate={{
      backgroundPosition: ["0% 0%", "100% 100%"],
    }}
    transition={{
      duration: 40,
      repeat: Number.POSITIVE_INFINITY,
      ease: "linear",
    }}
  >
    <div className="h-full w-full [background-image:repeating-linear-gradient(100deg,#64748B_0%,#64748B_1px,transparent_1px,transparent_4%)] opacity-20" />
  </motion.div>
);

export function BackgroundCircles({
  title = "",
  description = "",
  className,
  variant = "octonary",
  audioLevel = 0,
  isActive = false,
}: BackgroundCirclesProps) {
  const variantStyles = COLOR_VARIANTS[variant];
  const [animationParams, setAnimationParams] = useState({
    scale: 1,
    duration: 5,
    intensity: 0
  });
  const [isLoaded, setIsLoaded] = useState(false);

  // Initial page load animation
  useEffect(() => {
    // Small delay to ensure the black screen is visible first
    const timer = setTimeout(() => {
      setIsLoaded(true);
    }, 300);

    return () => clearTimeout(timer);
  }, []);

  // Update animation based on audio level
  useEffect(() => {
    if (isActive && audioLevel > 0) {
      // Simple enhancement of audio level for more dramatic effect
      const enhancedLevel = Math.min(1, audioLevel * 1.5);

      setAnimationParams({
        scale: 1 + enhancedLevel * 0.3,
        duration: Math.max(2, 5 - enhancedLevel * 3),
        intensity: enhancedLevel
      });
    } else if (animationParams.intensity > 0) {
      // Only reset if we need to (prevents unnecessary updates)
      const timer = setTimeout(() => {
        setAnimationParams({
          scale: 1,
          duration: 5,
          intensity: 0
        });
      }, 300);

      return () => clearTimeout(timer);
    }
  }, [audioLevel, isActive, animationParams.intensity]);

  return (
    <>
      {/* Initial black overlay that fades out */}
      <motion.div
        className="fixed inset-0 bg-black z-50"
        initial={{ opacity: 1 }}
        animate={{ opacity: isLoaded ? 0 : 1 }}
        transition={{ duration: 1.2, ease: "easeInOut" }}
        style={{ pointerEvents: isLoaded ? "none" : "auto" }}
      />

      <div
        className={clsx(
          "relative flex h-screen w-full items-center justify-center overflow-hidden",
          "bg-white dark:bg-black/5",
          className
        )}
      >
        <AnimatedGrid />
        <motion.div
          className="absolute h-[480px] w-[480px]"
          initial={{ opacity: 0, scale: 0.9 }}
          animate={{
            opacity: isLoaded ? 1 : 0,
            scale: isLoaded ? 1 : 0.9
          }}
          transition={{
            duration: 1.5,
            delay: 0.3,
            ease: "easeOut"
          }}
        >
          {[0, 1, 2].map((i) => (
            <motion.div
              key={i}
              className={clsx(
                "absolute inset-0 rounded-full",
                "border-2 bg-gradient-to-br to-transparent",
                variantStyles.border[i],
                variantStyles.gradient
              )}
              animate={{
                rotate: 360,
                scale: [
                  1 + (i * 0.05),
                  (1 + (i * 0.05)) * (1 + (isActive ? animationParams.intensity * 0.2 : 0.02)),
                  1 + (i * 0.05)
                ],
                opacity: [
                  0.7 + (i * 0.1),
                  0.8 + (i * 0.1) + (isActive ? animationParams.intensity * 0.2 : 0),
                  0.7 + (i * 0.1)
                ]
              }}
              transition={{
                duration: isActive ? animationParams.duration : 8 + (i * 2),
                repeat: Number.POSITIVE_INFINITY,
                ease: "easeInOut",
              }}
            >
              <div
                className={clsx(
                  "absolute inset-0 rounded-full mix-blend-screen",
                  `bg-[radial-gradient(ellipse_at_center,${variantStyles.gradient.replace(
                    "from-",
                    ""
                  )}/10%,transparent_70%)]`
                )}
              />
            </motion.div>
          ))}
        </motion.div>

        <div className="absolute inset-0 [mask-image:radial-gradient(90%_60%_at_50%_50%,#000_40%,transparent)]">
          <motion.div
            className="absolute inset-0 bg-[radial-gradient(ellipse_at_center,#0F766E/30%,transparent_70%)] blur-[120px]"
            initial={{ opacity: 0 }}
            animate={{
              opacity: isLoaded ? 0.7 : 0,
              scale: [1, 1 + (isActive ? animationParams.intensity * 0.3 : 0.02), 1],
            }}
            transition={{
              opacity: { duration: 1.8, delay: 0.5 },
              scale: {
                duration: isActive ? 2 : 12,
                repeat: Number.POSITIVE_INFINITY,
                ease: "easeInOut",
              }
            }}
          />
          <motion.div
            className="absolute inset-0 bg-[radial-gradient(ellipse_at_center,#2DD4BF/15%,transparent)] blur-[80px]"
            initial={{ opacity: 0 }}
            animate={{
              opacity: isLoaded ? 1 : 0,
              scale: [1, 1 + (isActive ? animationParams.intensity * 0.4 : 0.03), 1]
            }}
            transition={{
              opacity: { duration: 2, delay: 0.7 },
              scale: {
                duration: isActive ? 1.5 : 15,
                repeat: Number.POSITIVE_INFINITY,
                ease: "easeInOut",
              }
            }}
          />

          {/* Additional glow that appears only during high audio levels */}
          {isActive && animationParams.intensity > 0.4 && (
            <motion.div
              className={`absolute inset-0 bg-[radial-gradient(ellipse_at_center,${variantStyles.gradient.replace("from-", "")}/20%,transparent_70%)] blur-[60px]`}
              initial={{ opacity: 0, scale: 0.8 }}
              animate={{
                opacity: [0, animationParams.intensity * 0.6, 0],
                scale: [0.8, 1.1, 0.8],
              }}
              transition={{
                duration: 0.8,
                repeat: Number.POSITIVE_INFINITY,
                ease: "easeInOut",
              }}
            />
          )}
        </div>
      </div>
    </>
  );
}

export function DemoCircles() {
  const [currentVariant, setCurrentVariant] =
    useState<keyof typeof COLOR_VARIANTS>("octonary");

  const variants = Object.keys(
    COLOR_VARIANTS
  ) as (keyof typeof COLOR_VARIANTS)[];

  function getNextVariant() {
    const currentIndex = variants.indexOf(currentVariant);
    const nextVariant = variants[(currentIndex + 1) % variants.length];
    return nextVariant;
  }

  return (
    <>
      <BackgroundCircles variant={currentVariant} />
      <div className="absolute top-12 right-12">
        <button
          type="button"
          className="bg-slate-950 dark:bg-white text-white dark:text-slate-950 px-4 py-1 rounded-md z-10 text-sm font-medium"
          onClick={() => {
            setCurrentVariant(getNextVariant());
          }}
        >
          Change Variant
        </button>
      </div>
    </>
  );
}
18
demo/nextjs_voice_chat/frontend/fastrtc-demo/components/ui/reset-chat.tsx
Normal file
@@ -0,0 +1,18 @@
"use client"

import { Trash } from "lucide-react"

export function ResetChat() {
  return (
    <button
      className="w-10 h-10 rounded-md flex items-center justify-center transition-colors relative overflow-hidden bg-black/10 hover:bg-black/20 dark:bg-white/10 dark:hover:bg-white/20"
      aria-label="Reset chat"
      onClick={() => fetch("http://localhost:8000/reset")}
    >
      <div className="relative z-10">
        <Trash className="h-5 w-5 text-black/70 dark:text-white/70" />
      </div>
    </button>
  )
}
61
demo/nextjs_voice_chat/frontend/fastrtc-demo/components/ui/theme-toggle.tsx
Normal file
@@ -0,0 +1,61 @@
"use client";

import { useTheme } from "@/components/theme-provider";
import { cn } from "@/lib/utils";
import { Moon, Sun } from "lucide-react";
import { useRef } from "react";

interface ThemeToggleProps {
  className?: string;
}

export function ThemeToggle({ className }: ThemeToggleProps) {
  const { theme } = useTheme();
  const buttonRef = useRef<HTMLButtonElement>(null);

  const toggleTheme = () => {
    // Instead of directly changing the theme, dispatch a custom event
    const newTheme = theme === "light" ? "dark" : "light";

    // Dispatch custom event with the new theme
    window.dispatchEvent(
      new CustomEvent('themeToggleRequest', {
        detail: { theme: newTheme }
      })
    );
  };

  return (
    <button
      ref={buttonRef}
      onClick={toggleTheme}
      className={cn(
        "w-10 h-10 rounded-md flex items-center justify-center transition-colors relative overflow-hidden",
        "bg-black/10 hover:bg-black/20 dark:bg-white/10 dark:hover:bg-white/20",
        className
      )}
      aria-label="Toggle theme"
    >
      <div className="relative z-10">
        {theme === "light" ? (
          <Moon className="h-5 w-5 text-black/70" />
        ) : (
          <Sun className="h-5 w-5 text-white/70" />
        )}
      </div>

      {/* Small inner animation for the button itself */}
      <div
        className={cn(
          "absolute inset-0 transition-transform duration-500",
          theme === "light"
            ? "bg-gradient-to-br from-blue-500/20 to-purple-500/20 translate-y-full"
            : "bg-gradient-to-br from-amber-500/20 to-orange-500/20 -translate-y-full"
        )}
        style={{
          transitionTimingFunction: "cubic-bezier(0.22, 1, 0.36, 1)"
        }}
      />
    </button>
  );
}
120
demo/nextjs_voice_chat/frontend/fastrtc-demo/components/ui/theme-transition.tsx
Normal file
@@ -0,0 +1,120 @@
"use client";

import { useTheme } from "@/components/theme-provider";
import { useEffect, useState } from "react";
import { motion, AnimatePresence } from "framer-motion";

interface ThemeTransitionProps {
  className?: string;
}

export function ThemeTransition({ className }: ThemeTransitionProps) {
  const { theme, setTheme } = useTheme();
  const [position, setPosition] = useState({ x: 0, y: 0 });
  const [isAnimating, setIsAnimating] = useState(false);
  const [pendingTheme, setPendingTheme] = useState<string | null>(null);
  const [visualTheme, setVisualTheme] = useState<string | null>(theme);

  // Track mouse/touch position for click events
  useEffect(() => {
    const handleMouseMove = (e: MouseEvent) => {
      setPosition({ x: e.clientX, y: e.clientY });
    };

    const handleTouchMove = (e: TouchEvent) => {
      if (e.touches[0]) {
        setPosition({ x: e.touches[0].clientX, y: e.touches[0].clientY });
      }
    };

    window.addEventListener("mousemove", handleMouseMove);
    window.addEventListener("touchmove", handleTouchMove);

    return () => {
      window.removeEventListener("mousemove", handleMouseMove);
      window.removeEventListener("touchmove", handleTouchMove);
    };
  }, []);

  // Listen for theme toggle requests
  useEffect(() => {
    // Custom event for theme toggle requests
    const handleThemeToggle = (e: CustomEvent) => {
      if (isAnimating) return; // Prevent multiple animations

      const newTheme = e.detail.theme;
      if (newTheme === theme) return;

      // Store the pending theme but don't apply it yet
      setPendingTheme(newTheme);
      setIsAnimating(true);

      // The actual theme will be applied mid-animation
    };

    window.addEventListener('themeToggleRequest' as any, handleThemeToggle as EventListener);

    return () => {
      window.removeEventListener('themeToggleRequest' as any, handleThemeToggle as EventListener);
    };
  }, [theme, isAnimating]);

  // Apply the theme change mid-animation
  useEffect(() => {
    if (isAnimating && pendingTheme) {
      // Set visual theme immediately for the animation
      setVisualTheme(pendingTheme);

      // Apply the actual theme change after a delay (mid-animation)
      const timer = setTimeout(() => {
        setTheme(pendingTheme as any);
      }, 400); // Half of the animation duration

      // End the animation after it completes
      const endTimer = setTimeout(() => {
        setIsAnimating(false);
        setPendingTheme(null);
      }, 1000); // Match with animation duration

      return () => {
        clearTimeout(timer);
        clearTimeout(endTimer);
      };
    }
  }, [isAnimating, pendingTheme, setTheme]);

  return (
    <AnimatePresence>
      {isAnimating && (
        <motion.div
          className="fixed inset-0 z-[9999] pointer-events-none"
          initial={{ opacity: 0 }}
          animate={{ opacity: 1 }}
          exit={{ opacity: 0 }}
          transition={{ duration: 0.3 }}
        >
          <motion.div
            className={`absolute rounded-full ${visualTheme === 'dark' ? 'bg-slate-950' : 'bg-white'}`}
            initial={{
              width: 0,
              height: 0,
              x: position.x,
              y: position.y,
              borderRadius: '100%'
            }}
            animate={{
              width: Math.max(window.innerWidth * 3, window.innerHeight * 3),
              height: Math.max(window.innerWidth * 3, window.innerHeight * 3),
              x: position.x - Math.max(window.innerWidth * 3, window.innerHeight * 3) / 2,
              y: position.y - Math.max(window.innerWidth * 3, window.innerHeight * 3) / 2,
            }}
            transition={{
              duration: 0.8,
              ease: [0.22, 1, 0.36, 1]
            }}
          />
        </motion.div>
      )}
    </AnimatePresence>
  );
}
@@ -0,0 +1,28 @@
import { dirname } from "path";
import { fileURLToPath } from "url";
import { FlatCompat } from "@eslint/eslintrc";

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);

const compat = new FlatCompat({
  baseDirectory: __dirname,
});

const eslintConfig = [
  ...compat.extends("next/core-web-vitals", "next/typescript"),
  {
    rules: {
      "no-unused-vars": "off",
      "no-explicit-any": "off",
      "no-console": "off",
      "no-debugger": "off",
      "eqeqeq": "off",
      "curly": "off",
      "quotes": "off",
      "semi": "off",
    },
  },
];

export default eslintConfig;
6
demo/nextjs_voice_chat/frontend/fastrtc-demo/lib/utils.ts
Normal file
@@ -0,0 +1,6 @@
import { clsx, type ClassValue } from "clsx"
import { twMerge } from "tailwind-merge"

export function cn(...inputs: ClassValue[]) {
  return twMerge(clsx(inputs))
}
@@ -0,0 +1,189 @@
|
||||
interface WebRTCClientOptions {
|
||||
onConnected?: () => void;
|
||||
onDisconnected?: () => void;
|
||||
onMessage?: (message: any) => void;
|
||||
onAudioStream?: (stream: MediaStream) => void;
|
||||
onAudioLevel?: (level: number) => void;
|
||||
}
|
||||
|
||||
export class WebRTCClient {
|
||||
private peerConnection: RTCPeerConnection | null = null;
|
||||
private mediaStream: MediaStream | null = null;
|
||||
private dataChannel: RTCDataChannel | null = null;
|
||||
private options: WebRTCClientOptions;
|
||||
private audioContext: AudioContext | null = null;
|
||||
private analyser: AnalyserNode | null = null;
|
||||
private dataArray: Uint8Array | null = null;
|
||||
private animationFrameId: number | null = null;
|
||||
|
||||
constructor(options: WebRTCClientOptions = {}) {
|
||||
this.options = options;
|
||||
}
|
||||
|
||||
async connect() {
|
||||
try {
|
||||
this.peerConnection = new RTCPeerConnection();
|
||||
|
||||
// Get user media
|
||||
try {
|
||||
this.mediaStream = await navigator.mediaDevices.getUserMedia({
|
||||
audio: true
|
||||
});
|
||||
} catch (mediaError: any) {
|
||||
console.error('Media error:', mediaError);
|
||||
if (mediaError.name === 'NotAllowedError') {
|
||||
throw new Error('Microphone access denied. Please allow microphone access and try again.');
|
||||
} else if (mediaError.name === 'NotFoundError') {
|
||||
throw new Error('No microphone detected. Please connect a microphone and try again.');
|
||||
} else {
|
||||
throw mediaError;
|
||||
}
|
||||
}
|
||||
|
||||
this.setupAudioAnalysis();
|
||||
|
||||
this.mediaStream.getTracks().forEach(track => {
|
||||
if (this.peerConnection) {
|
||||
this.peerConnection.addTrack(track, this.mediaStream!);
|
||||
}
|
||||
});
|
||||
|
||||
this.peerConnection.addEventListener('track', (event) => {
|
||||
if (this.options.onAudioStream) {
|
||||
this.options.onAudioStream(event.streams[0]);
|
||||
}
|
||||
});
|
||||
|
||||
this.dataChannel = this.peerConnection.createDataChannel('text');
|
||||
|
||||
this.dataChannel.addEventListener('message', (event) => {
|
||||
try {
|
||||
const message = JSON.parse(event.data);
|
||||
console.log('Received message:', message);
|
||||
|
||||
if (this.options.onMessage) {
|
||||
this.options.onMessage(message);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Error parsing message:', error);
|
||||
}
|
||||
});
|
||||
|
||||
// Create and send offer
|
||||
const offer = await this.peerConnection.createOffer();
|
||||
await this.peerConnection.setLocalDescription(offer);
|
||||
|
||||
// Use same-origin request to avoid CORS preflight
|
||||
const response = await fetch('http://localhost:8000/webrtc/offer', {
|
||||
method: 'POST',
|
||||
headers: {
|
||||
'Content-Type': 'application/json',
|
||||
'Accept': 'application/json'
|
||||
},
|
||||
mode: 'cors', // Explicitly set CORS mode
|
||||
credentials: 'same-origin',
|
||||
body: JSON.stringify({
|
||||
sdp: offer.sdp,
|
||||
type: offer.type,
|
||||
webrtc_id: Math.random().toString(36).substring(7)
|
||||
})
|
||||
});
|
||||
|
||||
const serverResponse = await response.json();
|
||||
await this.peerConnection.setRemoteDescription(serverResponse);
|
||||
|
||||
if (this.options.onConnected) {
|
||||
this.options.onConnected();
|
||||
}
|
||||
    } catch (error) {
      console.error('Error connecting:', error);
      this.disconnect();
      throw error;
    }
  }

  private setupAudioAnalysis() {
    if (!this.mediaStream) return;

    try {
      this.audioContext = new AudioContext();
      this.analyser = this.audioContext.createAnalyser();
      this.analyser.fftSize = 256;

      const source = this.audioContext.createMediaStreamSource(this.mediaStream);
      source.connect(this.analyser);

      const bufferLength = this.analyser.frequencyBinCount;
      this.dataArray = new Uint8Array(bufferLength);

      this.startAnalysis();
    } catch (error) {
      console.error('Error setting up audio analysis:', error);
    }
  }

  private startAnalysis() {
    if (!this.analyser || !this.dataArray || !this.options.onAudioLevel) return;

    // Throttle callbacks so consumers aren't flooded with updates
    let lastUpdateTime = 0;
    const throttleInterval = 100; // Only update every 100ms

    const analyze = () => {
      this.analyser!.getByteFrequencyData(this.dataArray!);

      const currentTime = Date.now();
      // Only update if enough time has passed since the last update
      if (currentTime - lastUpdateTime > throttleInterval) {
        // Calculate average volume level (0-1)
        let sum = 0;
        for (let i = 0; i < this.dataArray!.length; i++) {
          sum += this.dataArray![i];
        }
        const average = sum / this.dataArray!.length / 255;

        this.options.onAudioLevel!(average);
        lastUpdateTime = currentTime;
      }

      this.animationFrameId = requestAnimationFrame(analyze);
    };

    this.animationFrameId = requestAnimationFrame(analyze);
  }

  private stopAnalysis() {
    if (this.animationFrameId !== null) {
      cancelAnimationFrame(this.animationFrameId);
      this.animationFrameId = null;
    }

    if (this.audioContext) {
      this.audioContext.close();
      this.audioContext = null;
    }

    this.analyser = null;
    this.dataArray = null;
  }

  disconnect() {
    this.stopAnalysis();

    if (this.mediaStream) {
      this.mediaStream.getTracks().forEach(track => track.stop());
      this.mediaStream = null;
    }

    if (this.peerConnection) {
      this.peerConnection.close();
      this.peerConnection = null;
    }

    this.dataChannel = null;

    if (this.options.onDisconnected) {
      this.options.onDisconnected();
    }
  }
}
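For orientation, here is a minimal sketch of how this client could be wired up. The class name (`WebRTCClient`) and constructor shape are assumptions, since the class declaration and options interface fall outside this hunk; the callback names match the ones the methods above invoke, and the `mic-meter` element id is hypothetical.

```ts
// Hypothetical wiring of the client defined above; class name and constructor
// shape are assumptions, callback names mirror the code in this diff.
const client = new WebRTCClient({
  onConnected: () => console.log('connected'),
  onDisconnected: () => console.log('disconnected'),
  onMessage: (message: unknown) => console.log('server message:', message),
  onAudioStream: (stream: MediaStream) => {
    // Play the remote audio track the server sends back
    const audio = new Audio();
    audio.srcObject = stream;
    audio.play();
  },
  onAudioLevel: (level: number) => {
    // level is a 0-1 average, throttled to ~10 updates/sec by startAnalysis()
    const meter = document.getElementById('mic-meter'); // hypothetical element
    if (meter) meter.style.width = `${Math.round(level * 100)}%`;
  },
});

await client.connect(); // throws if mic access or signaling fails
```

Calling `client.disconnect()` then stops the analyser loop, closes the AudioContext, stops the local tracks, and tears down the peer connection, exactly as implemented above.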
@@ -0,0 +1,7 @@
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  /* config options here */
};

export default nextConfig;
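Worth noting: because the client above posts the offer to http://localhost:8000 from the Next.js origin, signaling is a cross-origin request. A rewrite in this config could instead proxy the endpoint through the same origin and avoid CORS altogether. This is a sketch of an alternative, not what the demo ships; it assumes the FastRTC backend listens on localhost:8000 (the port the client above uses).

```ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  async rewrites() {
    // Proxy /webrtc/* to the backend so the browser sees a same-origin URL;
    // the client could then fetch('/webrtc/offer', ...) with no CORS setup.
    return [
      {
        source: "/webrtc/:path*",
        destination: "http://localhost:8000/webrtc/:path*", // assumed backend port
      },
    ];
  },
};

export default nextConfig;
```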
33
demo/nextjs_voice_chat/frontend/fastrtc-demo/package.json
Normal file
@@ -0,0 +1,33 @@
{
  "name": "fastrtc-demo",
  "version": "0.1.0",
  "private": true,
  "scripts": {
    "dev": "next dev --turbopack",
    "build": "next build --no-lint",
    "start": "next start",
    "lint": "next lint"
  },
  "dependencies": {
    "class-variance-authority": "^0.7.1",
    "clsx": "^2.1.1",
    "framer-motion": "^12.4.10",
    "lucide-react": "^0.477.0",
    "next": "15.2.2-canary.1",
    "react": "^19.0.0",
    "react-dom": "^19.0.0",
    "tailwind-merge": "^3.0.2",
    "tailwindcss-animate": "^1.0.7"
  },
  "devDependencies": {
    "@eslint/eslintrc": "^3",
    "@tailwindcss/postcss": "^4",
    "@types/node": "^20",
    "@types/react": "^19",
    "@types/react-dom": "^19",
    "eslint": "^9",
    "eslint-config-next": "15.2.2-canary.1",
    "tailwindcss": "^4",
    "typescript": "^5"
  }
}
@@ -0,0 +1,5 @@
const config = {
  plugins: ["@tailwindcss/postcss"],
};

export default config;
@@ -0,0 +1 @@
<svg fill="none" viewBox="0 0 16 16" xmlns="http://www.w3.org/2000/svg"><path d="M14.5 13.5V5.41a1 1 0 0 0-.3-.7L9.8.29A1 1 0 0 0 9.08 0H1.5v13.5A2.5 2.5 0 0 0 4 16h8a2.5 2.5 0 0 0 2.5-2.5m-1.5 0v-7H8v-5H3v12a1 1 0 0 0 1 1h8a1 1 0 0 0 1-1M9.5 5V2.12L12.38 5zM5.13 5h-.62v1.25h2.12V5zm-.62 3h7.12v1.25H4.5zm.62 3h-.62v1.25h7.12V11z" clip-rule="evenodd" fill="#666" fill-rule="evenodd"/></svg>
After Width: | Height: | Size: 391 B
@@ -0,0 +1 @@
<svg fill="none" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16"><g clip-path="url(#a)"><path fill-rule="evenodd" clip-rule="evenodd" d="M10.27 14.1a6.5 6.5 0 0 0 3.67-3.45q-1.24.21-2.7.34-.31 1.83-.97 3.1M8 16A8 8 0 1 0 8 0a8 8 0 0 0 0 16m.48-1.52a7 7 0 0 1-.96 0H7.5a4 4 0 0 1-.84-1.32q-.38-.89-.63-2.08a40 40 0 0 0 3.92 0q-.25 1.2-.63 2.08a4 4 0 0 1-.84 1.31zm2.94-4.76q1.66-.15 2.95-.43a7 7 0 0 0 0-2.58q-1.3-.27-2.95-.43a18 18 0 0 1 0 3.44m-1.27-3.54a17 17 0 0 1 0 3.64 39 39 0 0 1-4.3 0 17 17 0 0 1 0-3.64 39 39 0 0 1 4.3 0m1.1-1.17q1.45.13 2.69.34a6.5 6.5 0 0 0-3.67-3.44q.65 1.26.98 3.1M8.48 1.5l.01.02q.41.37.84 1.31.38.89.63 2.08a40 40 0 0 0-3.92 0q.25-1.2.63-2.08a4 4 0 0 1 .85-1.32 7 7 0 0 1 .96 0m-2.75.4a6.5 6.5 0 0 0-3.67 3.44 29 29 0 0 1 2.7-.34q.31-1.83.97-3.1M4.58 6.28q-1.66.16-2.95.43a7 7 0 0 0 0 2.58q1.3.27 2.95.43a18 18 0 0 1 0-3.44m.17 4.71q-1.45-.12-2.69-.34a6.5 6.5 0 0 0 3.67 3.44q-.65-1.27-.98-3.1" fill="#666"/></g><defs><clipPath id="a"><path fill="#fff" d="M0 0h16v16H0z"/></clipPath></defs></svg>
After Width: | Height: | Size: 1.0 KiB
@@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 394 80"><path fill="#000" d="M262 0h68.5v12.7h-27.2v66.6h-13.6V12.7H262V0ZM149 0v12.7H94v20.4h44.3v12.6H94v21h55v12.6H80.5V0h68.7zm34.3 0h-17.8l63.8 79.4h17.9l-32-39.7 32-39.6h-17.9l-23 28.6-23-28.6zm18.3 56.7-9-11-27.1 33.7h17.8l18.3-22.7z"/><path fill="#000" d="M81 79.3 17 0H0v79.3h13.6V17l50.2 62.3H81Zm252.6-.4c-1 0-1.8-.4-2.5-1s-1.1-1.6-1.1-2.6.3-1.8 1-2.5 1.6-1 2.6-1 1.8.3 2.5 1a3.4 3.4 0 0 1 .6 4.3 3.7 3.7 0 0 1-3 1.8zm23.2-33.5h6v23.3c0 2.1-.4 4-1.3 5.5a9.1 9.1 0 0 1-3.8 3.5c-1.6.8-3.5 1.3-5.7 1.3-2 0-3.7-.4-5.3-1s-2.8-1.8-3.7-3.2c-.9-1.3-1.4-3-1.4-5h6c.1.8.3 1.6.7 2.2s1 1.2 1.6 1.5c.7.4 1.5.5 2.4.5 1 0 1.8-.2 2.4-.6a4 4 0 0 0 1.6-1.8c.3-.8.5-1.8.5-3V45.5zm30.9 9.1a4.4 4.4 0 0 0-2-3.3 7.5 7.5 0 0 0-4.3-1.1c-1.3 0-2.4.2-3.3.5-.9.4-1.6 1-2 1.6a3.5 3.5 0 0 0-.3 4c.3.5.7.9 1.3 1.2l1.8 1 2 .5 3.2.8c1.3.3 2.5.7 3.7 1.2a13 13 0 0 1 3.2 1.8 8.1 8.1 0 0 1 3 6.5c0 2-.5 3.7-1.5 5.1a10 10 0 0 1-4.4 3.5c-1.8.8-4.1 1.2-6.8 1.2-2.6 0-4.9-.4-6.8-1.2-2-.8-3.4-2-4.5-3.5a10 10 0 0 1-1.7-5.6h6a5 5 0 0 0 3.5 4.6c1 .4 2.2.6 3.4.6 1.3 0 2.5-.2 3.5-.6 1-.4 1.8-1 2.4-1.7a4 4 0 0 0 .8-2.4c0-.9-.2-1.6-.7-2.2a11 11 0 0 0-2.1-1.4l-3.2-1-3.8-1c-2.8-.7-5-1.7-6.6-3.2a7.2 7.2 0 0 1-2.4-5.7 8 8 0 0 1 1.7-5 10 10 0 0 1 4.3-3.5c2-.8 4-1.2 6.4-1.2 2.3 0 4.4.4 6.2 1.2 1.8.8 3.2 2 4.3 3.4 1 1.4 1.5 3 1.5 5h-5.8z"/></svg>
After Width: | Height: | Size: 1.3 KiB
@@ -0,0 +1 @@
<svg fill="none" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1155 1000"><path d="m577.3 0 577.4 1000H0z" fill="#fff"/></svg>
After Width: | Height: | Size: 128 B
Some files were not shown because too many files have changed in this diff.