gs chat integration

This code review adds and refines the gs video chat feature, covering frontend and backend interface definitions, state management, and UI component implementation, and introduces new dependency libraries to support more interactive features.
Link: https://code.alibaba-inc.com/xr-paas/gradio_webrtc/codereview/21273476
* Update the Python part

* Merge the videochat frontend part

* Merge branch 'feature/update-fastrtc-0.0.19' of http://gitlab.alibaba-inc.com/xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Replace audiowave

* Fix import paths

* Merge the websocket mode logic

* feat: gaussian avatar chat

* Add input parameters for other renderers

* feat: ws connection and usage

* Merge branch 'feature/update-fastrtc-0.0.19' of http://gitlab.alibaba-inc.com/xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Shift left when the right edge exceeds the container width

* Pass configuration through

* Merge branch 'feature/update-fastrtc-0.0.19' of gitlab.alibaba-inc.com:xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Fix Gaussian package exception

* Sync webrtc_utils

* Update webrtc_utils

* Support on_chat_datachannel for compatibility

* Fix the device name list not displaying correctly

* Pass webrtc_id through in copy

* Merge branch 'feature/update-fastrtc-0.0.19' of gitlab.alibaba-inc.com:xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Ensure the websocket connection starts only after WebRTC completes

* feat: integrate audio expression data

* Upload dist

* Hide the canvas

* feat: expose Gaussian file download progress

* Merge branch 'feature/update-fastrtc-0.0.19' of http://gitlab.alibaba-inc.com/xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Fix failure to acquire permissions

* Merge branch 'feature/update-fastrtc-0.0.19' of gitlab.alibaba-inc.com:xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Acquire permissions before enumerating devices

* fix: do not process ws data before gs assets finish downloading

* fix: merge

* Adjust UI copy

* Merge branch 'feature/update-fastrtc-0.0.19' of gitlab.alibaba-inc.com:xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Fix switching back to the default device when restarting a conversation after a device switch

* Merge branch 'feature/update-fastrtc-0.0.19' of http://gitlab.alibaba-inc.com/xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Update localvideo size

* Merge branch 'feature/update-fastrtc-0.0.19' of gitlab.alibaba-inc.com:xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Must not fall back to "default"

* Fix audio permission issue

* Update the build output

* fix: tie the chat button state to gs assets; remove dead code

* fix: merge

* feat: import the gs rendering module from an npm package

* fix

* Add chat history

* Merge branch 'feature/update-fastrtc-0.0.19' of http://gitlab.alibaba-inc.com/xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Style changes

* Update packages

* fix: gs avatar initial position and muting

* Scroll the chat history to the bottom

* At least 100% height

* Merge branch 'feature/update-fastrtc-0.0.19' of gitlab.alibaba-inc.com:xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Nudge the text box up slightly

* Clear the chat history when a connection starts

* fix: update gs render npm

* Merge branch 'feature/update-fastrtc-0.0.19' of http://gitlab.alibaba-inc.com/xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Logic safeguards

* Merge branch 'feature/update-fastrtc-0.0.19' of gitlab.alibaba-inc.com:xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* feat: audio init option for starting muted

* Adjust the actionsbar position when subtitles are shown

* Merge branch 'feature/update-fastrtc-0.0.19' of http://gitlab.alibaba-inc.com/xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Style polish

* feat: add readme

* fix: asset images

* fix: docs

* fix: update gs render sdk

* fix: frame position calculation in gs mode

* fix: update readme

* Device detection; handle overly narrow viewports

* Merge branch 'feature/update-fastrtc-0.0.19' of gitlab.alibaba-inc.com:xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* Separate "has permission" from "has devices" checks

* feat: split the gs download and load hook functions

* Merge branch 'feature/update-fastrtc-0.0.19' of http://gitlab.alibaba-inc.com/xr-paas/gradio_webrtc into feature/update-fastrtc-0.0.19

* fix: update gs render sdk

* Replace

* dist

* Upload files

* del
Commit f476f9cf29 (parent 06885d06c4)
Author: neil.xh
Date: 2025-04-16 19:09:04 +08:00
54 changed files with 4980 additions and 23257 deletions

README.md

@@ -7,311 +7,191 @@
<div style="display: flex; flex-direction: row; justify-content: center">
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/fastrtc">
<a href="https://github.com/freddyaboulton/fastrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
<a href="https://github.com/freddyaboulton/fastrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a><a href="https://fastrtc.org//" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/Docs-ffcf40"></a>
</div>
</div>
<div align="center">
<strong>Chinese | <a href="./README_EN.md">English</a></strong>
</div>
<h3 style='text-align: center'>
The Real-Time Communication Library for Python.
</h3>
This repository is a fork of the original fastrtc repository. It mainly adds `video_chat` as an accepted parameter, enabled by default. This mode matches the behavior of the original `modality="audio-video"` with `mode="send-receive"`, but the UI has been rewritten with richer interaction (more microphone controls, plus display of the local video feed), as shown in the screenshots below.
If you manually set `video_chat` to `False`, usage is identical to the original repository: [https://github.com/freddyaboulton/fastrtc/](https://github.com/freddyaboulton/fastrtc/)
![picture-in-picture](docs/image.png)
![side-by-side](docs/image2.png)
## Configuration
|Parameter|Default|Description|
|---|---|---|
|video_chat|True|Whether to enable the video chat UI|
|avatar_type|''|Type of client-rendered avatar; currently only 'gs' (Gaussian avatar) is supported|
|avatar_ws_route|''|WebSocket route for client-side avatar rendering|
|avatar_assets_path|''|Location of the client-rendered avatar's model assets|
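A minimal sketch of wiring these options up, assuming they are accepted by the `WebRTC` component shown in the example further below; the `avatar_ws_route` and `avatar_assets_path` values here are placeholders, not real routes:
```python
from gradio_webrtc import WebRTC

webrtc = WebRTC(
    label="Video Chat",
    modality="audio-video",
    mode="send-receive",
    video_chat=True,                      # the rewritten video-chat UI (default)
    avatar_type="gs",                     # client-rendered Gaussian avatar
    avatar_ws_route="/avatar/ws",         # placeholder websocket route
    avatar_assets_path="/assets/avatar",  # placeholder model-asset location
)
```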
Turn any Python function into a real-time audio and video stream over WebRTC or WebSockets.
## Installation
```bash
pip install fastrtc
# Build the custom component in this fork from source:
gradio cc install
gradio cc build --no-generate-docs
```
To use built-in pause detection (see [ReplyOnPause](https://fastrtc.org/userguide/audio/#reply-on-pause)) and text-to-speech (see [Text To Speech](https://fastrtc.org/userguide/audio/#text-to-speech)), install the `vad` and `tts` extras:
```bash
pip install "fastrtc[vad, tts]"
pip install dist/fastrtc-0.0.19.dev0-py3-none-any.whl
```
## Key Features
- 🗣️ Automatic Voice Detection and Turn Taking built-in, only worry about the logic for responding to the user.
- 💻 Automatic UI - Use the `.ui.launch()` method to launch the WebRTC-enabled built-in Gradio UI.
- 🔌 Automatic WebRTC Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a WebRTC endpoint for your own frontend!
- ⚡️ Websocket Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a websocket endpoint for your own frontend!
- 📞 Automatic Telephone Support - Use the `fastphone()` method of the stream to launch the application and get a free temporary phone number!
- 🤖 Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app so you can easily extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example on how to serve a custom JS frontend.
## Docs
[https://fastrtc.org](https://fastrtc.org)
## Examples
See the [Cookbook](https://fastrtc.org/cookbook/) for examples of how to use the library.
<table>
<tr>
<td width="50%">
<h3>🗣️👀 Gemini Audio Video Chat</h3>
<p>Stream BOTH your webcam video and audio feeds to Google Gemini. You can also upload images to augment your conversation!</p>
<video width="100%" src="https://github.com/user-attachments/assets/9636dc97-4fee-46bb-abb8-b92e69c08c71" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/gemini-audio-video-chat/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Google Gemini Real Time Voice API</h3>
<p>Talk to Gemini in real time using Google's voice API.</p>
<video width="100%" src="https://github.com/user-attachments/assets/ea6d18cb-8589-422b-9bba-56332d9f61de" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-gemini">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-gemini/blob/main/app.py">Code</a>
</p>
</td>
</tr>
<tr>
<td width="50%">
<h3>🗣️ OpenAI Real Time Voice API</h3>
<p>Talk to ChatGPT in real time using OpenAI's voice API.</p>
<video width="100%" src="https://github.com/user-attachments/assets/178bdadc-f17b-461a-8d26-e915c632ff80" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-openai">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-openai/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🤖 Hello Computer</h3>
<p>Say computer before asking your question!</p>
<video width="100%" src="https://github.com/user-attachments/assets/afb2a3ef-c1ab-4cfb-872d-578f895a10d5" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/hello-computer">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/hello-computer/blob/main/app.py">Code</a>
</p>
</td>
</tr>
<tr>
<td width="50%">
<h3>🤖 Llama Code Editor</h3>
<p>Create and edit HTML pages with just your voice! Powered by SambaNova systems.</p>
<video width="100%" src="https://github.com/user-attachments/assets/98523cf3-dac8-4127-9649-d91a997e3ef5" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Talk to Claude</h3>
<p>Use the Anthropic and Play.Ht APIs to have an audio conversation with Claude.</p>
<video width="100%" src="https://github.com/user-attachments/assets/fb6ef07f-3ccd-444a-997b-9bc9bdc035d3" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/talk-to-claude">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/talk-to-claude/blob/main/app.py">Code</a>
</p>
</td>
</tr>
<tr>
<td width="50%">
<h3>🎵 Whisper Transcription</h3>
<p>Have whisper transcribe your speech in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/87603053-acdc-4c8a-810f-f618c49caafb" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/whisper-realtime">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/whisper-realtime/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>📷 Yolov10 Object Detection</h3>
<p>Run the Yolov10 model on a user webcam stream in real time!</p>
<video width="100%" src="https://github.com/user-attachments/assets/f82feb74-a071-4e81-9110-a01989447ceb" controls></video>
<p>
<a href="https://huggingface.co/spaces/fastrtc/object-detection">Demo</a> |
<a href="https://huggingface.co/spaces/fastrtc/object-detection/blob/main/app.py">Code</a>
</p>
</td>
</tr>
<tr>
<td width="50%">
<h3>🗣️ Kyutai Moshi</h3>
<p>Kyutai's moshi is a novel speech-to-speech model for modeling human conversations.</p>
<video width="100%" src="https://github.com/user-attachments/assets/becc7a13-9e89-4a19-9df2-5fb1467a0137" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/talk-to-moshi/blob/main/app.py">Code</a>
</p>
</td>
<td width="50%">
<h3>🗣️ Hello Llama: Stop Word Detection</h3>
<p>A code editor built with Llama 3.3 70b that is triggered by the phrase "Hello Llama". Build a Siri-like coding assistant in 100 lines of code!</p>
<video width="100%" src="https://github.com/user-attachments/assets/3e10cb15-ff1b-4b17-b141-ff0ad852e613" controls></video>
<p>
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor">Demo</a> |
<a href="https://huggingface.co/spaces/freddyaboulton/hey-llama-code-editor/blob/main/app.py">Code</a>
</p>
</td>
</tr>
</table>
## Usage
This is a shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).
- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
- `.mount(app)`: Mount the stream on a [FastAPI](https://fastapi.tiangolo.com/) app. Perfect for integrating with your already existing production system.
## Quickstart
### Echo Audio
You need to pass a handler to the component, implemented along the lines of the following:
```python
from fastrtc import Stream, ReplyOnPause
import numpy as np
def echo(audio: tuple[int, np.ndarray]):
# The function will be passed the audio until the user pauses
# Implement any iterator that yields audio
# See "LLM Voice Chat" for a more complete example
yield audio
stream = Stream(
handler=ReplyOnPause(echo),
modality="audio",
mode="send-receive",
)
```
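With the stream defined, launch the built-in UI to try it out (see "Running the Stream" below):
```python
stream.ui.launch()
```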
### LLM Voice Chat
```py
from fastrtc import (
ReplyOnPause, AdditionalOutputs, Stream,
audio_to_bytes, aggregate_bytes_to_16bit
)
import gradio as gr
import numpy as np
from groq import Groq
import anthropic
from elevenlabs import ElevenLabs
groq_client = Groq()
claude_client = anthropic.Anthropic()
tts_client = ElevenLabs()
# See "Talk to Claude" in Cookbook for an example of how to keep
# track of the chat history.
def response(
audio: tuple[int, np.ndarray],
):
prompt = groq_client.audio.transcriptions.create(
file=("audio-file.mp3", audio_to_bytes(audio)),
model="whisper-large-v3-turbo",
response_format="verbose_json",
).text
response = claude_client.messages.create(
model="claude-3-5-haiku-20241022",
max_tokens=512,
messages=[{"role": "user", "content": prompt}],
)
response_text = " ".join(
block.text
for block in response.content
if getattr(block, "type", None) == "text"
)
iterator = tts_client.text_to_speech.convert_as_stream(
text=response_text,
voice_id="JBFqnCBsd6RMkjVDRZzb",
model_id="eleven_multilingual_v2",
output_format="pcm_24000"
)
for chunk in aggregate_bytes_to_16bit(iterator):
audio_array = np.frombuffer(chunk, dtype=np.int16).reshape(1, -1)
yield (24000, audio_array)
stream = Stream(
modality="audio",
mode="send-receive",
handler=ReplyOnPause(response),
)
```
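The example above imports `AdditionalOutputs` without using it. As a hedged sketch of the upstream fastrtc pattern for surfacing chat history (the handler body here is a stub; see the "Talk to Claude" cookbook entry for the authoritative version):
```python
import gradio as gr
import numpy as np
from fastrtc import ReplyOnPause, AdditionalOutputs, Stream

def response_with_history(audio: tuple[int, np.ndarray], chatbot: list | None = None):
    # `chatbot` mirrors the gr.Chatbot value declared in additional_inputs below.
    chatbot = chatbot or []
    chatbot.append({"role": "user", "content": "<transcribed prompt>"})
    # ... run STT / LLM / TTS here and yield audio chunks as above ...
    chatbot.append({"role": "assistant", "content": "<response text>"})
    yield AdditionalOutputs(chatbot)  # push the updated history to the UI

stream = Stream(
    modality="audio",
    mode="send-receive",
    handler=ReplyOnPause(response_with_history),
    additional_inputs=[gr.Chatbot(type="messages")],
    additional_outputs=[gr.Chatbot(type="messages")],
    additional_outputs_handler=lambda prev, current: current,
)
```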
### Webcam Stream
```python
from fastrtc import Stream
import numpy as np
def flip_vertically(image):
return np.flip(image, axis=0)
stream = Stream(
    handler=flip_vertically,
    modality="video",
    mode="send-receive",
)

# --- This fork's video_chat mode: a full audio-video handler example ---
import asyncio
import base64
from io import BytesIO

import gradio as gr
from PIL import Image
from gradio_webrtc import (
    AsyncAudioVideoStreamHandler,
    WebRTC,
    VideoEmitType,
    AudioEmitType,
)
def encode_audio(data: np.ndarray) -> dict:
"""Encode Audio data to send to the server"""
return {"mime_type": "audio/pcm", "data": base64.b64encode(data.tobytes()).decode("UTF-8")}
def encode_image(data: np.ndarray) -> dict:
with BytesIO() as output_bytes:
pil_image = Image.fromarray(data)
pil_image.save(output_bytes, "JPEG")
bytes_data = output_bytes.getvalue()
base64_str = str(base64.b64encode(bytes_data), "utf-8")
return {"mime_type": "image/jpeg", "data": base64_str}
class VideoChatHandler(AsyncAudioVideoStreamHandler):
def __init__(
self, expected_layout="mono", output_sample_rate=24000, output_frame_size=480
) -> None:
super().__init__(
expected_layout,
output_sample_rate,
output_frame_size,
input_sample_rate=24000,
)
self.audio_queue = asyncio.Queue()
self.video_queue = asyncio.Queue()
self.quit = asyncio.Event()
self.session = None
self.last_frame_time = 0
def copy(self) -> "VideoChatHandler":
return VideoChatHandler(
expected_layout=self.expected_layout,
output_sample_rate=self.output_sample_rate,
output_frame_size=self.output_frame_size,
)
    # Handle video frames uploaded from the client
async def video_receive(self, frame: np.ndarray):
newFrame = np.array(frame)
newFrame[0:, :, 0] = 255 - newFrame[0:, :, 0]
self.video_queue.put_nowait(newFrame)
    # Prepare video frames for the server to send down
async def video_emit(self) -> VideoEmitType:
return await self.video_queue.get()
    # Handle audio uploaded from the client
async def receive(self, frame: tuple[int, np.ndarray]) -> None:
frame_size, array = frame
self.audio_queue.put_nowait(array)
    # Prepare audio for the server to send down
async def emit(self) -> AudioEmitType:
if not self.args_set.is_set():
await self.wait_for_args()
array = await self.audio_queue.get()
return (self.output_sample_rate, array)
def shutdown(self) -> None:
self.quit.set()
self.connection = None
self.args_set.clear()
self.quit.clear()
css = """
footer {
display: none !important;
}
"""
with gr.Blocks(css=css) as demo:
webrtc = WebRTC(
label="Video Chat",
modality="audio-video",
mode="send-receive",
video_chat=True,
elem_id="video-source",
)
webrtc.stream(
VideoChatHandler(),
inputs=[webrtc],
outputs=[webrtc],
time_limit=150,
concurrency_limit=2,
)
if __name__ == "__main__":
demo.launch()
```
### Object Detection
```python
from fastrtc import Stream
import gradio as gr
import cv2
from huggingface_hub import hf_hub_download
from .inference import YOLOv10

model_file = hf_hub_download(
    repo_id="onnx-community/yolov10n", filename="onnx/model.onnx"
)
# git clone https://huggingface.co/spaces/fastrtc/object-detection
# for the YOLOv10 implementation
model = YOLOv10(model_file)

def detection(image, conf_threshold=0.3):
    image = cv2.resize(image, (model.input_width, model.input_height))
    new_image = model.detect_objects(image, conf_threshold)
    return cv2.resize(new_image, (500, 500))

stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    additional_inputs=[
        gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
    ]
)
```
## Deployment
When deploying in a cloud environment (Hugging Face Spaces, EC2, etc.), you need to set up a TURN server to relay WebRTC traffic. The easiest way is to use a service such as Twilio; deployments in mainland China will need a suitable alternative.
```python
from twilio.rest import Client
import os

account_sid = os.environ.get("TWILIO_ACCOUNT_SID")
auth_token = os.environ.get("TWILIO_AUTH_TOKEN")
client = Client(account_sid, auth_token)

token = client.tokens.create()
rtc_configuration = {
    "iceServers": token.ice_servers,
    "iceTransportPolicy": "relay",
}

with gr.Blocks() as demo:
    ...
    rtc = WebRTC(rtc_configuration=rtc_configuration, ...)
    ...
```
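Upstream fastrtc also ships a helper that builds this Twilio configuration for you; assuming this fork keeps the upstream utility, the manual token code above can be replaced with:
```python
from fastrtc import Stream, get_twilio_turn_credentials

stream = Stream(
    handler=detection,
    modality="video",
    mode="send-receive",
    # Reads TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN from the environment
    rtc_configuration=get_twilio_turn_credentials(),
)
```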
## Running the Stream
### Gradio
```py
stream.ui.launch()
```
### Telephone (Audio Only)
```py
stream.fastphone()
```
### FastAPI
```py
from fastapi import FastAPI
from fastapi.responses import HTMLResponse

app = FastAPI()
stream.mount(app)
# Optional: Add routes
@app.get("/")
async def _():
return HTMLResponse(content=open("index.html").read())
# uvicorn app:app --host 0.0.0.0 --port 8000
```
## Contributors
- [csxh47](https://github.com/xhup)
- [bingochaos](https://github.com/bingochaos)
- [sudowind](https://github.com/sudowind)
- [emililykimura](https://github.com/emililykimura)
- [Tony](https://github.com/raidios)
- [Cheng Gang](https://github.com/lovepope)