# Video Streaming
## Input/Output Streaming
We already saw this example in the [Quickstart](../../#quickstart) and [Core Concepts](../streams) sections.
=== "Code"
    ``` py title="Input/Output Streaming"
    from fastrtc import Stream
    import gradio as gr

    def detection(image, conf_threshold=0.3): # (1)
        processed_frame = process_frame(image, conf_threshold)
        return processed_frame # (2)

    stream = Stream(
        handler=detection,
        modality="video",
        mode="send-receive", # (3)
        additional_inputs=[
            gr.Slider(minimum=0, maximum=1, step=0.01, value=0.3)
        ],
    )
    ```
    1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
    2. The function must return a numpy array. It can take arbitrary values from other components.
    3. Set `modality="video"` and `mode="send-receive"`.

=== "Notes"

    1. The webcam frame will be represented as a numpy array of shape (height, width, RGB).
    2. The function must return a numpy array. It can take arbitrary values from other components.
    3. Set `modality="video"` and `mode="send-receive"`.

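The `process_frame` helper in the example above is not defined; it stands in for whatever per-frame computation you want to run (for example, an object detector). A minimal hypothetical stand-in, assuming only numpy, that dims pixels whose normalized brightness falls below the slider threshold:

```python
import numpy as np

def process_frame(image: np.ndarray, conf_threshold: float) -> np.ndarray:
    # Hypothetical placeholder for a real detector: keep only pixels whose
    # normalized brightness meets the threshold, so moving the slider
    # visibly changes the output stream.
    brightness = image.mean(axis=-1, keepdims=True) / 255.0
    mask = brightness >= conf_threshold
    return (image * mask).astype(np.uint8)

frame = np.full((480, 640, 3), 128, dtype=np.uint8)  # mid-gray test frame
out = process_frame(frame, 0.3)   # brightness 0.50 >= 0.3 -> frame kept
dark = process_frame(frame, 0.9)  # brightness 0.50 < 0.9 -> all zeros
```

Any function with the same shape-in, shape-out contract can be dropped in; only the `(height, width, 3)` uint8 array in and out matters to the stream.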
## Server-to-Client Only
In this case, we stream from the server to the client, so we write a generator function that yields the next frame of the video (as a numpy array) and set `mode="receive"` in the `Stream`.

=== "Code"
    ``` py title="Server-To-Client"
    import cv2

    from fastrtc import Stream

    def generation():
        url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"
        cap = cv2.VideoCapture(url)
        iterating, frame = cap.read()
        while iterating:
            yield frame
            iterating, frame = cap.read()
        cap.release()

    stream = Stream(
        handler=generation,
        modality="video",
        mode="receive",
    )
    ```
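Any Python generator that yields numpy arrays can serve as the handler; the video URL above is just one source. A synthetic sketch (hypothetical, with frames built in numpy instead of read via OpenCV):

```python
import numpy as np

def generation():
    # Yield 30 synthetic frames fading from black to white.
    for i in range(30):
        level = int(i * 255 / 29)
        yield np.full((480, 640, 3), level, dtype=np.uint8)

frames = list(generation())
```

Each yielded array is sent to the client as one video frame, so the generator fully controls what the client sees.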