Merge remote-tracking branch 'origin/main' into open-avatar-chat-0.4.0

bingochaos
2025-06-17 20:39:40 +08:00
142 changed files with 117010 additions and 814 deletions

View File

@@ -108,7 +108,7 @@ stream = Stream(
## Audio Icon
You can display an icon of your choice instead of the default wave animation for audio streaming.
Pass any local path or url to an image (svg, png, jpeg) to the components `icon` parameter. This will display the icon as a circular button. When audio is sent or received (depending on the `mode` parameter) a pulse animation will emanate from the button.
You can control the button color and pulse color with `icon_button_color` and `pulse_color` parameters. They can take any valid css color.
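As an illustration, the parameters might be combined like this on the `WebRTC` component (the icon path and colors below are placeholder values):
```python
from fastrtc import WebRTC

# Placeholder icon path and CSS colors; any local path/URL and valid CSS colors work
webrtc = WebRTC(
    modality="audio",
    mode="send-receive",
    icon="mic_icon.png",            # shown as a circular button instead of the waveform
    icon_button_color="#2b6cb0",    # button color
    pulse_color="rgba(43, 108, 176, 0.4)",  # color of the pulse animation
)
```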

View File

@@ -23,6 +23,7 @@ A collection of applications built with FastRTC. Click on the tags below to find
<button class="tag-button" data-tag="audio"><code>Audio</code></button>
<button class="tag-button" data-tag="video"><code>Video</code></button>
<button class="tag-button" data-tag="llm"><code>LLM</code></button>
<button class="tag-button" data-tag="text"><code>Text</code></button>
<button class="tag-button" data-tag="computer-vision"><code>Computer Vision</code></button>
<button class="tag-button" data-tag="real-time-api"><code>Real-time API</code></button>
<button class="tag-button" data-tag="voice-chat"><code>Voice Chat</code></button>
@@ -61,6 +62,32 @@ document.querySelectorAll('.tag-button').forEach(button => {
<div class="grid cards" markdown>
- :speaking_head:{ .lg .middle }:llama:{ .lg .middle } __Talk to Llama 4__
{: data-tags="audio,llm,voice-chat"}
---
Talk to Llama 4 using Groq + Cloudflare.
<video width=98% src="https://github.com/user-attachments/assets/f6d09e47-5e40-4296-b6cd-11d7f68baee2" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/talk-to-llama4)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/talk-to-llama4/blob/main/app.py)
- :speaking_head:{ .lg .middle }:llama:{ .lg .middle } __Integrated Textbox__
{: data-tags="audio,llm,text,voice-chat"}
---
Talk or type to any LLM with FastRTC's integrated audio + text textbox.
<video width=98% src="https://github.com/user-attachments/assets/35c982a1-4a58-4947-af89-7ff287070ef5" controls style="text-align: center"></video>
[:octicons-arrow-right-24: Demo](https://huggingface.co/spaces/fastrtc/integrated-textbox)
[:octicons-code-16: Code](https://huggingface.co/spaces/fastrtc/integrated-textbox/blob/main/app.py)
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } __Gemini Audio Video Chat__
{: data-tags="audio,video,real-time-api"}

View File

@@ -3,27 +3,78 @@ When deploying in cloud environments with firewalls (like Hugging Face Spaces, R
!!! tip
The `rtc_configuration` parameter of the `Stream` class can also be passed to the [`WebRTC`](../userguide/gradio) component directly if you're building a standalone Gradio app.
## Cloudflare Calls API
Cloudflare also offers a managed TURN server with [Cloudflare Calls](https://www.cloudflare.com/en-au/developer-platform/products/cloudflare-calls/).
### With a Hugging Face Token
Cloudflare and Hugging Face have partnered to allow you to stream 10 GB of WebRTC traffic per month for free with a Hugging Face account!
```python
from fastrtc import Stream, get_cloudflare_turn_credentials_async, get_cloudflare_turn_credentials
# Make sure the HF_TOKEN environment variable is set
# Or pass in a callable with all arguments set
# make sure you don't commit your token to git!
TOKEN = "hf_..."
async def get_credentials():
    return await get_cloudflare_turn_credentials_async(hf_token=TOKEN)

stream = Stream(
    handler=...,
    rtc_configuration=get_credentials,
    server_rtc_configuration=get_cloudflare_turn_credentials(ttl=360_000),
    modality="audio",
    mode="send-receive",
)
```
!!! tip
Setting an RTC configuration on the server is recommended but not required. It's good practice to set short-lived credentials on the client (the default `ttl` is 10 minutes when calling `get_cloudflare_turn_credentials*`), but you can share the same credentials between server and client.
### With a Cloudflare API Token
Once you have exhausted your monthly quota, you can create a **free** Cloudflare account.
Create an [account](https://developers.cloudflare.com/fundamentals/setup/account/create-account/) and head to the [Calls section in your dashboard](https://dash.cloudflare.com/?to=/:account/calls).
Choose `Create -> TURN App`, give it a name (like `fastrtc-demo`), and then hit the Create button.
Take note of the Turn Token ID (often exported as `TURN_KEY_ID`) and API Token (exported as `TURN_KEY_API_TOKEN`).
You can then connect from the WebRTC component like so:
```python
from fastrtc import Stream, get_cloudflare_turn_credentials_async
# Make sure the TURN_KEY_ID and TURN_KEY_API_TOKEN environment variables are set
stream = Stream(
    handler=...,
    rtc_configuration=get_cloudflare_turn_credentials_async,
    modality="audio",
    mode="send-receive",
)
```
## Community Server (Deprecated)
Hugging Face graciously provides 10 GB of TURN traffic through Cloudflare's global network.
In order to use it, you need to first create a Hugging Face account by going to [huggingface.co](https://huggingface.co/).
Then navigate to this [space](https://huggingface.co/spaces/fastrtc/turn-server-login) and follow the instructions on the page. You just have to click the "Log in" button and then the "Sign Up" button.
![turn_login](https://github.com/user-attachments/assets/cefa8dec-487e-47d8-bb96-1a14a701f6e5)
Then you can create an [access token](https://huggingface.co/docs/hub/en/security-tokens).
Then you can use the `get_hf_turn_credentials` helper to get your credentials:
```python
from fastrtc import get_hf_turn_credentials, Stream
# Make sure the HF_TOKEN environment variable is set
Stream(
    handler=...,
    rtc_configuration=get_hf_turn_credentials,
    modality="audio",
    mode="send-receive"
)
@@ -31,15 +82,14 @@ Stream(
!!! warning
This is a shared resource so we make no latency/availability guarantees.
For more robust options, see the Twilio, Cloudflare and self-hosting options below.
This function is now deprecated. Please use `get_cloudflare_turn_credentials` instead.
## Twilio API
An easy way to do this is to use a service like Twilio.
Create a **free** [account](https://login.twilio.com/u/signup) and then install the `twilio` package with pip (`pip install twilio`). You can then connect from the WebRTC component like so:
```python
from fastrtc import Stream
@@ -78,50 +128,6 @@ Stream(
rtc_configuration = get_twilio_turn_credentials()
```
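For completeness, a fuller sketch combining the helper with the `Stream` class (assuming the `TWILIO_ACCOUNT_SID` and `TWILIO_AUTH_TOKEN` environment variables are set):
```python
from fastrtc import Stream, get_twilio_turn_credentials

# get_twilio_turn_credentials() reads TWILIO_ACCOUNT_SID / TWILIO_AUTH_TOKEN
# from the environment unless credentials are passed explicitly
stream = Stream(
    handler=...,
    rtc_configuration=get_twilio_turn_credentials(),
    modality="audio",
    mode="send-receive",
)
```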
## Cloudflare Calls API
Cloudflare also offers a managed TURN server with [Cloudflare Calls](https://www.cloudflare.com/en-au/developer-platform/products/cloudflare-calls/).
Create a **free** [account](https://developers.cloudflare.com/fundamentals/setup/account/create-account/) and head to the [Calls section in your dashboard](https://dash.cloudflare.com/?to=/:account/calls).
Choose `Create -> TURN App`, give it a name (like `fastrtc-demo`), and then hit the Create button.
Take note of the Turn Token ID (often exported as `TURN_KEY_ID`) and API Token (exported as `TURN_KEY_API_TOKEN`).
You can then connect from the WebRTC component like so:
```python
from fastrtc import Stream
import requests
import os

turn_key_id = os.environ.get("TURN_KEY_ID")
turn_key_api_token = os.environ.get("TURN_KEY_API_TOKEN")
ttl = 86400  # Can modify TTL, here it's set to 24 hours

response = requests.post(
    f"https://rtc.live.cloudflare.com/v1/turn/keys/{turn_key_id}/credentials/generate-ice-servers",
    headers={
        "Authorization": f"Bearer {turn_key_api_token}",
        "Content-Type": "application/json",
    },
    json={"ttl": ttl},
)
if response.ok:
    rtc_configuration = response.json()
else:
    raise Exception(
        f"Failed to get TURN credentials: {response.status_code} {response.text}"
    )

stream = Stream(
    handler=...,
    rtc_configuration=rtc_configuration,
    modality="audio",
    mode="send-receive",
)
```
## Self Hosting
We have developed a script that can automatically deploy a TURN server to Amazon Web Services (AWS). You can follow the instructions [here](https://github.com/freddyaboulton/turn-server-deploy) or this guide.

View File

@@ -8,7 +8,7 @@
<div style="display: flex; flex-direction: row; justify-content: center">
<img style="display: block; padding-right: 5px; height: 20px;" alt="Static Badge" src="https://img.shields.io/pypi/v/fastrtc">
<a href="https://github.com/freddyaboulton/fastrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
<a href="https://github.com/gradio-app/fastrtc" target="_blank"><img alt="Static Badge" src="https://img.shields.io/badge/github-white?logo=github&logoColor=black"></a>
</div>
<h3 style='text-align: center'>
@@ -184,7 +184,7 @@ Learn more about the [Stream](userguide/streams) in the user guide.
## Examples
See the [cookbook](/cookbook).
Follow and join our [organization](https://huggingface.co/fastrtc) on Hugging Face!
<div style="display: flex; flex-direction: row; justify-content: center; align-items: center; max-width: 600px; margin: 0 auto;">
<img style="display: block; height: 100px; margin-right: 20px;" src="/hf-logo-with-title.svg">

View File

@@ -0,0 +1,267 @@
# TURN Credential Utils
## `get_turn_credentials_async`
```python
async def get_turn_credentials_async(
    method: Literal["hf", "twilio", "cloudflare"] = "cloudflare",
    **kwargs
):
```
Retrieves TURN credentials from the specified provider.
This can be passed directly to the Stream class and it will be called for each
unique WebRTC connection via the Gradio UI. When mounting to FastAPI, call this function
yourself to return the credentials to the frontend client. For example, in the
index route you can call this function and embed the credentials in the source code of the index.html.
See the FastRTC spaces at hf.co/fastrtc for an example.
Acts as a dispatcher function to call the appropriate credential retrieval
function based on the method specified.
Args:
```
method: Literal["hf", "twilio", "cloudflare"] | None
The provider to use. 'hf' uses the deprecated Hugging Face endpoint.
'cloudflare' uses either Cloudflare keys or the HF endpoint.
'twilio' uses the Twilio API. Defaults to "cloudflare".
**kwargs:
Additional keyword arguments passed directly to the underlying
provider-specific function (e.g., `token`, `ttl` for 'hf';
`twilio_sid`, `twilio_token` for 'twilio'; `turn_key_id`,
`turn_key_api_token`, `hf_token`, `ttl` for 'cloudflare').
```
Returns:
```
dict:
A dictionary containing the TURN credentials from the chosen provider.
```
Raises:
```
ValueError:
If an invalid method is specified.
Also raises exceptions from the underlying provider functions (see their
docstrings).
```
Example
```python
>>> from fastrtc import get_turn_credentials_async, Stream
>>> credentials = await get_turn_credentials_async()
>>> print(credentials)
>>> # Can pass directly to stream class
>>> stream = Stream(..., rtc_configuration=get_turn_credentials_async)
```
## `get_turn_credentials`
```python
def get_turn_credentials(
    method: Literal["hf", "twilio", "cloudflare"] = "cloudflare",
    **kwargs
):
```
Retrieves TURN credentials from the specified provider.
This can be passed directly to the Stream class and it will be called for each
unique WebRTC connection via the Gradio UI. When mounting to FastAPI, call this function
yourself to return the credentials to the frontend client. For example, in the
index route you can call this function and embed the credentials in the source code of the index.html.
See the FastRTC spaces at hf.co/fastrtc for an example.
Acts as a dispatcher function to call the appropriate credential retrieval
function based on the method specified.
Args:
```
method: Literal["hf", "twilio", "cloudflare"] | None
The provider to use. 'hf' uses the deprecated Hugging Face endpoint.
'cloudflare' uses either Cloudflare keys or the HF endpoint.
'twilio' uses the Twilio API. Defaults to "cloudflare".
**kwargs:
Additional keyword arguments passed directly to the underlying
provider-specific function (e.g., `token`, `ttl` for 'hf';
`twilio_sid`, `twilio_token` for 'twilio'; `turn_key_id`,
`turn_key_api_token`, `hf_token`, `ttl` for 'cloudflare').
```
Returns:
```
dict:
A dictionary containing the TURN credentials from the chosen provider.
```
Raises:
```
ValueError:
If an invalid method is specified.
Also raises exceptions from the underlying provider functions (see their
docstrings).
```
Example
```python
>>> from fastrtc import get_turn_credentials, Stream
>>> credentials = get_turn_credentials()
>>> print(credentials)
>>> # Can pass directly to stream class
>>> stream = Stream(..., rtc_configuration=get_turn_credentials)
```
## `get_cloudflare_turn_credentials_async`
```python
async def get_cloudflare_turn_credentials_async(
    turn_key_id=None,
    turn_key_api_token=None,
    hf_token=None,
    ttl=600,
    client: httpx.AsyncClient | None = None,
):
```
Asynchronously retrieves TURN credentials from Cloudflare or Hugging Face.
Asynchronously fetches TURN server credentials either directly from Cloudflare
using API keys or via the Hugging Face TURN endpoint using an HF token. The HF
token method takes precedence if provided.
Args:
```
turn_key_id (str, optional):
Cloudflare TURN key ID. Defaults to None,
in which case the CLOUDFLARE_TURN_KEY_ID environment variable is used.
turn_key_api_token (str, optional):
Cloudflare TURN key API token.
Defaults to None, in which case the CLOUDFLARE_TURN_KEY_API_TOKEN
environment variable is used.
hf_token (str, optional):
Hugging Face API token. If provided, this method
is used instead of Cloudflare keys.
Defaults to None, in which case the HF_TOKEN environment variable is used.
ttl (int, optional): Time-to-live for the credentials in seconds.
Defaults to 600.
client (httpx.AsyncClient | None, optional): An existing httpx async client
to use for the request. If None, a new client is created per request.
Defaults to None.
```
Returns:
```
dict: A dictionary containing the TURN credentials (ICE servers).
```
Raises:
```
ValueError: If neither HF token nor Cloudflare keys (either as arguments
or environment variables) are provided.
Exception: If the request to the credential server fails.
```
Example
```python
>>> from fastrtc import get_cloudflare_turn_credentials_async, Stream
>>> credentials = await get_cloudflare_turn_credentials_async()
>>> print(credentials)
>>> # Can pass directly to stream class
>>> stream = Stream(..., rtc_configuration=get_cloudflare_turn_credentials_async)
```
## `get_cloudflare_turn_credentials`
```python
def get_cloudflare_turn_credentials(
    turn_key_id=None,
    turn_key_api_token=None,
    hf_token=None,
    ttl=600,
    client: httpx.AsyncClient | None = None,
):
```
Retrieves TURN credentials from Cloudflare or Hugging Face.
Fetches TURN server credentials either directly from Cloudflare using API keys
or via the Hugging Face TURN endpoint using an HF token. The HF token method
takes precedence if provided.
Args:
```
turn_key_id (str, optional):
Cloudflare TURN key ID. Defaults to None,
in which case the CLOUDFLARE_TURN_KEY_ID environment variable is used.
turn_key_api_token (str, optional):
Cloudflare TURN key API token.
Defaults to None, in which case the CLOUDFLARE_TURN_KEY_API_TOKEN
environment variable is used.
hf_token (str, optional):
Hugging Face API token. If provided, this method
is used instead of Cloudflare keys.
Defaults to None, in which case the HF_TOKEN environment variable is used.
ttl (int, optional): Time-to-live for the credentials in seconds.
Defaults to 600.
client (httpx.AsyncClient | None, optional): An existing httpx async client
to use for the request. If None, a new client is created per request.
Defaults to None.
```
Returns:
```
dict: A dictionary containing the TURN credentials (ICE servers).
```
Raises:
```
ValueError: If neither HF token nor Cloudflare keys (either as arguments
or environment variables) are provided.
Exception: If the request to the credential server fails.
```
Example
```python
>>> from fastrtc import get_cloudflare_turn_credentials, Stream
>>> credentials = get_cloudflare_turn_credentials()
>>> print(credentials)
>>> # Can pass directly to stream class
>>> stream = Stream(..., rtc_configuration=get_cloudflare_turn_credentials)
```
## `get_twilio_turn_credentials`
```python
def get_twilio_turn_credentials(
    twilio_sid=None,
    twilio_token=None):
```
Retrieves TURN credentials from Twilio.
Uses the Twilio REST API to generate temporary TURN credentials. Requires
the `twilio` package to be installed.
Args:
```
twilio_sid (str, optional):
Twilio Account SID. Defaults to None, in which
case the TWILIO_ACCOUNT_SID environment variable is used.
twilio_token (str, optional):
Twilio Auth Token. Defaults to None, in which
case the TWILIO_AUTH_TOKEN environment variable is used.
```
Returns:
```
dict:
A dictionary containing the TURN credentials formatted for WebRTC,
including 'iceServers' and 'iceTransportPolicy'.
```
Raises:
```
ImportError: If the `twilio` package is not installed.
ValueError: If Twilio credentials (SID and token) are not provided either
as arguments or environment variables.
TwilioRestException: If the Twilio API request fails.
```
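Example (a sketch mirroring the helpers above; assumes the Twilio environment variables are set)
```python
>>> from fastrtc import get_twilio_turn_credentials, Stream
>>> credentials = get_twilio_turn_credentials()
>>> print(credentials)
>>> # Can pass directly to stream class
>>> stream = Stream(..., rtc_configuration=get_twilio_turn_credentials)
```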

View File

@@ -0,0 +1,326 @@
## `ReplyOnPause` Class
```python
ReplyOnPause(
fn: ReplyFnGenerator,
startup_fn: Callable | None = None,
algo_options: AlgoOptions | None = None,
model_options: ModelOptions | None = None,
can_interrupt: bool = True,
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
model: PauseDetectionModel | None = None,
)
```
A stream handler that processes incoming audio, detects pauses, and triggers a reply function (`fn`) when a pause is detected.
This handler accumulates audio chunks, uses a Voice Activity Detection (VAD) model to determine speech segments, and identifies pauses based on configurable thresholds. Once a pause is detected after speech has started, it calls the provided generator function `fn` with the accumulated audio.
It can optionally run a `startup_fn` at the beginning and supports interruption of the reply function if new audio arrives.
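For orientation, a minimal usage sketch (the `echo` handler below is illustrative, not part of the API):
```python
from fastrtc import ReplyOnPause, Stream

def echo(audio):
    # audio is (sample_rate, np.ndarray) for the user's turn; yield chunks back
    yield audio

stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
)
```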
### Methods
#### `__init__`
```python
__init__(
fn: ReplyFnGenerator,
startup_fn: Callable | None = None,
algo_options: AlgoOptions | None = None,
model_options: ModelOptions | None = None,
can_interrupt: bool = True,
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
model: PauseDetectionModel | None = None,
)
```
Initializes the ReplyOnPause handler.
**Args:**
| Name | Type | Description |
| :------------------- | :---------------------------------- | :------------------------------------------------------------------------------------------------------- |
| `fn` | `ReplyFnGenerator` | The generator function to execute upon pause detection. It receives `(sample_rate, audio_array)` and optionally `*args`. |
| `startup_fn` | `Callable \| None` | An optional function to run once at the beginning. |
| `algo_options` | `AlgoOptions \| None` | Options for the pause detection algorithm. |
| `model_options` | `ModelOptions \| None` | Options for the VAD model. |
| `can_interrupt` | `bool` | If True, incoming audio during `fn` execution will stop the generator and process the new audio. |
| `expected_layout` | `Literal["mono", "stereo"]` | Expected input audio layout ('mono' or 'stereo'). |
| `output_sample_rate` | `int` | The sample rate expected for audio yielded by `fn`. |
| `output_frame_size` | `int \| None` | Deprecated. |
| `input_sample_rate` | `int` | The expected sample rate of incoming audio. |
| `model` | `PauseDetectionModel \| None` | An optional pre-initialized VAD model instance. |
---
#### `receive`
```python
receive(frame: tuple[int, np.ndarray]) -> None
```
Receives an audio frame from the stream. Processes the audio frame using `process_audio`. If a pause is detected, it sets the event. If interruption is enabled and a reply is ongoing, it closes the current generator and clears the processing queue.
**Args:**
| Name | Type | Description |
| :------ | :--------------------- | :---------------------------------------------------------------- |
| `frame` | `tuple[int, np.ndarray]` | A tuple containing the sample rate and the audio frame data. |
---
#### `emit`
```python
emit() -> EmitType | None
```
Produces the next output chunk from the reply generator (`fn`).
This method is called repeatedly after a pause is detected (event is set). If the generator is not already running, it initializes it by calling `fn` with the accumulated audio and any required additional arguments. It then yields the next item from the generator. Handles both sync and async generators. Resets the state upon generator completion or error.
**Returns:**
| Type | Description |
| :--------------- | :------------------------------------------------------------------------------- |
| `EmitType \| None` | The next output item from the generator, or None if no pause event has occurred or the generator is exhausted. |
**Raises:**
* **`Exception`**: Re-raises exceptions occurring within the `fn` generator.
---
#### `start_up`
```python
start_up()
```
Executes the startup function `startup_fn` if provided. Waits for additional arguments if needed before calling `startup_fn`.
---
#### `copy`
```python
copy() -> ReplyOnPause
```
Creates a new instance of ReplyOnPause with the same configuration.
**Returns:**
| Type | Description |
| :------------- | :---------------------------------------------------- |
| `ReplyOnPause` | A new `ReplyOnPause` instance with identical settings. |
---
#### `determine_pause`
```python
determine_pause(audio: np.ndarray, sampling_rate: int, state: AppState) -> bool
```
Analyzes an audio chunk to detect if a significant pause occurred after speech.
Uses the VAD model to measure speech duration within the chunk. Updates the application state (`state`) regarding whether talking has started and accumulates speech segments.
**Args:**
| Name | Type | Description |
| :-------------- | :----------- | :-------------------------------------- |
| `audio` | `np.ndarray` | The numpy array containing the audio chunk. |
| `sampling_rate` | `int` | The sample rate of the audio chunk. |
| `state` | `AppState` | The current application state. |
**Returns:**
| Type | Description |
| :----- | :------------------------------------------------------------------------------------------------------ |
| `bool` | True if a pause satisfying the configured thresholds is detected after speech has started, False otherwise. |
---
#### `process_audio`
```python
process_audio(audio: tuple[int, np.ndarray], state: AppState) -> None
```
Processes an incoming audio frame. Appends the frame to the buffer, runs pause detection on the buffer, and updates the application state.
**Args:**
| Name | Type | Description |
| :------ | :--------------------- | :---------------------------------------------------------------- |
| `audio` | `tuple[int, np.ndarray]` | A tuple containing the sample rate and the audio frame data. |
| `state` | `AppState` | The current application state to update. |
---
#### `reset`
```python
reset()
```
Resets the handler state to its initial condition. Clears accumulated audio, resets state flags, closes any active generator, and clears the event flag.
---
#### `trigger_response`
```python
trigger_response()
```
Manually triggers the response generation process. Sets the event flag, effectively simulating a pause detection. Initializes the stream buffer if it's empty.
---
## `ReplyOnStopWords` Class
```python
ReplyOnStopWords(
fn: ReplyFnGenerator,
stop_words: list[str],
startup_fn: Callable | None = None,
algo_options: AlgoOptions | None = None,
model_options: ModelOptions | None = None,
can_interrupt: bool = True,
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
model: PauseDetectionModel | None = None,
)
```
A stream handler that extends `ReplyOnPause` to trigger based on stop words followed by a pause.
This handler listens to the incoming audio stream and performs Speech-to-Text (STT) to detect predefined stop words. Once a stop word is detected, it waits for a subsequent pause in speech (using the VAD model) before triggering the reply function (`fn`) with the audio recorded *after* the stop word.
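For orientation, a minimal usage sketch (the handler function and stop words below are illustrative):
```python
from fastrtc import ReplyOnStopWords, Stream

def respond(audio):
    # audio contains the speech recorded *after* the stop word
    yield audio

stream = Stream(
    handler=ReplyOnStopWords(respond, stop_words=["ok computer"]),
    modality="audio",
    mode="send-receive",
)
```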
### Methods
#### `__init__`
```python
__init__(
fn: ReplyFnGenerator,
stop_words: list[str],
startup_fn: Callable | None = None,
algo_options: AlgoOptions | None = None,
model_options: ModelOptions | None = None,
can_interrupt: bool = True,
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
model: PauseDetectionModel | None = None,
)
```
Initializes the ReplyOnStopWords handler.
**Args:**
*(Inherits Args from `ReplyOnPause.__init__`)*
| Name | Type | Description |
| :----------- | :---------- | :------------------------------------------------------------------------------------------------------- |
| `stop_words` | `list[str]` | A list of strings (words or phrases) to listen for. Detection is case-insensitive and ignores punctuation. |
---
#### `stop_word_detected`
```python
stop_word_detected(text: str) -> bool
```
Checks if any of the configured stop words are present in the text. Performs a case-insensitive search, treating multi-word stop phrases correctly and ignoring basic punctuation.
**Args:**
| Name | Type | Description |
| :----- | :----- | :----------------------------------- |
| `text` | `str` | The text transcribed from the audio. |
**Returns:**
| Type | Description |
| :----- | :---------------------------------------- |
| `bool` | True if a stop word is found, False otherwise. |
---
#### `send_stopword`
```python
send_stopword()
```
Sends a 'stopword' message asynchronously via the communication channel (if configured).
---
#### `determine_pause`
```python
determine_pause(audio: np.ndarray, sampling_rate: int, state: ReplyOnStopWordsState) -> bool
```
Analyzes an audio chunk to detect stop words and subsequent pauses. Overrides the `ReplyOnPause.determine_pause` method. First, it performs STT on the audio buffer to detect stop words. Once a stop word is detected (`state.stop_word_detected` is True), it then uses the VAD model to detect a pause in the audio *following* the stop word.
**Args:**
| Name | Type | Description |
| :-------------- | :---------------------- | :--------------------------------------------------- |
| `audio` | `np.ndarray` | The numpy array containing the audio chunk. |
| `sampling_rate` | `int` | The sample rate of the audio chunk. |
| `state` | `ReplyOnStopWordsState` | The current application state (ReplyOnStopWordsState). |
**Returns:**
| Type | Description |
| :----- | :------------------------------------------------------------------------------------------------------------------------------------- |
| `bool` | True if a stop word has been detected and a subsequent pause satisfying the configured thresholds is detected, False otherwise. |
---
#### `reset`
```python
reset()
```
Resets the handler state to its initial condition. Clears accumulated audio, resets state flags (including stop word state), closes any active generator, and clears the event flag.
---
#### `copy`
```python
copy() -> ReplyOnStopWords
```
Creates a new instance of ReplyOnStopWords with the same configuration.
**Returns:**
| Type | Description |
| :----------------- | :------------------------------------------------------- |
| `ReplyOnStopWords` | A new `ReplyOnStopWords` instance with identical settings. |
*(Inherits other public methods like `start_up`, `process_audio`, `receive`, `trigger_response`, `async_iterate`, `emit` from `ReplyOnPause`)*

docs/reference/stream.md (new file, 196 lines)
View File

@@ -0,0 +1,196 @@
# `Stream` Class
```python
Stream(
handler: HandlerType,
*,
additional_outputs_handler: Callable | None = None,
mode: Literal["send-receive", "receive", "send"] = "send-receive",
modality: Literal["video", "audio", "audio-video"] = "video",
concurrency_limit: int | None | Literal["default"] = "default",
time_limit: float | None = None,
allow_extra_tracks: bool = False,
rtp_params: dict[str, Any] | None = None,
rtc_configuration: dict[str, Any] | None = None,
track_constraints: dict[str, Any] | None = None,
additional_inputs: list[Component] | None = None,
additional_outputs: list[Component] | None = None,
ui_args: UIArgs | None = None
)
```
Define an audio or video stream with a built-in UI, mountable on a FastAPI app.
This class encapsulates the logic for handling real-time communication (WebRTC) streams, including setting up peer connections, managing tracks, generating a Gradio user interface, and integrating with FastAPI for API endpoints. It supports different modes (send, receive, send-receive) and modalities (audio, video, audio-video), and can optionally handle additional Gradio input/output components alongside the stream. It also provides functionality for telephone integration via the FastPhone service.
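For orientation, a minimal sketch of defining a stream and launching the generated UI (the `echo` handler is illustrative):
```python
from fastrtc import ReplyOnPause, Stream

def echo(audio):
    yield audio

stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
)
stream.ui.launch()  # launch the built-in Gradio UI
```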
## Attributes
| Name | Type | Description |
| :----------------------------- | :-------------------------------------------- | :----------------------------------------------------------------------- |
| `mode` | `Literal["send-receive", "receive", "send"]` | The direction of the stream. |
| `modality` | `Literal["video", "audio", "audio-video"]` | The type of media stream. |
| `rtp_params` | `dict[str, Any] \| None` | Parameters for RTP encoding. |
| `event_handler` | `HandlerType` | The main function to process stream data. |
| `concurrency_limit` | `int` | The maximum number of concurrent connections allowed. |
| `time_limit` | `float \| None` | Time limit in seconds for the event handler execution. |
| `allow_extra_tracks` | `bool` | Whether to allow extra tracks beyond the specified modality. |
| `additional_output_components` | `list[Component] \| None` | Extra Gradio output components. |
| `additional_input_components` | `list[Component] \| None` | Extra Gradio input components. |
| `additional_outputs_handler` | `Callable \| None` | Handler for additional outputs. |
| `track_constraints` | `dict[str, Any] \| None` | Constraints for media tracks (e.g., resolution). |
| `webrtc_component` | `WebRTC` | The underlying Gradio WebRTC component instance. |
| `rtc_configuration` | `dict[str, Any] \| None \| Callable` | Configuration for the RTCPeerConnection (e.g., ICE servers). |
| `server_rtc_configuration` | `dict[str, Any] \| None` | Configuration for the RTCPeerConnection (e.g., ICE servers) to be used on the server. |
| `_ui` | `Blocks` | The Gradio Blocks UI instance. |
## Methods
### `mount`
```python
mount(app: FastAPI, path: str = "")
```
Mount the stream's API endpoints onto a FastAPI application.
This method adds the necessary routes (`/webrtc/offer`, `/telephone/handler`, `/telephone/incoming`, `/websocket/offer`) to the provided FastAPI app, prefixed with the optional `path`. It also injects a startup message into the app's lifespan.
**Args:**
| Name | Type | Description |
| :----- | :-------- | :----------------------------------------------- |
| `app` | `FastAPI` | The FastAPI application instance. |
| `path` | `str` | An optional URL prefix for the mounted routes. |
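A short sketch of mounting the stream on an existing app (assuming `stream` has been defined as above):
```python
from fastapi import FastAPI

app = FastAPI()
stream.mount(app)  # adds /webrtc/offer, /telephone/*, /websocket/offer routes
# run with `uvicorn main:app`
```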
---
### `fastphone`
```python
fastphone(
token: str | None = None,
host: str = "127.0.0.1",
port: int = 8000,
**kwargs
)
```
Launch the FastPhone service for telephone integration.
Starts a local FastAPI server, mounts the stream, creates a public tunnel (using Gradio's tunneling), registers the tunnel URL with the FastPhone backend service, and prints the assigned phone number and access code. This allows users to call the phone number and interact with the stream handler.
**Args:**
| Name | Type | Description |
| :------- | :-------------- | :--------------------------------------------------------------------------------------------------------- |
| `token` | `str \| None` | Optional Hugging Face Hub token for authentication with the FastPhone service. If None, attempts to find one automatically. |
| `host` | `str` | The local host address to bind the server to. |
| `port` | `int` | The local port to bind the server to. |
| `**kwargs` | | Additional keyword arguments passed to `uvicorn.run`. |
**Raises:**
* **`httpx.HTTPStatusError`**: If registration with the FastPhone service fails.
* **`RuntimeError`**: If running in Colab/Spaces without `rtc_configuration`.
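A short sketch (assuming `stream` is defined as above and a Hugging Face token is available in the environment):
```python
# Prints the assigned phone number and access code once the tunnel is registered
stream.fastphone()
```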
### `offer`
```python
async offer(body: Body)
```
Handle an incoming WebRTC offer via HTTP POST.
Processes the SDP offer and ICE candidates from the client to establish a WebRTC connection.
**Args:**
| Name | Type | Description |
| :----- | :----- | :------------------------------------------------------------------------------------------------------ |
| `body` | `Body` | A Pydantic model containing the SDP offer, optional ICE candidate, type ('offer'), and a unique WebRTC ID. |
**Returns:**
* A dictionary containing the SDP answer generated by the server.
---
### `handle_incoming_call`
```python
async handle_incoming_call(request: Request)
```
Handle incoming telephone calls (e.g., via Twilio).
Generates TwiML instructions to connect the incoming call to the WebSocket handler (`/telephone/handler`) for audio streaming.
**Args:**
| Name | Type | Description |
| :-------- | :-------- | :----------------------------------------------------------- |
| `request` | `Request` | The FastAPI Request object for the incoming call webhook. |
**Returns:**
* An `HTMLResponse` containing the TwiML instructions as XML.
---
### `telephone_handler`
```python
async telephone_handler(websocket: WebSocket)
```
The WebSocket endpoint for streaming audio over a Twilio phone call.
**Args:**
| Name | Type | Description |
| :---------- | :---------- | :-------------------------------------- |
| `websocket` | `WebSocket` | The incoming WebSocket connection object. |
---
### `websocket_offer`
```python
async websocket_offer(websocket: WebSocket)
```
Establish a WebSocket connection to the Stream.
**Args:**
| Name | Type | Description |
| :---------- | :---------- | :-------------------------------------- |
| `websocket` | `WebSocket` | The incoming WebSocket connection object. |
## Properties
### `ui`
```python
@property
ui() -> Blocks
```
Get the Gradio Blocks UI instance associated with this stream.
**Returns:**
* The `gradio.Blocks` UI instance.
```python
@ui.setter
ui(blocks: Blocks)
```
Set a custom Gradio Blocks UI for this stream.
**Args:**
| Name | Type | Description |
| :------- | :------- | :----------------------------------------------- |
| `blocks` | `Blocks` | The `gradio.Blocks` instance to use as the UI. |

View File

@@ -0,0 +1,420 @@
# Stream Handlers
These abstract base classes define the core interfaces for handling audio and video streams within FastRTC. Concrete handlers like `ReplyOnPause` inherit from these.
## `StreamHandlerBase` Class
```python
StreamHandlerBase(
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
)
```
Base class for handling media streams in FastRTC.
Provides common attributes and methods for managing stream state, communication channels, and basic configuration. This class is intended to be subclassed by concrete stream handlers like `StreamHandler` or `AsyncStreamHandler`.
### Attributes
| Name | Type | Description |
| :------------------- | :---------------------------- | :----------------------------------------------------------------------- |
| `expected_layout` | `Literal["mono", "stereo"]` | The expected channel layout of the input audio ('mono' or 'stereo'). |
| `output_sample_rate` | `int` | The target sample rate for the output audio. |
| `output_frame_size` | `int` | The desired number of samples per output audio frame. |
| `input_sample_rate` | `int` | The expected sample rate of the input audio. |
| `channel` | `DataChannel \| None` | The WebRTC data channel for communication. |
| `channel_set` | `asyncio.Event` | Event indicating if the data channel is set. |
| `args_set` | `asyncio.Event` | Event indicating if additional arguments are set. |
| `latest_args` | `str \| list[Any]` | Stores the latest arguments received. |
| `loop` | `asyncio.AbstractEventLoop` | The asyncio event loop. |
| `phone_mode` | `bool` | Flag indicating if operating in telephone mode. |
### Methods
#### `__init__`
```python
__init__(
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
)
```
Initializes the StreamHandlerBase.
**Args:**
| Name | Type | Description |
| :------------------- | :-------------------------- | :------------------------------------------------------------------- |
| `expected_layout` | `Literal["mono", "stereo"]` | Expected input audio layout ('mono' or 'stereo'). |
| `output_sample_rate` | `int` | Target output audio sample rate. |
| `output_frame_size` | `int \| None` | Deprecated. Frame size is now derived from sample rate. |
| `input_sample_rate` | `int` | Expected input audio sample rate. |
---
#### `clear_queue`
```python
clear_queue()
```
Clears the internal processing queue via the registered callback.
---
#### `send_message`
```python
async send_message(msg: str)
```
Asynchronously sends a message over the data channel.
**Args:**
| Name | Type | Description |
| :---- | :----- | :------------------------ |
| `msg` | `str` | The string message to send. |
---
#### `send_message_sync`
```python
send_message_sync(msg: str)
```
Synchronously sends a message over the data channel. Runs the async `send_message` in the event loop and waits for completion.
**Args:**
| Name | Type | Description |
| :---- | :----- | :------------------------ |
| `msg` | `str` | The string message to send. |
---
#### `reset`
```python
reset()
```
Resets the argument set event.
---
#### `shutdown`
```python
shutdown()
```
Placeholder for shutdown logic. Subclasses can override.
---
## `StreamHandler` Class
```python
StreamHandler(
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
)
```
Abstract base class for synchronous stream handlers.
Inherits from `StreamHandlerBase` and defines the core synchronous interface for processing audio streams. Subclasses must implement `receive`, `emit`, and `copy`.
*(Inherits Attributes and Methods from `StreamHandlerBase`)*
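A minimal subclass sketch (illustrative only; a real handler would do actual processing):
```python
from fastrtc import StreamHandler

class EchoHandler(StreamHandler):
    def __init__(self):
        super().__init__()
        self.buffer = []

    def receive(self, frame):
        # frame is (sample_rate, np.int16 ndarray)
        self.buffer.append(frame)

    def emit(self):
        # return the next buffered frame, or None if nothing is queued
        return self.buffer.pop(0) if self.buffer else None

    def copy(self):
        return EchoHandler()
```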
### Abstract Methods
#### `receive`
```python
@abstractmethod
receive(frame: tuple[int, npt.NDArray[np.int16]]) -> None
```
Process an incoming audio frame synchronously.
**Args:**
| Name | Type | Description |
| :------ | :------------------------------------ | :----------------------------------------------------------------------- |
| `frame` | `tuple[int, npt.NDArray[np.int16]]` | A tuple containing the sample rate (int) and the audio data as a numpy array (int16). |
---
#### `emit`
```python
@abstractmethod
emit() -> EmitType
```
Produce the next output chunk synchronously. This method is called repeatedly to generate the output to be sent back over the stream.
**Returns:**
| Type | Description |
| :--------- | :------------------------------------------------------------------------------------------------------------------------------------- |
| `EmitType` | An output item conforming to `EmitType`, which could be audio data, additional outputs, control signals (like `CloseStream`), or None. |
---
#### `copy`
```python
@abstractmethod
copy() -> StreamHandler
```
Create a copy of this synchronous stream handler instance. Used to create a new handler for each connection.
**Returns:**
| Type | Description |
| :-------------- | :----------------------------------------------------------- |
| `StreamHandler` | A new instance of the concrete StreamHandler subclass. |
---
#### `start_up`
```python
start_up()
```
Optional synchronous startup logic.
---
## `AsyncStreamHandler` Class
```python
AsyncStreamHandler(
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
)
```
Abstract base class for asynchronous stream handlers.
Inherits from `StreamHandlerBase` and defines the core asynchronous interface using coroutines (`async def`) for processing audio streams. Subclasses must implement `receive`, `emit`, and `copy`. The `start_up` method must also be a coroutine.
*(Inherits Attributes and Methods from `StreamHandlerBase`)*
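A minimal async subclass sketch (illustrative only):
```python
import asyncio

from fastrtc import AsyncStreamHandler

class AsyncEchoHandler(AsyncStreamHandler):
    def __init__(self):
        super().__init__()
        self.queue: asyncio.Queue = asyncio.Queue()

    async def receive(self, frame):
        # frame is (sample_rate, np.int16 ndarray)
        await self.queue.put(frame)

    async def emit(self):
        return await self.queue.get()

    async def start_up(self):
        # optional async setup, e.g. opening an API client session
        pass

    def copy(self):
        return AsyncEchoHandler()
```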
### Abstract Methods
#### `receive`
```python
@abstractmethod
async receive(frame: tuple[int, npt.NDArray[np.int16]]) -> None
```
Process an incoming audio frame asynchronously.
**Args:**
| Name | Type | Description |
| :------ | :------------------------------------ | :----------------------------------------------------------------------- |
| `frame` | `tuple[int, npt.NDArray[np.int16]]` | A tuple containing the sample rate (int) and the audio data as a numpy array (int16). |
---
#### `emit`
```python
@abstractmethod
async emit() -> EmitType
```
Produce the next output chunk asynchronously. This coroutine is called to generate the output to be sent back over the stream.
**Returns:**
| Type | Description |
| :--------- | :------------------------------------------------------------------------------------------------------------------------------------- |
| `EmitType` | An output item conforming to `EmitType`, which could be audio data, additional outputs, control signals (like `CloseStream`), or None. |
---
#### `copy`
```python
@abstractmethod
copy() -> AsyncStreamHandler
```
Create a copy of this asynchronous stream handler instance. Used to create a new handler for each connection.
**Returns:**
| Type | Description |
| :------------------- | :--------------------------------------------------------------- |
| `AsyncStreamHandler` | A new instance of the concrete AsyncStreamHandler subclass. |
---
#### `start_up`
```python
async start_up()
```
Optional asynchronous startup logic. Must be a coroutine (`async def`).
---
## `AudioVideoStreamHandler` Class
```python
AudioVideoStreamHandler(
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
)
```
Abstract base class for synchronous handlers processing both audio and video.
Inherits from `StreamHandler` (synchronous audio) and adds abstract methods for handling video frames synchronously. Subclasses must implement the audio methods (`receive`, `emit`) and the video methods (`video_receive`, `video_emit`), as well as `copy`.
*(Inherits Attributes and Methods from `StreamHandler`)*
### Abstract Methods
#### `video_receive`
```python
@abstractmethod
video_receive(frame: VideoFrame) -> None
```
Process an incoming video frame synchronously.
**Args:**
| Name | Type | Description |
| :------ | :----------- | :---------------------------- |
| `frame` | `VideoFrame` | The incoming aiortc `VideoFrame`. |
---
#### `video_emit`
```python
@abstractmethod
video_emit() -> VideoEmitType
```
Produce the next output video frame synchronously.
**Returns:**
| Type | Description |
| :-------------- | :------------------------------------------------------------------------------------------------------- |
| `VideoEmitType` | An output item conforming to `VideoEmitType`, typically a numpy array representing the video frame, or None. |
---
#### `copy`
```python
@abstractmethod
copy() -> AudioVideoStreamHandler
```
Create a copy of this audio-video stream handler instance.
**Returns:**
| Type | Description |
| :---------------------- | :------------------------------------------------------------------- |
| `AudioVideoStreamHandler` | A new instance of the concrete AudioVideoStreamHandler subclass. |
---
## `AsyncAudioVideoStreamHandler` Class
```python
AsyncAudioVideoStreamHandler(
expected_layout: Literal["mono", "stereo"] = "mono",
output_sample_rate: int = 24000,
output_frame_size: int | None = None, # Deprecated
input_sample_rate: int = 48000,
)
```
Abstract base class for asynchronous handlers processing both audio and video.
Inherits from `AsyncStreamHandler` (asynchronous audio) and adds abstract coroutines for handling video frames asynchronously. Subclasses must implement the async audio methods (`receive`, `emit`, `start_up`) and the async video methods (`video_receive`, `video_emit`), as well as `copy`.
*(Inherits Attributes and Methods from `AsyncStreamHandler`)*
### Abstract Methods
#### `video_receive`
```python
@abstractmethod
async video_receive(frame: npt.NDArray[np.float32]) -> None
```
Process an incoming video frame asynchronously.
**Args:**
| Name | Type | Description |
| :------ | :----------------------- | :------------------------------------------------------------------------------------------------------- |
| `frame` | `npt.NDArray[np.float32]` | The video frame data as a numpy array (float32). Note: The type hint differs from the synchronous version. |
---
#### `video_emit`
```python
@abstractmethod
async video_emit() -> VideoEmitType
```
Produce the next output video frame asynchronously.
**Returns:**
| Type | Description |
| :-------------- | :------------------------------------------------------------------------------------------------------- |
| `VideoEmitType` | An output item conforming to `VideoEmitType`, typically a numpy array representing the video frame, or None. |
---
#### `copy`
```python
@abstractmethod
copy() -> AsyncAudioVideoStreamHandler
```
Create a copy of this asynchronous audio-video stream handler instance.
**Returns:**
| Type | Description |
| :--------------------------- | :----------------------------------------------------------------------- |
| `AsyncAudioVideoStreamHandler` | A new instance of the concrete AsyncAudioVideoStreamHandler subclass. |

docs/reference/utils.md (new file, 123 lines)
View File

@@ -0,0 +1,123 @@
# Utils
## `audio_to_bytes`
Convert an audio tuple containing sample rate and numpy array data into bytes.
Useful for sending data to external APIs from `ReplyOnPause` handler.
Parameters
```
audio : tuple[int, np.ndarray]
A tuple containing:
- sample_rate (int): The audio sample rate in Hz
- data (np.ndarray): The audio data as a numpy array
```
Returns
```
bytes
The audio data encoded as bytes, suitable for transmission or storage
```
Example
```python
>>> sample_rate = 44100
>>> audio_data = np.array([0.1, -0.2, 0.3]) # Example audio samples
>>> audio_tuple = (sample_rate, audio_data)
>>> audio_bytes = audio_to_bytes(audio_tuple)
```
## `audio_to_file`
Save an audio tuple containing sample rate and numpy array data to a file.
Parameters
```
audio : tuple[int, np.ndarray]
A tuple containing:
- sample_rate (int): The audio sample rate in Hz
- data (np.ndarray): The audio data as a numpy array
```
Returns
```
str
The path to the saved audio file
```
Example
```python
>>> sample_rate = 44100
>>> audio_data = np.array([0.1, -0.2, 0.3]) # Example audio samples
>>> audio_tuple = (sample_rate, audio_data)
>>> file_path = audio_to_file(audio_tuple)
>>> print(f"Audio saved to: {file_path}")
```
## `aggregate_bytes_to_16bit`
Aggregate bytes to 16-bit audio samples.
This function takes an iterator of chunks and aggregates them into 16-bit audio samples.
It handles incomplete samples and combines them with the next chunk.
Parameters
```
chunks_iterator : Iterator[bytes]
An iterator of byte chunks to aggregate
```
Returns
```
Iterator[NDArray[np.int16]]
An iterator of 16-bit audio samples
```
Example
```python
>>> chunks_iterator = [b'\x00\x01', b'\x02\x03', b'\x04\x05']
>>> for chunk in aggregate_bytes_to_16bit(chunks_iterator):
>>> print(chunk)
```
## `async_aggregate_bytes_to_16bit`
Aggregate bytes to 16-bit audio samples asynchronously.
Parameters
```
chunks_iterator : Iterator[bytes]
An iterator of byte chunks to aggregate
```
Returns
```
Iterator[NDArray[np.int16]]
An iterator of 16-bit audio samples
```
Example
```python
>>> chunks_iterator = [b'\x00\x01', b'\x02\x03', b'\x04\x05']
>>> async for chunk in async_aggregate_bytes_to_16bit(chunks_iterator):
>>> print(chunk)
```
## `wait_for_item`
Wait for an item from an asyncio.Queue with a timeout.
Parameters
```
queue : asyncio.Queue
The queue to wait for an item from
timeout : float
The timeout in seconds
```
Returns
```
Any
The item from the queue or None if the timeout is reached
```
Example
```python
>>> queue = asyncio.Queue()
>>> queue.put_nowait(1)
>>> item = await wait_for_item(queue)
>>> print(item)
```

View File

@@ -53,7 +53,7 @@ document.querySelectorAll('.tag-button').forEach(button => {
---
Description:
[Distil-whisper](https://github.com/huggingface/distil-whisper) from Hugging Face wrapped in a pypi package for plug and play!
Install Instructions
```python
@@ -82,6 +82,21 @@ document.querySelectorAll('.tag-button').forEach(button => {
[:octicons-code-16: Repository](https://github.com/sgarg26/fastrtc-kroko)
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } fastrtc-whisper-cpp
{: data-tags="whisper-cpp"}
---
Description:
[whisper.cpp](https://huggingface.co/ggerganov/whisper.cpp) is the ggml version of OpenAI's Whisper model.
Install Instructions
```python
pip install fastrtc-whisper-cpp
```
Check out the fastrtc-whisper-cpp docs for examples!
[:octicons-code-16: Repository](https://github.com/mahimairaja/fastrtc-whisper-cpp)
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } __Your STT Model__
{: data-tags="pytorch"}
@@ -131,4 +146,4 @@ document.querySelectorAll('.tag-button').forEach(button => {
stream.ui.launch()
```
3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/speech_to_text_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.

View File

@@ -62,7 +62,7 @@ document.querySelectorAll('.tag-button').forEach(button => {
<video src="https://github.com/user-attachments/assets/54dfffc9-1981-4d12-b4d1-eb68ab27e5ad" controls style="text-align: center"></video>
[:octicons-code-16: Repository](https://github.com/freddyaboulton/orpheus-cpp)
- :speaking_head:{ .lg .middle }:eyes:{ .lg .middle } __Your TTS Model__
{: data-tags="pytorch"}
@@ -125,4 +125,4 @@ document.querySelectorAll('.tag-button').forEach(button => {
stream.ui.launch()
```
3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/text_to_speech_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.

View File

@@ -165,7 +165,7 @@ In this gallery, you can find a collection of turn-taking algorithms and VAD mod
stream.ui.launch()
```
3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
!!! tip "Package Naming Convention"
It is recommended to name your package `fastrtc-<package-name>` so developers can easily find it on [pypi](https://pypi.org/search/?q=fastrtc-).

View File

@@ -55,7 +55,7 @@ The `ReplyOnPause` handler can also send the following `log` messages.
```json
{
"type": "log",
"data": "pause_detected" | "response_starting"
"data": "pause_detected" | "response_starting" | "started_talking"
}
```
@@ -403,6 +403,9 @@ WebSocket connections are currently only supported for audio in send-receive mod
To connect to the server via WebSocket, you'll need to establish a WebSocket connection and handle audio processing. The code below assumes there is an HTML audio element for output playback.
The input audio must be mu-law encoded with a sample rate equal to the `input_sample_rate` of the handler you are connecting to. By default it is 48 kHz.
The output audio will also be mu-law encoded, and its sample rate will be equal to the `output_sample_rate` of the handler. By default it is 48 kHz.
\`\`\`javascript
// Setup audio context and stream
const audioContext = new AudioContext();
@@ -441,6 +444,40 @@ ws.onopen = () => {
}
};
};
ws.onmessage = (event) => {
    const data = JSON.parse(event.data);
    if (data?.type === "send_input") {
        fetch('/input_hook', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            // Send additional input data here
            body: JSON.stringify({ webrtc_id: wsId })
        });
    }
    if (data.event === "media") {
        // Process received audio
        const audioData = atob(data.media.payload);
        const mulawData = new Uint8Array(audioData.length);
        for (let i = 0; i < audioData.length; i++) {
            mulawData[i] = audioData.charCodeAt(i);
        }
        // Convert mu-law to linear PCM
        const linearData = alawmulaw.mulaw.decode(mulawData);
        // Create an AudioBuffer
        const audioBuffer = outputContext.createBuffer(1, linearData.length, sampleRate);
        const channelData = audioBuffer.getChannelData(0);
        // Fill the buffer with the decoded data
        for (let i = 0; i < linearData.length; i++) {
            channelData[i] = linearData[i] / 32768.0;
        }
        // Do something with Audio Buffer
    }
};
\`\`\`
{{?}}
`);

View File

@@ -78,7 +78,7 @@ stream = Stream(
### Startup Function
You can pass in a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating initial responses.
```python
from fastrtc import get_tts_model, Stream, ReplyOnPause
@@ -138,7 +138,7 @@ The API is similar to `ReplyOnPause` with the addition of a `stop_words` paramet
1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".
!!! tip "Extra Dependencies"
The `ReplyOnStopWords` class requires the `stopword` extra. Run `pip install fastrtc[stopword]` to install it.
!!! warning "English Only"
The `ReplyOnStopWords` class is currently only supported for English.
@@ -200,7 +200,7 @@ The API is similar to `ReplyOnPause` with the addition of a `stop_words` paramet
It is also possible to create asynchronous stream handlers. This is very convenient for accessing async APIs from major LLM developers, like Google and OpenAI. The main difference is that `receive`, `emit`, and `start_up` are now defined with `async def`.
Here is a simple example of using `AsyncStreamHandler`:
=== "Code"
``` py
@@ -262,7 +262,7 @@ audio = model.tts("Hello, world!")
```
!!! tip
You can customize the audio by passing in an instance of `KokoroTTSOptions` to the method.
See [here](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) for a list of available voices.
```python
from fastrtc import KokoroTTSOptions, get_tts_model
@@ -386,3 +386,48 @@ stream.mount(app)
# run with `uvicorn main:app`
```
### Outbound calls with Twilio
Here's a simple example to call someone using the twilio-python module:
```py
import os

import gradio as gr
from fastapi import FastAPI, Request, WebSocket
from fastapi.responses import HTMLResponse
from twilio.rest import Client

app = FastAPI()

@app.post("/call")
async def start_call(req: Request):
    body = await req.json()
    from_no = body.get("from")
    to_no = body.get("to")
    account_sid = os.getenv("TWILIO_ACCOUNT_SID")
    auth_token = os.getenv("TWILIO_AUTH_TOKEN")
    client = Client(account_sid, auth_token)
    # Use the public URL of your application
    # here we're using ngrok to expose an app
    # running locally
    call = client.calls.create(
        to=to_no,
        from_=from_no,
        url="https://[your_ngrok_subdomain].ngrok.app/incoming-call"
    )
    return {"sid": f"{call.sid}"}

@app.api_route("/incoming-call", methods=["GET", "POST"])
async def handle_incoming_call(req: Request):
    from twilio.twiml.voice_response import VoiceResponse, Connect
    response = VoiceResponse()
    response.say("Connecting to AI assistant")
    connect = Connect()
    connect.stream(url=f'wss://{req.url.hostname}/media-stream')
    response.append(connect)
    return HTMLResponse(content=str(response), media_type="application/xml")

@app.websocket("/media-stream")
async def handle_media_stream(websocket: WebSocket):
    # stream is a FastRTC stream defined elsewhere
    await stream.telephone_handler(websocket)

app = gr.mount_gradio_app(app, stream.ui, path="/")
```

View File

@@ -93,4 +93,24 @@ This is common for displaying a multimodal text/audio conversation in a Chatbot
=== "Notes"
1. Pass your data to `AdditionalOutputs` and yield it.
2. In this case, no audio is being returned, so we set `mode="send"`. However, if we set `mode="send-receive"`, we could also yield generated audio and `AdditionalOutputs`.
3. The `on_additional_outputs` event does not take `inputs`. It's common practice to not run this event on the queue since it is just a quick UI update.
## Integrated Textbox
For audio use cases, you may want to allow your users to type or speak. You can set the `variant="textbox"` argument in the WebRTC component to place a Textbox with a microphone input in the UI. See the `Integrated Textbox` demo in the cookbook or in the `demo` directory of the GitHub repository.
``` py
webrtc = WebRTC(
modality="audio",
mode="send-receive",
variant="textbox",
)
```
!!! tip "Stream Class"
To use the "textbox" variant via the `Stream` class, set it in the `UIArgs` class and pass it to the stream via the `ui_args` parameter.
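A hedged sketch of doing this through the `Stream` class; the `variant` key below is the assumed `UIArgs` field name:
``` py
from fastrtc import ReplyOnPause, Stream

stream = Stream(
    handler=ReplyOnPause(...),
    modality="audio",
    mode="send-receive",
    ui_args={"variant": "textbox"},  # assumed UIArgs field; see the UIArgs docs
)
```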
<video width=98% src="https://github.com/user-attachments/assets/35c982a1-4a58-4947-af89-7ff287070ef5" controls style="text-align: center"></video>

View File

@@ -40,6 +40,7 @@ and set the `mode="receive"` in the `WebRTC` component.
=== "Code"
``` py title="Server-To-Client"
from fastrtc import Stream
import cv2
def generation():
url = "https://download.tsi.telecom-paristech.fr/gpac/dataset/dash/uhd/mux_sources/hevcds_720p30_2M.mp4"