From b74c372afdf2c3278079c594d3ef7924e6259ebc Mon Sep 17 00:00:00 2001
From: omahs <73983677+omahs@users.noreply.github.com>
Date: Thu, 29 May 2025 20:27:27 +0200
Subject: [PATCH] Fix typos (#330)
---
README.md | 4 ++--
docs/advanced-configuration.md | 2 +-
docs/deployment.md | 2 +-
docs/index.md | 2 +-
docs/speech_to_text_gallery.md | 4 ++--
docs/text_to_speech_gallery.md | 2 +-
docs/turn_taking_gallery.md | 2 +-
docs/userguide/audio.md | 8 ++++----
8 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/README.md b/README.md
index 8dacfc5..f31fa10 100644
--- a/README.md
+++ b/README.md
@@ -35,7 +35,7 @@ pip install "fastrtc[vad, tts]"
- 🔌 Automatic WebRTC Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a webRTC endpoint for your own frontend!
- ⚡️ Websocket Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a websocket endpoint for your own frontend!
- 📞 Automatic Telephone Support - Use the `fastphone()` method of the stream to launch the application and get a free temporary phone number!
-- 🤖 Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app so you can easily extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example on how to serve a custom JS frontend.
+- 🤖 Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app so you can easily extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example of how to serve a custom JS frontend.
## Docs
@@ -153,7 +153,7 @@ See the [Cookbook](https://fastrtc.org/cookbook/) for examples of how to use the
## Usage
-This is an shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).
+This is a shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).
- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.
diff --git a/docs/advanced-configuration.md b/docs/advanced-configuration.md
index 7e3fd8b..615ab79 100644
--- a/docs/advanced-configuration.md
+++ b/docs/advanced-configuration.md
@@ -108,7 +108,7 @@ stream = Stream(
## Audio Icon
You can display an icon of your choice instead of the default wave animation for audio streaming.
-Pass any local path or url to an image (svg, png, jpeg) to the components `icon` parameter. This will display the icon as a circular button. When audio is sent or recevied (depending on the `mode` parameter) a pulse animation will emanate from the button.
+Pass any local path or URL to an image (svg, png, jpeg) to the component's `icon` parameter. This will display the icon as a circular button. When audio is sent or received (depending on the `mode` parameter), a pulse animation will emanate from the button.
You can control the button color and pulse color with `icon_button_color` and `pulse_color` parameters. They can take any valid css color.
diff --git a/docs/deployment.md b/docs/deployment.md
index 41bfc4a..0b88e3b 100644
--- a/docs/deployment.md
+++ b/docs/deployment.md
@@ -89,7 +89,7 @@ Stream(
An easy way to do this is to use a service like Twilio.
-Create a **free** [account](https://login.twilio.com/u/signup) and the install the `twilio` package with pip (`pip install twilio`). You can then connect from the WebRTC component like so:
+Create a **free** [account](https://login.twilio.com/u/signup) and then install the `twilio` package with pip (`pip install twilio`). You can then connect from the WebRTC component like so:
```python
from fastrtc import Stream
diff --git a/docs/index.md b/docs/index.md
index 23d1b83..58a6dba 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -184,7 +184,7 @@ Learn more about the [Stream](userguide/streams) in the user guide.
## Examples
See the [cookbook](/cookbook).
-Follow and join or [organization](https://huggingface.co/fastrtc) on Hugging Face!
+Follow and join our [organization](https://huggingface.co/fastrtc) on Hugging Face!

diff --git a/docs/speech_to_text_gallery.md b/docs/speech_to_text_gallery.md
index d0c03d2..2fb5dcc 100644
--- a/docs/speech_to_text_gallery.md
+++ b/docs/speech_to_text_gallery.md
@@ -53,7 +53,7 @@ document.querySelectorAll('.tag-button').forEach(button => {
---
Description:
- [Distil-whisper](https://github.com/huggingface/distil-whisper) from Hugging Face wraped in a pypi package for plug and play!
+ [Distil-whisper](https://github.com/huggingface/distil-whisper) from Hugging Face wrapped in a pypi package for plug and play!
Install Instructions
```python
@@ -131,4 +131,4 @@ document.querySelectorAll('.tag-button').forEach(button => {
stream.ui.launch()
```
-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/speech_to_text_gallery.md) to add your model to the gallery! Ideally you model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/speech_to_text_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
diff --git a/docs/text_to_speech_gallery.md b/docs/text_to_speech_gallery.md
index 89d13ab..68b7a80 100644
--- a/docs/text_to_speech_gallery.md
+++ b/docs/text_to_speech_gallery.md
@@ -125,4 +125,4 @@ document.querySelectorAll('.tag-button').forEach(button => {
stream.ui.launch()
```
-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/text_to_speech_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/text_to_speech_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
diff --git a/docs/turn_taking_gallery.md b/docs/turn_taking_gallery.md
index bc4a0ba..c0b94b1 100644
--- a/docs/turn_taking_gallery.md
+++ b/docs/turn_taking_gallery.md
@@ -165,7 +165,7 @@ In this gallery, you can find a collection of turn-taking algorithms and VAD mod
stream.ui.launch()
```
-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally you model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
!!! tip "Package Naming Convention"
It is recommended to name your package `fastrtc-` so developers can easily find it on [pypi](https://pypi.org/search/?q=fastrtc-).
diff --git a/docs/userguide/audio.md b/docs/userguide/audio.md
index aeff0d8..a5c1400 100644
--- a/docs/userguide/audio.md
+++ b/docs/userguide/audio.md
@@ -78,7 +78,7 @@ stream = Stream(
### Startup Function
-You can pass in a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating intial responses.
+You can pass in a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating initial responses.
```python
from fastrtc import get_tts_model, Stream, ReplyOnPause
@@ -138,7 +138,7 @@ The API is similar to `ReplyOnPause` with the addition of a `stop_words` paramet
1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".
!!! tip "Extra Dependencies"
- The `ReplyOnStopWords` class requires the the `stopword` extra. Run `pip install fastrtc[stopword]` to install it.
+ The `ReplyOnStopWords` class requires the `stopword` extra. Run `pip install fastrtc[stopword]` to install it.
!!! warning "English Only"
The `ReplyOnStopWords` class is currently only supported for English.
@@ -200,7 +200,7 @@ The API is similar to `ReplyOnPause` with the addition of a `stop_words` paramet
It is also possible to create asynchronous stream handlers. This is very convenient for accessing async APIs from major LLM developers, like Google and OpenAI. The main difference is that `receive`, `emit`, and `start_up` are now defined with `async def`.
-Here is aa simple example of using `AsyncStreamHandler`:
+Here is a simple example of using `AsyncStreamHandler`:
=== "Code"
``` py
@@ -262,7 +262,7 @@ audio = model.tts("Hello, world!")
```
!!! tip
- You can customize the audio by passing in an instace of `KokoroTTSOptions` to the method.
+ You can customize the audio by passing in an instance of `KokoroTTSOptions` to the method.
See [here](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) for a list of available voices.
```python
from fastrtc import KokoroTTSOptions, get_tts_model