Fix typos (#330)

omahs
2025-05-29 20:27:27 +02:00
committed by GitHub
parent 6f02a2f2a9
commit b74c372afd
8 changed files with 13 additions and 13 deletions


@@ -35,7 +35,7 @@ pip install "fastrtc[vad, tts]"
- 🔌 Automatic WebRTC Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a webRTC endpoint for your own frontend!
- ⚡️ Websocket Support - Use the `.mount(app)` method to mount the stream on a FastAPI app and get a websocket endpoint for your own frontend!
- 📞 Automatic Telephone Support - Use the `fastphone()` method of the stream to launch the application and get a free temporary phone number!
-- 🤖 Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app so you can easily extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example on how to serve a custom JS frontend.
+- 🤖 Completely customizable backend - A `Stream` can easily be mounted on a FastAPI app so you can easily extend it to fit your production application. See the [Talk To Claude](https://huggingface.co/spaces/fastrtc/talk-to-claude) demo for an example of how to serve a custom JS frontend.
## Docs
@@ -153,7 +153,7 @@ See the [Cookbook](https://fastrtc.org/cookbook/) for examples of how to use the
## Usage
-This is an shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).
+This is a shortened version of the official [usage guide](https://freddyaboulton.github.io/gradio-webrtc/user-guide/).
- `.ui.launch()`: Launch a built-in UI for easily testing and sharing your stream. Built with [Gradio](https://www.gradio.app/).
- `.fastphone()`: Get a free temporary phone number to call into your stream. Hugging Face token required.


@@ -108,7 +108,7 @@ stream = Stream(
## Audio Icon
You can display an icon of your choice instead of the default wave animation for audio streaming.
-Pass any local path or url to an image (svg, png, jpeg) to the components `icon` parameter. This will display the icon as a circular button. When audio is sent or recevied (depending on the `mode` parameter) a pulse animation will emanate from the button.
+Pass any local path or url to an image (svg, png, jpeg) to the components `icon` parameter. This will display the icon as a circular button. When audio is sent or received (depending on the `mode` parameter) a pulse animation will emanate from the button.
You can control the button color and pulse color with `icon_button_color` and `pulse_color` parameters. They can take any valid css color.


@@ -89,7 +89,7 @@ Stream(
An easy way to do this is to use a service like Twilio.
-Create a **free** [account](https://login.twilio.com/u/signup) and the install the `twilio` package with pip (`pip install twilio`). You can then connect from the WebRTC component like so:
+Create a **free** [account](https://login.twilio.com/u/signup) and then install the `twilio` package with pip (`pip install twilio`). You can then connect from the WebRTC component like so:
```python
from fastrtc import Stream


@@ -184,7 +184,7 @@ Learn more about the [Stream](userguide/streams) in the user guide.
## Examples
See the [cookbook](/cookbook).
-Follow and join or [organization](https://huggingface.co/fastrtc) on Hugging Face!
+Follow and join our [organization](https://huggingface.co/fastrtc) on Hugging Face!
<div style="display: flex; flex-direction: row; justify-content: center; align-items: center; max-width: 600px; margin: 0 auto;">
<img style="display: block; height: 100px; margin-right: 20px;" src="/hf-logo-with-title.svg">


@@ -53,7 +53,7 @@ document.querySelectorAll('.tag-button').forEach(button => {
---
Description:
-[Distil-whisper](https://github.com/huggingface/distil-whisper) from Hugging Face wraped in a pypi package for plug and play!
+[Distil-whisper](https://github.com/huggingface/distil-whisper) from Hugging Face wrapped in a pypi package for plug and play!
Install Instructions
```python
@@ -131,4 +131,4 @@ document.querySelectorAll('.tag-button').forEach(button => {
stream.ui.launch()
```
-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/speech_to_text_gallery.md) to add your model to the gallery! Ideally you model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/speech_to_text_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.


@@ -125,4 +125,4 @@ document.querySelectorAll('.tag-button').forEach(button => {
stream.ui.launch()
```
-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/text_to_speech_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/text_to_speech_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.


@@ -165,7 +165,7 @@ In this gallery, you can find a collection of turn-taking algorithms and VAD mod
stream.ui.launch()
```
-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally you model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
!!! tip "Package Naming Convention"
It is recommended to name your package `fastrtc-<package-name>` so developers can easily find it on [pypi](https://pypi.org/search/?q=fastrtc-).


@@ -78,7 +78,7 @@ stream = Stream(
### Startup Function
-You can pass in a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating intial responses.
+You can pass in a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating initial responses.
```python
from fastrtc import get_tts_model, Stream, ReplyOnPause
@@ -138,7 +138,7 @@ The API is similar to `ReplyOnPause` with the addition of a `stop_words` paramet
1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".
!!! tip "Extra Dependencies"
-    The `ReplyOnStopWords` class requires the the `stopword` extra. Run `pip install fastrtc[stopword]` to install it.
+    The `ReplyOnStopWords` class requires the `stopword` extra. Run `pip install fastrtc[stopword]` to install it.
!!! warning "English Only"
The `ReplyOnStopWords` class is currently only supported for English.
@@ -200,7 +200,7 @@ The API is similar to `ReplyOnPause` with the addition of a `stop_words` paramet
It is also possible to create asynchronous stream handlers. This is very convenient for accessing async APIs from major LLM developers, like Google and OpenAI. The main difference is that `receive`, `emit`, and `start_up` are now defined with `async def`.
-Here is aa simple example of using `AsyncStreamHandler`:
+Here is a simple example of using `AsyncStreamHandler`:
=== "Code"
``` py
@@ -262,7 +262,7 @@ audio = model.tts("Hello, world!")
```
!!! tip
-    You can customize the audio by passing in an instace of `KokoroTTSOptions` to the method.
+    You can customize the audio by passing in an instance of `KokoroTTSOptions` to the method.
See [here](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) for a list of available voices.
```python
from fastrtc import KokoroTTSOptions, get_tts_model