Fix typos (#330)
@@ -108,7 +108,7 @@ stream = Stream(

## Audio Icon

You can display an icon of your choice instead of the default wave animation for audio streaming.

-Pass any local path or url to an image (svg, png, jpeg) to the components `icon` parameter. This will display the icon as a circular button. When audio is sent or recevied (depending on the `mode` parameter) a pulse animation will emanate from the button.
+Pass any local path or url to an image (svg, png, jpeg) to the components `icon` parameter. This will display the icon as a circular button. When audio is sent or received (depending on the `mode` parameter) a pulse animation will emanate from the button.

You can control the button color and pulse color with `icon_button_color` and `pulse_color` parameters. They can take any valid css color.
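For context on the parameters this hunk touches, a minimal sketch of how `icon`, `icon_button_color`, and `pulse_color` might be wired up; the image path and colors are placeholder values, not taken from the docs:

```python
from fastrtc import WebRTC

# Sketch: swap the default wave animation for a custom circular icon button.
audio = WebRTC(
    modality="audio",
    mode="send-receive",
    icon="mic.svg",               # placeholder: any local path or URL to an svg/png/jpeg
    icon_button_color="#1f2937",  # any valid CSS color
    pulse_color="#22c55e",        # any valid CSS color
)
```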
@@ -89,7 +89,7 @@ Stream(

An easy way to do this is to use a service like Twilio.

-Create a **free** [account](https://login.twilio.com/u/signup) and the install the `twilio` package with pip (`pip install twilio`). You can then connect from the WebRTC component like so:
+Create a **free** [account](https://login.twilio.com/u/signup) and then install the `twilio` package with pip (`pip install twilio`). You can then connect from the WebRTC component like so:

```python
from fastrtc import Stream
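The hunk cuts the code block off after the import. A plausible completion, assuming the docs use fastrtc's `get_twilio_turn_credentials` helper; the echo handler is a placeholder:

```python
from fastrtc import ReplyOnPause, Stream, get_twilio_turn_credentials

def echo(audio):
    # Placeholder handler: stream the caller's audio straight back.
    yield audio

stream = Stream(
    handler=ReplyOnPause(echo),
    modality="audio",
    mode="send-receive",
    # Assumes TWILIO_ACCOUNT_SID and TWILIO_AUTH_TOKEN are set in the
    # environment; the helper returns a TURN-backed RTC configuration.
    rtc_configuration=get_twilio_turn_credentials(),
)
```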
@@ -184,7 +184,7 @@ Learn more about the [Stream](userguide/streams) in the user guide.

## Examples

See the [cookbook](/cookbook).

-Follow and join or [organization](https://huggingface.co/fastrtc) on Hugging Face!
+Follow and join our [organization](https://huggingface.co/fastrtc) on Hugging Face!

<div style="display: flex; flex-direction: row; justify-content: center; align-items: center; max-width: 600px; margin: 0 auto;">
<img style="display: block; height: 100px; margin-right: 20px;" src="/hf-logo-with-title.svg">
@@ -53,7 +53,7 @@ document.querySelectorAll('.tag-button').forEach(button => {

---

Description:

-[Distil-whisper](https://github.com/huggingface/distil-whisper) from Hugging Face wraped in a pypi package for plug and play!
+[Distil-whisper](https://github.com/huggingface/distil-whisper) from Hugging Face wrapped in a pypi package for plug and play!

Install Instructions

```python
@@ -131,4 +131,4 @@ document.querySelectorAll('.tag-button').forEach(button => {

stream.ui.launch()
```

-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/speech_to_text_gallery.md) to add your model to the gallery! Ideally you model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/speech_to_text_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
@@ -125,4 +125,4 @@ document.querySelectorAll('.tag-button').forEach(button => {

stream.ui.launch()
```

-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/text_to_speech_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/text_to_speech_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.
@@ -165,7 +165,7 @@ In this gallery, you can find a collection of turn-taking algorithms and VAD mod

stream.ui.launch()
```

-3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally you model package should be pip installable so other can try it out easily.
+3. Open a [PR](https://github.com/freddyaboulton/fastrtc/edit/main/docs/turn_taking_gallery.md) to add your model to the gallery! Ideally your model package should be pip installable so others can try it out easily.

!!! tip "Package Naming Convention"
    It is recommended to name your package `fastrtc-<package-name>` so developers can easily find it on [pypi](https://pypi.org/search/?q=fastrtc-).
@@ -78,7 +78,7 @@ stream = Stream(

### Startup Function

-You can pass in a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating intial responses.
+You can pass in a `startup_fn` to the `ReplyOnPause` class. This function will be called when the connection is first established. It is helpful for generating initial responses.

```python
from fastrtc import get_tts_model, Stream, ReplyOnPause
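The snippet below the changed line is truncated at the import. A sketch of how `startup_fn` could be used with `ReplyOnPause`; the greeting text and echo handler are illustrative only:

```python
from fastrtc import ReplyOnPause, Stream, get_tts_model

tts = get_tts_model()

def startup():
    # Called once when the connection is established: speak a greeting.
    for chunk in tts.stream_tts_sync("Hello! How can I help you?"):
        yield chunk

def respond(audio):
    yield audio  # placeholder: echo the user's audio back

stream = Stream(
    handler=ReplyOnPause(respond, startup_fn=startup),
    modality="audio",
    mode="send-receive",
)
```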
@@ -138,7 +138,7 @@ The API is similar to `ReplyOnPause` with the addition of a `stop_words` paramet

1. The `stop_words` can be single words or pairs of words. Be sure to include common misspellings of your word for more robust detection, e.g. "llama", "lamma". In my experience, it's best to use two very distinct words like "ok computer" or "hello iris".

!!! tip "Extra Dependencies"
-    The `ReplyOnStopWords` class requires the the `stopword` extra. Run `pip install fastrtc[stopword]` to install it.
+    The `ReplyOnStopWords` class requires the `stopword` extra. Run `pip install fastrtc[stopword]` to install it.

!!! warning "English Only"
    The `ReplyOnStopWords` class is currently only supported for English.
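A sketch of the `stop_words` usage described above, assuming the same `Stream` wiring as `ReplyOnPause`; the handler and word list are illustrative:

```python
from fastrtc import ReplyOnStopWords, Stream

def respond(audio):
    yield audio  # placeholder handler

stream = Stream(
    handler=ReplyOnStopWords(
        respond,
        # Single words or pairs; include common misspellings for robustness.
        stop_words=["ok computer", "hello iris"],
    ),
    modality="audio",
    mode="send-receive",
)
```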
@@ -200,7 +200,7 @@ The API is similar to `ReplyOnPause` with the addition of a `stop_words` paramet

It is also possible to create asynchronous stream handlers. This is very convenient for accessing async APIs from major LLM developers, like Google and OpenAI. The main difference is that `receive`, `emit`, and `start_up` are now defined with `async def`.

-Here is aa simple example of using `AsyncStreamHandler`:
+Here is a simple example of using `AsyncStreamHandler`:

=== "Code"
    ``` py
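The tabbed example itself is cut off. A minimal sketch of the shape an `AsyncStreamHandler` subclass takes, per the sentence above; the queue-based echo logic and the `copy` method reflect assumptions about the base-class contract:

```python
import asyncio

from fastrtc import AsyncStreamHandler

class EchoHandler(AsyncStreamHandler):
    """Async echo: buffers incoming audio and emits it back."""

    def __init__(self):
        super().__init__()
        self.queue: asyncio.Queue = asyncio.Queue()

    async def start_up(self):
        # Runs once per connection; open async LLM sessions here.
        pass

    async def receive(self, frame):
        # frame is the incoming audio chunk from the client.
        await self.queue.put(frame)

    async def emit(self):
        # Awaited repeatedly; return the next chunk to play back.
        return await self.queue.get()

    def copy(self):
        return EchoHandler()
```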
@@ -262,7 +262,7 @@ audio = model.tts("Hello, world!")
```

!!! tip
-    You can customize the audio by passing in an instace of `KokoroTTSOptions` to the method.
+    You can customize the audio by passing in an instance of `KokoroTTSOptions` to the method.
    See [here](https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md) for a list of available voices.

```python
from fastrtc import KokoroTTSOptions, get_tts_model
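A sketch of the customization the tip describes; the `voice` and `speed` values below come from the Kokoro model card and are assumptions here:

```python
from fastrtc import KokoroTTSOptions, get_tts_model

model = get_tts_model()  # Kokoro-based TTS

# Assumed option fields: voice name and playback speed multiplier.
options = KokoroTTSOptions(voice="af_heart", speed=1.0)
audio = model.tts("Hello, world!", options=options)
```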