
Commit 1da4dcf

nekomeowww and kwaa authored

docs(typescript): add readme (#27)

* docs: unspeech package README.md
* chore: update sdk/typescript/README.md
* chore: update sdk/typescript/README.md
* chore: update README.md
* del: update README.md
* chore: update README.md

Co-authored-by: 藍+85CD <[email protected]>

1 parent df090ee · commit 1da4dcf

File tree

2 files changed: +82 −3 lines


.github/workflows/ci.yaml

Lines changed: 0 additions & 3 deletions
```diff
@@ -8,9 +8,6 @@ on:
     branches:
       - main

-env:
-  STORE_PATH: ""
-
 jobs:
   build-test:
     name: Build Test
```

sdk/typescript/README.md

Lines changed: 82 additions & 0 deletions
# unSpeech TypeScript Client

> Your Text-to-Speech Services, All-in-One.

## Install

```bash
npm i unspeech
```

## Getting Started

### List voices

Besides the `/audio/speech` endpoint, we also support listing all the available voices from providers:

```ts
import { createUnSpeech, listVoices } from 'unspeech'

const unspeech = createUnSpeech('YOUR_EXTERNAL_PROVIDER_API_KEY', 'http://localhost:5933/v1/')

const voices = await listVoices(
  unspeech.voice({ backend: 'elevenlabs' })
)
```
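The shape of the entries returned by `listVoices` depends on the backend. As a rough sketch (the `VoiceEntry` interface below, with its `id`, `name`, and `languages` fields, is an assumption for illustration, not a documented unSpeech type), the list could be narrowed to a target language like this:

```typescript
// Hypothetical voice shape for illustration -- real fields vary by provider.
interface VoiceEntry {
  id: string
  name: string
  languages: string[]
}

// Keep only the voices that advertise the requested BCP 47 language tag.
function voicesForLanguage(voices: VoiceEntry[], lang: string): VoiceEntry[] {
  return voices.filter(voice => voice.languages.includes(lang))
}

// Stub data standing in for a real `listVoices` result:
const sample: VoiceEntry[] = [
  { id: '9BWtsMINqrJLrRacOk9x', name: 'Aria', languages: ['en-US'] },
  { id: 'jp-001', name: 'Haruka', languages: ['ja-JP'] },
]
const english = voicesForLanguage(sample, 'en-US')
```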
### Speech synthesis

For general-purpose `/audio/speech` requests, `@xsai/generate-speech` (xsAI) can be used, since the API is compatible:

```bash
npm i @xsai/generate-speech
```

```ts
import { generateSpeech } from '@xsai/generate-speech'
import { createUnSpeech } from 'unspeech'

const unspeech = createUnSpeech('YOUR_EXTERNAL_PROVIDER_API_KEY', 'http://localhost:5933/v1/')
const speech = await generateSpeech({
  ...unspeech.speech('elevenlabs/eleven_multilingual_v2'),
  voice: '9BWtsMINqrJLrRacOk9x',
  input: 'Hello, World!',
})
```
The other providers can be imported as needed:

```ts
import {
  createUnElevenLabs,
  createUnMicrosoft,
  createUnSpeech,
  createUnAlibabaCloud,
  createUnVolcengine
} from 'unspeech'
```
When using

- [Microsoft / Azure AI Speech service](https://learn.microsoft.com/en-us/azure/ai-services/speech-service/text-to-speech)
- [Alibaba Cloud Model Studio / 阿里云百炼 / CosyVoice](https://www.alibabacloud.com/en/product/modelstudio)
- [Volcano Engine / 火山引擎语音技术](https://www.volcengine.com/product/voice-tech)
- [ElevenLabs](https://elevenlabs.io/docs/api-reference/text-to-speech/convert)

providers, [SSML](https://learn.microsoft.com/en-us/azure/ai-services/speech-service/speech-synthesis-markup) is supported for fine-grained control over pitch, volume, rate, and more.
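As a concrete illustration of what an SSML input can look like (the `toSsml` helper below is hypothetical, not part of unSpeech; `speak`, `voice`, and `prosody` are standard SSML elements, but which attributes a given provider honors varies):

```typescript
// Wrap plain text in a minimal SSML document. This is a sketch of the
// markup shape only -- consult each provider's docs for the exact
// dialect and supported attributes.
function toSsml(voiceName: string, text: string): string {
  return [
    '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">',
    `  <voice name="${voiceName}">`,
    `    <prosody rate="-10%" pitch="+5%">${text}</prosody>`,
    '  </voice>',
    '</speak>',
  ].join('\n')
}

const ssml = toSsml('en-US-AriaNeural', 'Hello, World!')
```

A string built this way would then be sent as the speech request's input instead of plain text.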
## Related Projects

Looking for something like unSpeech, but for local TTS? Check these out:

- [erew123/alltalk_tts/alltalkbeta](https://github.com/erew123/alltalk_tts/tree/alltalkbeta)
- [astramind-ai/Auralis](https://github.com/astramind-ai/Auralis)
- [matatonic/openedai-speech](https://github.com/matatonic/openedai-speech)

Or, for free Edge TTS:

- [travisvn/openai-edge-tts](https://github.com/travisvn/openai-edge-tts)

## License

[AGPL-3.0](./LICENSE)
