Commit c42c7f0

new docs

1 parent e88c19a
2 files changed (+45, -2)

docs/.vitepress/config.ts (+6, -2)
```diff
@@ -44,8 +44,8 @@ export default defineConfig({
       { text: "Home", link: "/" },
       { text: "Guide", link: "/guide/what-is-dialoqbase" },
       {
-        text: "AI Providers",
-        link: "/guide/ai-providers/openai",
+        text: "Use Local AI Models",
+        link: "/guide/localai-model",
       },
       { text: "Self Hosting", link: "/guide/self-hosting" },
     ],
@@ -132,6 +132,10 @@ export default defineConfig({
         text: "AI Providers",
         collapsed: false,
         items: [
+          {
+            text: "Use Local AI Models",
+            link: "/guide/localai-model",
+          },
           {
             text: "OpenAI",
             link: "/guide/ai-providers/openai",
```

docs/guide/localai-model.md (new file, +39)
# Support for Custom AI Models

## Introduction

Starting with version `1.2.0`, our platform supports local AI models that are compatible with the OpenAI API. A notable example of such a local AI deployment is the [LocalAI](https://github.com/go-skynet/LocalAI/) project.

This documentation guides you through the steps needed to integrate a custom model deployed via LocalAI into our system.

## Prerequisites

1. Ensure you've already set up your local AI model using LocalAI or any other platform that offers OpenAI API compatibility (a quick compatibility check is sketched after this list).
2. Admin access to our platform, which is required to add new models.
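
To confirm your endpoint really is OpenAI-compatible before integrating it, you can query its `/v1/models` route, which the OpenAI API defines and LocalAI implements. Below is a minimal sketch in TypeScript; it assumes a runtime with a global `fetch` (Node 18+ or a browser) and LocalAI listening on its default port `8080`:

```ts
// Minimal sketch: verify an OpenAI-compatible endpoint before integrating it.
// Assumes LocalAI (or a similar server) is running locally on port 8080.
const BASE_URL = "http://localhost:8080/v1";

async function listModels(): Promise<void> {
  const res = await fetch(`${BASE_URL}/models`);
  if (!res.ok) {
    throw new Error(`Endpoint returned ${res.status} — not OpenAI-compatible?`);
  }
  // The OpenAI API wraps results as { object: "list", data: [{ id, ... }] }.
  const body = (await res.json()) as { data: { id: string }[] };
  console.log(body.data.map((m) => m.id));
}

listModels().catch(console.error);
```

If this prints a list of model IDs, the endpoint speaks the OpenAI API and is ready to be added below.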

## Integration Steps

### 1. Log in as an Administrator

Make sure you're logged in with administrator rights, since adding models requires those permissions.

### 2. Navigate to Model Settings

- Go to `Settings`.
- From the side menu, select `Models`.
- Click on `Add Models`.

### 3. Add Your Local or Custom Model URL

You'll be presented with a form asking for details about the model you wish to integrate:

- **URL**: The endpoint of your locally deployed AI model.
  - If you're using LocalAI on its default port (`8080`), enter `http://localhost:8080/v1`.
- **Model Selection**: Choose the model you want to use from your local AI instance.
- **Name**: Give your model a recognizable name.

Click `Save` to finish adding your custom model.
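
After saving, you can smoke-test the model directly against the local endpoint using the OpenAI-style chat completions route, which LocalAI supports. The sketch below is illustrative only; `my-local-model` is a placeholder for whatever name your instance reported from `/v1/models`:

```ts
// Minimal sketch: confirm the local model answers OpenAI-style chat requests.
// "my-local-model" is a placeholder — substitute a model ID your instance
// actually exposes.
const BASE_URL = "http://localhost:8080/v1";

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "my-local-model",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const body = (await res.json()) as {
    choices: { message: { content: string } }[];
  };
  return body.choices[0].message.content;
}

chat("Hello!").then(console.log).catch(console.error);
```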

## Conclusion

Once integrated, you can use your custom model the same way you'd use any other model on the platform. This allows for greater flexibility, especially in environments that benefit from localized data processing, since no data needs to be sent over the internet.
