docs: add release notes for 0.4.0 (#554)
also fix #557
Mini256 authored Jan 3, 2025
1 parent 812224f commit e605652
Showing 9 changed files with 133 additions and 55 deletions.
2 changes: 1 addition & 1 deletion frontend/app/src/pages/docs/embedding-model.mdx
@@ -46,7 +46,7 @@ To use OpenAI-Like embedding model providers, you need to provide the **base URL
}
```

#### ZhipuAI
#### ZhipuAI BigModel

For example, the embedding API endpoint for ZhipuAI is:

2 changes: 1 addition & 1 deletion frontend/app/src/pages/docs/evaluation.mdx
@@ -1,4 +1,4 @@
# Evaluation
# Evaluation (beta)

The **Evaluation** module is an integral part of AutoFlow's Chat Engine, designed to assess the performance and reliability of the Chat Engine's outputs.

152 changes: 102 additions & 50 deletions frontend/app/src/pages/docs/llm.mdx
@@ -26,53 +26,105 @@ If you want to use the new LLM while answering user queries, you need to switch to

Currently Autoflow supports the following LLM providers:

- [OpenAI](https://platform.openai.com/)
- [Google Gemini](https://gemini.google.com/)
- [Anthropic Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude)
- [Amazon Bedrock](https://aws.amazon.com/bedrock/)
- To use Amazon Bedrock, you'll need to provide a JSON object containing your AWS credentials, as described in the [AWS CLI config global settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-global):
```json
{
"aws_access_key_id": "****",
"aws_secret_access_key": "****",
"aws_region_name": "us-west-2"
}
```
- [Gitee AI](https://ai.gitee.com/serverless-api)
- And all OpenAI-Like models:
- [OpenRouter](https://openrouter.ai/)
- Default config:
```json
{
"api_base": "https://openrouter.ai/api/v1/"
}
```
- [BigModel](https://open.bigmodel.cn/)
- Default config:
```json
{
"api_base": "https://open.bigmodel.cn/api/paas/v4/",
"is_chat_model": true
}
```
- [Ollama](https://ollama.com/)
- Default config:
```json
{
"api_base": "http://localhost:11434"
}
```
- [vLLM](https://docs.vllm.ai/en/stable/)
- Default config:
```json
{
"api_base": "http://localhost:8000/v1/"
}
```
- [Xinference](https://inference.readthedocs.io/en/latest/index.html)
- Default config:
```json
{
"api_base": "http://localhost:9997/v1/"
}
```
### OpenAI

To learn more about OpenAI, please visit [OpenAI](https://platform.openai.com/).

### Google Gemini

To learn more about Google Gemini, please visit [Google Gemini](https://gemini.google.com/).

### Anthropic Vertex AI

To learn more about Anthropic Vertex AI, please visit [Anthropic Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/partner-models/use-claude).

### Amazon Bedrock

To use Amazon Bedrock, you'll need to provide a JSON object containing your AWS credentials, as described in the [AWS CLI config global settings](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html#cli-configure-files-global):

```json
{
"aws_access_key_id": "****",
"aws_secret_access_key": "****",
"aws_region_name": "us-west-2"
}
```

To learn more about Amazon Bedrock, please visit [Amazon Bedrock](https://aws.amazon.com/bedrock/).
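
If you want to sanity-check the credential values outside Autoflow before saving them, the following minimal sketch (an illustration only, assuming the `boto3` package is installed) lists the Bedrock foundation models visible to those credentials:

```python
# Illustrative only: verify the same AWS credential values used in the
# Autoflow config by listing the Bedrock foundation models they can access.
import boto3

credentials = {
    "aws_access_key_id": "****",
    "aws_secret_access_key": "****",
    "aws_region_name": "us-west-2",
}

bedrock = boto3.client(
    "bedrock",
    aws_access_key_id=credentials["aws_access_key_id"],
    aws_secret_access_key=credentials["aws_secret_access_key"],
    region_name=credentials["aws_region_name"],
)

for summary in bedrock.list_foundation_models()["modelSummaries"]:
    print(summary["modelId"])
```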

### Gitee AI

To learn more about Gitee AI, please visit [Gitee AI](https://ai.gitee.com/serverless-api).

### OpenAI-Like

Autoflow also supports providers that conform to the OpenAI API specification.

To use OpenAI-Like LLM providers, you need to provide the **api_base** of the LLM API in the following JSON format in **Advanced Settings**:

```json
{
"api_base": "{api_base_url}"
}
```
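
A quick way to confirm that a provider is in fact OpenAI-compatible is to point an OpenAI-style client at its **api_base**. The sketch below uses the `openai` Python package purely for illustration (it is not required by Autoflow); the base URL, API key, and model name are placeholders for your provider's values:

```python
# Illustrative only: send a minimal chat completion request to an
# OpenAI-compatible endpoint to confirm the api_base and key work.
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1/",  # your provider's api_base
    api_key="your-api-key",                       # your provider's API key
)

response = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```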

#### OpenRouter

Default config:

```json
{
"api_base": "https://openrouter.ai/api/v1/"
}
```

To learn more about OpenRouter, please visit [OpenRouter](https://openrouter.ai/).

#### ZhipuAI BigModel

Default config:

```json
{
"api_base": "https://open.bigmodel.cn/api/paas/v4/",
"is_chat_model": true
}
```

To learn more about BigModel, please visit [BigModel](https://open.bigmodel.cn/).

#### Ollama

Default config:

```json
{
"api_base": "http://localhost:11434"
}
```

To learn more about Ollama, please visit [Ollama](https://ollama.com/).
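
Before adding Ollama in Autoflow, you can check that the local server is reachable and see which models have been pulled. The sketch below is illustrative only and assumes the `requests` package; it calls Ollama's model-listing endpoint at the same **api_base**:

```python
# Illustrative only: confirm the Ollama server at api_base is running
# and list the locally pulled models via its /api/tags endpoint.
import requests

response = requests.get("http://localhost:11434/api/tags", timeout=5)
response.raise_for_status()

for model in response.json().get("models", []):
    print(model["name"])
```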

#### vLLM

Default config:

```json
{
"api_base": "http://localhost:8000/v1/"
}
```

To learn more about vLLM, please visit [vLLM](https://docs.vllm.ai/en/stable/).
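
vLLM's server exposes an OpenAI-compatible API, so you can list the models it is serving before wiring it into Autoflow. The sketch below is illustrative only; the `openai` package and the placeholder API key are assumptions, and if your vLLM server is configured to require an API key, use that key instead:

```python
# Illustrative only: list the models served by a local vLLM
# OpenAI-compatible server before configuring it in Autoflow.
from openai import OpenAI

# vLLM does not need a real OpenAI key; any placeholder works unless the
# server was started with API-key authentication enabled.
client = OpenAI(base_url="http://localhost:8000/v1/", api_key="placeholder")

for model in client.models.list():
    print(model.id)
```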

#### Xinference

Default config:

```json
{
"api_base": "http://localhost:9997/v1/"
}
```

To learn more about Xinference, please visit [Xinference](https://inference.readthedocs.io/en/latest/).
1 change: 0 additions & 1 deletion frontend/app/src/pages/docs/releases

This file was deleted.

File renamed without changes.
File renamed without changes.
File renamed without changes.
@@ -4,10 +4,14 @@

- Rename project to `autoflow`
- Multiple Knowledge Bases support
- Support new LLM providers
- [OpenRouter](../llm.mdx#openrouter)
- [ZhipuAI BigModel](../llm.mdx#zhipuai-bigmodel)
- [Ollama](../llm.mdx#ollama)
- Support new embedding model providers
- [Ollama](../embedding-model.mdx#ollama)
- [OpenAI Like](../embedding-model.mdx#openai-like)
- [ZhipuAI](../embedding-model.mdx#zhipuai)
- Support [OpenAI Like](../embedding-model.mdx#openai-like) embedding model providers
- [ZhipuAI BigModel](../embedding-model.mdx#zhipuai-bigmodel)

## Breaking Changes

23 changes: 23 additions & 0 deletions frontend/app/src/pages/docs/releases/v0.4.0.md
@@ -0,0 +1,23 @@
# Release Notes for v0.4.0 (release candidate)

## Highlights

- Support the [Evaluation (beta)](../evaluation.mdx) tool to evaluate the performance and reliability of the Chat Engine’s outputs.
- Currently supported key metrics:
- Factual Correctness
- Semantic Similarity
- Support new LLM providers
- [Gitee AI](../llm.mdx#gitee-ai)
- Verified new OpenAI-Like providers
- [vLLM](../llm.mdx#vllm)
- [Xinference](../llm.mdx#xinference)
- Support new embedding model providers
- [Gitee AI](../embedding-model.mdx#gitee-ai)
- [Amazon Bedrock](../embedding-model.mdx#amazon-bedrock)

## Improvements

- Limit the upload file size via the `max_upload_file_size` parameter (10MB by default) in site settings
- Support downloading reference files on the chat page

If you are deploying Autoflow using Docker, please follow the [Upgrade](../deploy-with-docker.mdx#upgrade) guide to upgrade your deployment.
