Merge pull request #68 from meysamhadeli/chore/update-default-configs
chore: update default configs
meysamhadeli authored Nov 15, 2024
2 parents 5478f89 + 61d05b3 commit a57911a
Showing 2 changed files with 34 additions and 12 deletions.
38 changes: 30 additions & 8 deletions README.md
@@ -47,22 +47,44 @@ For `PowerShell`, use:
$env:API_KEY="your_api_key"
```
### 🔧 Configuration
-`codai` requires a `config.yml` file in the root of your working directory to analyze your project. By default, the `config.yml` contains the following values:
+`codai` requires a `config.yml` file in the root of your working directory, or the configs below can be set globally through environment variables.

By default, codai works with the `openai` provider, and the `config.yml` contains the following values:

**config.yml - openai sample**
```yml
ai_provider_config:
-provider_name: "openai"
-chat_completion_url: "http://localhost:11434/v1/chat/completions"
+provider_name: "openai"
+chat_completion_url: "https://api.openai.com/v1/chat/completions"
chat_completion_model: "gpt-4o"
-embedding_url: "http://localhost:11434/v1/embeddings" (Optional, If you want use RAG.)
-embedding_model: "text-embedding-3-small" (Optional, If you want use RAG.)
+embedding_url: "https://api.openai.com/v1/embeddings" # (Optional, if you want to use RAG.)
+embedding_model: "text-embedding-3-small" # (Optional, if you want to use RAG.)
temperature: 0.2
-threshold: 0.3 (Optional, If you want use RAG.)
+threshold: 0.3 # (Optional, if you want to use RAG.)
theme: "dracula"
-rag: true (Optional, If you want use RAG.)
+rag: true # (Optional, if you want to use RAG.)
```
Also, to use the `ollama` provider, the `config.yml` contains the following values:

**config.yml - ollama sample**

```yml
ai_provider_config:
provider_name: "ollama"
chat_completion_url: "http://localhost:11434/v1/chat/completions"
chat_completion_model: "llama3.1"
embedding_url: "http://localhost:11434/v1/embeddings" # (Optional, if you want to use RAG.)
embedding_model: "all-minilm:l6-v2" # (Optional, if you want to use RAG.)
temperature: 0.2
threshold: 0.3 # (Optional, if you want to use RAG.)
theme: "dracula"
rag: true # (Optional, if you want to use RAG.)
```
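The `threshold` value controls how similar a retrieved chunk must be to the query before it is included as RAG context. A minimal sketch of that idea in Go (the cosine-similarity filter is an assumption about how a threshold like `0.3` is typically applied, not codai's exact retrieval code):

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length embedding vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// filterByThreshold keeps only the chunks whose embedding clears the
// configured threshold (0.3 in the samples above).
func filterByThreshold(query []float64, chunks map[string][]float64, threshold float64) []string {
	var kept []string
	for name, emb := range chunks {
		if cosine(query, emb) >= threshold {
			kept = append(kept, name)
		}
	}
	return kept
}

func main() {
	query := []float64{1, 0}
	chunks := map[string][]float64{
		"related":   {0.9, 0.1},
		"unrelated": {0, 1},
	}
	fmt.Println(filterByThreshold(query, chunks, 0.3)) // prints [related]
}
```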

> Note: We use the standard [OpenAI APIs](https://platform.openai.com/docs/api-reference/introduction) and [Ollama APIs](https://github.com/ollama/ollama/blob/main/docs/api.md); you can find more details in the documentation of each API.

-If you wish to customize your configuration, you can create your own `config.yml` file and place it in the root directory of each project you want to analyze with codai. If no configuration file is provided, codai will use the default settings.
+If you wish to customize your configuration, you can create your own `config.yml` file and place it in the `root directory` of `each project` you want to analyze with codai. If no `configuration file` is provided, codai will use the `default settings`.

You can also specify a configuration file from any directory by using the following CLI command:
```bash
…
```
8 changes: 4 additions & 4 deletions config/config.go
@@ -19,13 +19,13 @@ type Config struct {

// Default configuration values
var defaultConfig = Config{
Version: "1.0",
Version: "1.6.3",
Theme: "dracula",
RAG: true,
AIProviderConfig: &providers.AIProviderConfig{
ProviderName: "openai",
-EmbeddingURL: "http://localhost:11434/v1/embeddings",
-ChatCompletionURL: "http://localhost:11434/v1/chat/completions",
+EmbeddingURL: "https://api.openai.com/v1/embeddings",
+ChatCompletionURL: "https://api.openai.com/v1/chat/completions",
ChatCompletionModel: "gpt-4o",
EmbeddingModel: "text-embedding-3-small",
Stream: true,
@@ -133,7 +133,7 @@ func InitFlags(rootCmd *cobra.Command) {
rootCmd.PersistentFlags().String("theme", defaultConfig.Theme, "Set customize theme for buffering response from ai. (e.g., 'dracula', 'light', 'dark')")
rootCmd.PersistentFlags().Bool("rag", defaultConfig.RAG, "Enable Retrieval-Augmented Generation (RAG) for enhanced responses using relevant data retrieval (e.g., default is 'enabled' and just retrieve related context base on user request).")
rootCmd.PersistentFlags().StringP("version", "v", defaultConfig.Version, "Specifies the version of the application or service. This helps to track the release or update of the software.")
rootCmd.PersistentFlags().StringP("provider_name", "p", defaultConfig.AIProviderConfig.ProviderName, "Specifies the name of the AI service provider (e.g., 'openai'). This determines which service or API will be used for AI-related functions.")
rootCmd.PersistentFlags().StringP("provider_name", "p", defaultConfig.AIProviderConfig.ProviderName, "Specifies the name of the AI service provider (e.g., 'openai' or 'ollama'). This determines which service or API will be used for AI-related functions.")
rootCmd.PersistentFlags().String("embedding_url", defaultConfig.AIProviderConfig.EmbeddingURL, "The API endpoint used for text embedding requests. This URL points to the server that processes and returns text embeddings.")
rootCmd.PersistentFlags().String("chat_completion_url", defaultConfig.AIProviderConfig.ChatCompletionURL, "The API endpoint for chat completion requests. This URL is where chat messages are sent to receive AI-generated responses.")
rootCmd.PersistentFlags().String("chat_completion_model", defaultConfig.AIProviderConfig.ChatCompletionModel, "The name of the model used for chat completions, such as 'gpt-4o'. Different models offer varying levels of performance and capabilities.")
