Add Mistral AI support to the readme file
François Bouteruche committed May 14, 2024
1 parent edb58ac commit 4b5da36
Showing 1 changed file: README.md (13 additions, 6 deletions)
```diff
@@ -46,15 +46,22 @@ Available on Nuget: https://www.nuget.org/packages/Rockhead.Extensions
     - Support Command v14 Text and Command v14 Light Text
   - `InvokeEmbedV3Async`
 - Meta extension methods
-  - `InvokeLlama2Async`
+  - `InvokeLlamaAsync`
     - An extension method to invoke Llama models to generate text with strongly typed parameters and response
-    - Support Llama 2 13B Chat v1 and Llama 2 70B Chat v1
-  - `InvokeLlama2WithResponseStreamAsync`
+    - Support Llama 2 13B Chat v1, Llama 2 70B Chat v1, Llama 3 8B Instruct and Llama 3 70B Instruct
+  - `InvokeLlamaWithResponseStreamAsync`
     - An extension method to invoke Llama models to generate text with strongly typed parameters, returning an IAsyncEnumerable of strongly typed responses
-    - Support Llama 2 13B Chat v1 and Llama 2 70B Chat v1
+    - Support Llama 2 13B Chat v1, Llama 2 70B Chat v1, Llama 3 8B Instruct and Llama 3 70B Instruct
 - Stability AI extension methods
   - `InvokeStableDiffusionXlForTextToImageAsync`
     - An extension method to invoke Stable Diffusion XL to generate images with strongly typed parameters and response
+- Mistral AI extension methods
+  - `InvokeMistralAsync`
+    - An extension method to invoke Mistral AI models to generate text with strongly typed parameters and response
+    - Support Mistral AI 7B Instruct, Mistral AI 8x7B Instruct and Mistral Large
+  - `InvokeMistralWithResponseStreamAsync`
+    - An extension method to invoke Mistral AI models to generate text with strongly typed parameters, returning an IAsyncEnumerable of strongly typed responses
+    - Support Mistral AI 7B Instruct, Mistral AI 8x7B Instruct and Mistral Large
```
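The new Mistral AI methods follow the same calling pattern as the other providers in this library. A minimal sketch of a non-streaming call, modeled on the Llama example shown later in this README — the config class name (`MistralTextGenerationConfig`), the model type (`Model.MistralLarge`), and the response shape (`Outputs[0].Text`) are assumptions, not confirmed API:

```csharp
// Hypothetical sketch: MistralTextGenerationConfig, Model.MistralLarge, and the
// Outputs/Text response properties are assumed by analogy with the Llama example.
public async Task<string> GetMistralDescription()
{
    const string prompt = "Describe in one sentence what a large language model is";
    var config = new MistralTextGenerationConfig()
    {
        MaxTokens = 1024,
        Temperature = 0.8f
    };

    var response = await BedrockRuntime.InvokeMistralAsync(new Model.MistralLarge(), prompt, config);

    // Assumed response shape: a list of outputs, each carrying generated text.
    return response?.Outputs?.FirstOrDefault()?.Text ?? "";
}
```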

## Setup

```diff
@@ -115,13 +122,13 @@ public async Task<string> GetLlmDescription()
 public async Task<string> GetLlmDescription()
 {
     const string prompt = @"Describe in one sentence what a large language model is";
-    var config = new Llama2TextGenerationConfig()
+    var config = new LlamaTextGenerationConfig()
     {
         MaxGenLen = 2048,
         Temperature = 0.8f
     };
 
-    var response = await BedrockRuntime.InvokeLlama2Async(new Model.Llama270BChatV1(), prompt, config);
+    var response = await BedrockRuntime.InvokeLlamaAsync(new Model.Llama270BChatV1(), prompt, config);
 
     return response?.Generation ?? "";
 }
```
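The streaming variant, `InvokeLlamaWithResponseStreamAsync`, returns an IAsyncEnumerable of strongly typed responses. A sketch of consuming it with `await foreach` — it assumes each streamed chunk exposes a `Generation` property mirroring the non-streaming response above, which is an assumption rather than confirmed API:

```csharp
// Sketch only: assumes each yielded chunk has a Generation property,
// by analogy with the non-streaming InvokeLlamaAsync response.
public async Task<string> StreamLlmDescription()
{
    const string prompt = "Describe in one sentence what a large language model is";
    var config = new LlamaTextGenerationConfig { Temperature = 0.8f };

    var builder = new System.Text.StringBuilder();
    await foreach (var chunk in BedrockRuntime.InvokeLlamaWithResponseStreamAsync(
        new Model.Llama270BChatV1(), prompt, config))
    {
        // Append each partial generation as it arrives from the stream.
        builder.Append(chunk?.Generation);
    }

    return builder.ToString();
}
```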
