This API demonstrates how to use the Otoroshi LLM Extension in a Cloud APIM Serverless project. The Otoroshi LLM Extension provides tools and plugins that make it easy to use LLMs in your projects, no matter which provider you use (on-premise, cloud, OpenAI, Anthropic, Mistral, etc.).
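Because the extension exposes every configured provider behind a single, consistent chat-completion shape, client code does not change when the backend does. The sketch below (the gateway URL and route path are assumptions, adjust them to your deployment) shows what such a client request could look like; the network call itself is left commented out since it needs a running gateway:

```python
import json

# Hypothetical gateway URL: replace with your own host and route path.
GATEWAY_URL = "https://<your-host>/chat"

# The same payload works whichever provider (OpenAI, Anthropic,
# Mistral, ...) Otoroshi is configured to route to.
payload = {
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what an API gateway does."},
    ],
}

body = json.dumps(payload).encode("utf-8")
headers = {"Content-Type": "application/json"}

# To actually send the request against a running gateway:
# import urllib.request
# req = urllib.request.Request(GATEWAY_URL, data=body, headers=headers)
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```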
The API hosting this demo is located at
https://${environment.HOST}
The source code for this project is available on Cloud APIM's GitHub. You can fork it and adapt it to your own needs.
The project is organized around several steps, one for each possible use case (the list is not exhaustive). The first two steps show how to use Otoroshi to consume an LLM API and expose it to your applications in a consistent way, no matter which provider is used. The remaining steps show how an LLM API can help you configure your API gateway using natural language.
- Prompt context
- Prompt templating
- Data mockups
- UI mockups
- Access validation
- Data enrichment
- Data cleanup
- Websocket validation
- Data anonymisation
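To give an idea of what one of these steps covers, the snippet below is a minimal, illustrative sketch of prompt templating. The template text and variable names are made up for the example; in the project itself, templates live in the Otoroshi plugin configuration rather than in client code:

```python
from string import Template

# Hypothetical prompt template: placeholders are filled in per request.
PROMPT_TEMPLATE = Template(
    "You are an assistant for $product. Answer the user question "
    "in at most $max_sentences sentences.\n\nQuestion: $question"
)

prompt = PROMPT_TEMPLATE.substitute(
    product="Cloud APIM Serverless",
    max_sentences=3,
    question="How do I expose an LLM behind my gateway?",
)
print(prompt)
```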
The OpenAPI spec of this API is available here, and the viewer for this spec is available here.
This project is based on a conference talk given by Mathieu at Breizhcamp 2024.
If you want to achieve the same thing with your own Otoroshi instance (on Cloud APIM or on-premise), you can use the following configuration file in the Otoroshi resource loader. Do not forget to change the domain names in the routes to match yours, and to add an OpenAI token to the local vault under the name openai-token.
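The actual configuration file is project-specific. As a rough, illustrative sketch only (the field names here are assumptions, not the exact Otoroshi entity schema; check the Otoroshi documentation for the real format), a route entry pairs your own frontend domain with the LLM plugin, and the secret is pulled from the local vault:

```yaml
# Illustrative only: field names are assumptions, not the exact
# Otoroshi entity schema.
kind: Route
name: llm-demo-route
frontend:
  domains:
    - llm-demo.your-domain.example   # change to match your own domain
plugins:
  # LLM Extension plugin configuration goes here; the provider token
  # is referenced from the local vault rather than stored inline,
  # e.g. with a reference such as ${vault://local/openai-token}
```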