
function calling #11

Merged: 13 commits, Nov 25, 2024
1 change: 1 addition & 0 deletions README.md
@@ -17,3 +17,4 @@ The following recipes have associated recorded content:
| [modushack-data-models](modushack-data-models/) | [ModusHack: Working With Data & AI Models livestream](https://www.youtube.com/watch?v=gB-v7YWwkCw&list=PLzOEKEHv-5e3zgRGzDysyUm8KQklHQQgi&index=3) |
| [modus-press](modus-press/) | Coming soon |
| [dgraph-101](dgraph-101/) | Coming soon |
| [function-calling](function-calling/) | Coming soon |
88 changes: 88 additions & 0 deletions function-calling/README.md
@@ -0,0 +1,88 @@
# Function Calling With Modus

LLM APIs such as OpenAI's offer a feature called **function calling** or **tool use**. With this feature, the LLM's response to a chat message can be a request to invoke a function, usually in order to collect information necessary to generate a response.

This project demonstrates how to set up function calling within [Modus](https://docs.hypermode.com/modus), the open source framework for building intelligent APIs.

The example implements a function `askQuestionToWarehouse` that accepts a natural-language query about the prices or stock of goods in the warehouse.

The API exposes two tools to the LLM:
- get_product_types: provides the list of product types available in the warehouse
- get_product_info: returns a piece of information (quantity or price) about one product type
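
For reference, the parameters of the `get_product_info` tool are declared with the JSON schema below. This is adapted from the example at the bottom of `assembly/params.ts`; the live code currently describes `product_name` as a free-form string, with an enum variant left commented out.

```json
{
  "type": "object",
  "properties": {
    "product_name": {
      "type": "string",
      "description": "One of the products in the warehouse, like 'Shoe' or 'Hat'."
    },
    "attribute": {
      "type": "string",
      "description": "The product information to return",
      "enum": ["qty", "price"]
    }
  },
  "required": ["product_name", "attribute"],
  "additionalProperties": false
}
```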



## Get started

1- Set your credentials

Create the file `.env.dev.local` in `api-as` folder, containing your OpenAI API key:
```
MODUS_OPENAI_API_KEY="sk-...."
```

2- Launch the API

From the `api-as` folder, launch:
```
modus dev
```

3- Test the GraphQL operation
From a GraphQL client (such as Postman), introspect the GraphQL endpoint `http://localhost:8686/graphql` and invoke the operation `askQuestionToWarehouse`:

```graphql
# example query using tool calling

query AskQuestion {
askQuestionToWarehouse(
question: "What is the most expensive product?") {
response
logs
}
}

```
The operation returns the final response along with an array of strings showing the tool calls and the messages exchanged with the LLM API.

Experiment with some queries to see the function calling at work.



```text
# example questions
What can you do for me?
What do we have in the warehouse?
How many shoes are in stock?
How many shoes and hats do we have in stock?
What is the price of a desk?
What is the most expensive product in stock?

```



## Details

The logic is as follows:

- Instruct the LLM to use function calls (tools) with the correct parameters to gather the data necessary to answer the provided question.

- Execute the identified function calls in Modus to build additional context (tool messages).

- Re-invoke the LLM API with the additional tool messages.

- Return the generated response, based on the data retrieved by the function calls.
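
The steps above can be sketched as a plain-TypeScript loop. This is a hypothetical simplification, not the actual Modus SDK API: `callLLM` and `executeTool` are stand-ins for the real model invocation and warehouse functions.

```typescript
// Simplified model of the tool-calling loop in assembly/index.ts.
type ToolCall = { id: string; name: string; args: string };
type LLMMessage = { content: string; toolCalls: ToolCall[] };
type ToolMessage = { toolCallId: string; content: string };

// Fake LLM: requests a tool on the first turn, answers once it has tool data.
function callLLM(question: string, toolMessages: ToolMessage[]): LLMMessage {
  if (toolMessages.length === 0) {
    return {
      content: "",
      toolCalls: [{ id: "1", name: "get_product_info", args: '{"product_name":"Shoe","attribute":"price"}' }],
    };
  }
  return { content: `Answer based on: ${toolMessages[0].content}`, toolCalls: [] };
}

// Stand-in for the warehouse functions.
function executeTool(call: ToolCall): string {
  return call.name === "get_product_info" ? "The price of Shoe is 100." : "";
}

function askQuestion(question: string): { response: string; logs: string[] } {
  const logs: string[] = [];
  let toolMessages: ToolMessage[] = [];
  let finalResponse = "";
  let loops = 0;
  do {
    const message = callLLM(question, toolMessages);
    if (message.toolCalls.length > 0) {
      // Execute each requested tool and feed the results back to the LLM.
      toolMessages = message.toolCalls.map((c) => {
        logs.push(`Calling function: ${c.name} with ${c.args}`);
        return { toolCallId: c.id, content: executeTool(c) };
      });
    } else {
      finalResponse = message.content;
      break;
    }
  } while (loops++ < 2); // cap LLM round trips at 3 to avoid infinite loops
  return { response: finalResponse, logs };
}
```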

## Discussion
A well-crafted prompt helps address questions that are out of scope.

The descriptions of functions and parameters are part of the prompt engineering too!

Enum parameters can help. Try replacing the `product_name` parameter with an enum type and observe that the LLM can skip a function call.
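
This swap already exists as a commented-out line in `assembly/index.ts`. The sketch below, in plain TypeScript mirroring the helper classes of `assembly/params.ts`, shows how an enum-typed parameter constrains the JSON schema sent to the LLM.

```typescript
// Minimal re-implementation of the params.ts helpers, for illustration only.
class Param {
  constructor(public type: string, public description: string | null = null) {}
}

class EnumParam extends Param {
  enum: string[];
  constructor(enumValues: string[], description: string | null = null) {
    super("string", description);
    this.enum = enumValues;
  }
}

class ObjectParam extends Param {
  properties: Record<string, Param> = {};
  required: string[] = [];
  additionalProperties = false;
  constructor(description: string | null = null) {
    super("object", description);
  }
  addRequiredProperty(name: string, param: Param): void {
    this.properties[name] = param;
    this.required.push(name);
  }
}

// Instead of a free-form StringParam, constrain product_name to known values:
const param = new ObjectParam();
param.addRequiredProperty(
  "product_name",
  new EnumParam(["Shoe", "Hat", "Trouser", "Shirt"], "One of the products in the warehouse."),
);
console.log(JSON.stringify(param));
```

With the allowed product names baked into the schema, the LLM no longer needs to call `get_product_list` to learn which products exist.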

The logic needs a way to avoid infinite loops; that is why the number of LLM round trips is limited to 3.

More experimentation is needed to understand what makes a good function in terms of abstraction, number of parameters, etc.
15 changes: 15 additions & 0 deletions function-calling/api-as/.gitignore
@@ -0,0 +1,15 @@
# Ignore macOS system files
.DS_Store

# Ignore environment variable files
.env
.env.*

# Ignore build output directories
build/

# Ignore node_modules folders
node_modules/

# Ignore logs generated by as-test
logs/
3 changes: 3 additions & 0 deletions function-calling/api-as/.prettierrc
@@ -0,0 +1,3 @@
{
"plugins": ["assemblyscript-prettier"]
}
6 changes: 6 additions & 0 deletions function-calling/api-as/asconfig.json
@@ -0,0 +1,6 @@
{
"extends": "./node_modules/@hypermode/modus-sdk-as/plugin.asconfig.json",
"options": {
"transform": ["@hypermode/modus-sdk-as/transform", "json-as/transform"]
}
}
164 changes: 164 additions & 0 deletions function-calling/api-as/assembly/index.ts
@@ -0,0 +1,164 @@
import {
OpenAIChatModel,
Tool,
ToolCall,
SystemMessage,
UserMessage,
ToolMessage,
ResponseFormat,
CompletionMessage
} from "@hypermode/modus-sdk-as/models/openai/chat";
import { EnumParam,StringParam, ObjectParam } from "./params";
import { get_product_info, get_product_types } from "./warehouse";
import { models } from "@hypermode/modus-sdk-as";
import { JSON } from "json-as";

const MODEL_NAME: string = "llm"; // refer to modus.json for the model specs

const DEFAULT_PROMPT = `
You are a warehouse manager only answering questions about the stock and price of products in the warehouse.
If you can't reply, try to use one of the tools to get additional information.
If no tool can help, just explain your role and expected type of questions.
The response should be a single sentence.
Reply to the user question using only the data provided by tools.
If you have a doubt about a product, use the tool to get the list of product names.

`
@json
class ResponseWithLogs {
response: string = "";
logs: string[] = [];
}
export function askQuestionToWarehouse(question: string): ResponseWithLogs {

const model = models.getModel<OpenAIChatModel>(MODEL_NAME);
  var logs: string[] = [];
  var final_response = "";
  var tool_messages: ToolMessage[] = [];
  var message: CompletionMessage | null = null;
  var loops = 0;
// we loop until we get a response or we reach the maximum number of loops (3)
do {
message = getLLMResponse(model, question, message, tool_messages)
/* do we have a tool call to execute */
if (message.toolCalls.length > 0){
for (var i = 0; i < message.toolCalls.length; i++) {
logs.push(`Calling function : ${message.toolCalls[i].function.name} with ${message.toolCalls[i].function.arguments}`)
}

tool_messages = aggregateToolsResponse(message.toolCalls)
for (i = 0; i < tool_messages.length; i++) {
logs.push(`Tool response : ${tool_messages[i].content}`)
}
} else {
final_response = message.content;
break;
}
} while (loops++ < 2)

return {response: final_response, logs: logs}
}

/**
* Execute the tool calls and return an array of ToolMessage
* containing the response of the tools
*/
function aggregateToolsResponse(toolCalls: ToolCall[]): ToolMessage[] {
var messages :ToolMessage[] = []
for (var i = 0; i < toolCalls.length; i++) {
const content = executeToolCall(toolCalls[i])
const toolCallResponse = new ToolMessage(content,toolCalls[i].id)
messages.push(toolCallResponse)
}
return messages
}

function executeToolCall(toolCall: ToolCall): string {
if (toolCall.function.name == "get_product_list") {
return get_product_types()
} else if (toolCall.function.name == "get_product_info") {
return get_product_info(toolCall.function.arguments)
} else {
return ""
}
}

function getLLMResponse(model: OpenAIChatModel, question: string, last_message: CompletionMessage| null = null, tools_messages: ToolMessage[] = [] ): CompletionMessage {

const input = model.createInput([
new SystemMessage(DEFAULT_PROMPT),
new UserMessage(question),
]);
/*
* adding tools messages (response from tools) to the input
* first we need to add the last completion message so the LLM can match the tool messages with the tool call
*/
if (last_message != null) {
input.messages.push(last_message)
}
for (var i = 0; i < tools_messages.length; i++) {
input.messages.push(tools_messages[i])
}

input.responseFormat = ResponseFormat.Text;
const tools = [
tool_get_product_list(),
tool_get_product_info()
]
input.tools = tools;

  input.toolChoice = "auto"; // one of "auto", "required", "none", or a specific function in JSON format

const message = model.invoke(input).choices[0].message
return message
}

/**
* Creates a Tool object that can be used to call the get_product_info function in the warehouse.
* @returns Tool
* set good function and parameter description to help the LLM understand the tool
*/
function tool_get_product_info(): Tool {
const get_product_info = new Tool();
const param = new ObjectParam();

//param.addRequiredProperty("product_name", new EnumParam(["Shoe", "Hat", "Trouser", "Shirt"],"One of the product in the warehouse."));
  param.addRequiredProperty("product_name", new StringParam("One of the products in the warehouse, like 'Shoe' or 'Hat'."));

param.addRequiredProperty("attribute", new EnumParam(["qty", "price"],"The product information to return"));

get_product_info.function = {
name: "get_product_info",
    description: `Get information about a product in the warehouse. Call this whenever you need to know the price or stock quantity of a product.`,
// parameters is a string that contains the JSON schema for the parameters that the tool expects.
// valid json schema cannot have commas for the last item in an object or array
// all object in the schema must have "additionalProperties": false
// 'required' is required to be supplied and to be an array including every key in properties
// meaning openai expects all fields to be required
parameters: param.toString(),
strict: true,
};

return get_product_info;
}

/**
* Creates a Tool object that can be used to call the get_product_list function in the warehouse.
* @returns Tool
* set good function and parameter description to help the LLM understand the tool
*/
function tool_get_product_list(): Tool {
const get_product_list = new Tool();
/* this function has no parameters */
get_product_list.function = {
name: "get_product_list",
description: `Get the list of product names in the warehouse. Call this whenever you need to know which product you are able to get information about.`,
parameters: null,
strict: false
};

return get_product_list;
}



88 changes: 88 additions & 0 deletions function-calling/api-as/assembly/params.ts
@@ -0,0 +1,88 @@
/*
* Helper classes to create parameter json for function calling API
*/
import { JSON } from "json-as";

@json
class Param {
constructor(type: string, description: string| null = null) {
this._type = type;
this._description = description
}
toString(): string {
return JSON.stringify(this);
}

@alias("type")
protected _type: string;

@alias("description")
@omitnull()
protected _description: string | null;

get type(): string {
return this._type;
}

}
@json
export class ObjectParam extends Param {

constructor(description: string| null = null) {
super("object", description);
this.additionalProperties = false;
}

addRequiredProperty(name: string, param: Param): void {
if (this.properties == null) {
this.properties = new Map<string, Param>();
}
this.properties!.set(name, param);

if (this.required == null) {
this.required = [];
}
this.required!.push(name);
}
@omitnull()
properties: Map<string, Param> | null = null;
@omitnull()
required: string[] | null = null;
protected additionalProperties: boolean;
}
@json
export class EnumParam extends Param {
enum: string[];
constructor(enumValues: string[], description: string| null = null) {
super("string", description);
this.enum = enumValues;
}
}
@json
export class StringParam extends Param {
constructor(description: string| null = null) {
super("string", description);
}
}

/* example of parameters value in JSON format



parameters: `{
"type": "object",
"properties": {
"product_name": {
"type": "string",
"enum": ["Shoe", "Hat", "Trouser", "Shirt"]
},
"attribute": {
"type": "string",
"description": "The product information to return",
"enum": ["qty", "price"]
}
},
"required": ["product_name", "attribute"],
"additionalProperties": false
}`,
*/
4 changes: 4 additions & 0 deletions function-calling/api-as/assembly/tsconfig.json
@@ -0,0 +1,4 @@
{
"extends": "assemblyscript/std/assembly.json",
"include": ["./**/*.ts"]
}
45 changes: 45 additions & 0 deletions function-calling/api-as/assembly/warehouse.ts
@@ -0,0 +1,45 @@
/**
* A simple warehouse fake DB with product information.
*/

import { JSON } from "json-as";

class Product {
qty: u32 = 0;
price: string = "";
}

/**
* Get the list of available products.
*/
export function get_product_types(): string {
const product_list = productInfo.keys();

return `The available products are: ${product_list.join(", ")}`
}
/**
* Get the product information for a given product name.
*/
export function get_product_info(string_args: string): string {
const args = JSON.parse<GetProductArguments>(string_args)
if (productInfo.has(args.product_name)) {
const product = productInfo.get(args.product_name)
const value = args.attribute == "qty" ? product.qty.toString() : product.price
return `The ${args.attribute} of ${args.product_name} is ${value}. `
}
return `The product ${args.product_name} is not available. `+ get_product_types();
}
@json
export class GetProductArguments {
product_name: string="";
attribute: string="";
}

/**
* Our fake warehouse DB is a map of product name to product information.
*/
const productInfo: Map<string,Product> = new Map<string,Product>();
productInfo.set("Shoe", {qty: 10, price: "100"});
productInfo.set("Hat", {qty: 20, price: "200"});
productInfo.set("Trouser", {qty: 30, price: "300"});
productInfo.set("Shirt", {qty: 40, price: "400"});
11 changes: 11 additions & 0 deletions function-calling/api-as/eslint.config.js
@@ -0,0 +1,11 @@
// @ts-check

import eslint from "@eslint/js";
import tseslint from "typescript-eslint";
import aseslint from "@hypermode/modus-sdk-as/tools/assemblyscript-eslint";

export default tseslint.config(
eslint.configs.recommended,
...tseslint.configs.recommended,
aseslint.config,
);
26 changes: 26 additions & 0 deletions function-calling/api-as/modus.json
@@ -0,0 +1,26 @@
{
"$schema": "https://schema.hypermode.com/modus.json",
"endpoints": {
"default": {
"type": "graphql",
"path": "/graphql",
"auth": "bearer-token"
}
},
"models": {
"llm": {
"sourceModel": "gpt-4o",
"connection": "openai",
"path": "v1/chat/completions"
}
},
"connections": {
"openai": {
"type": "http",
"baseUrl": "https://api.openai.com/",
"headers": {
"Authorization": "Bearer {{API_KEY}}"
}
}
}
}
1,779 changes: 1,779 additions & 0 deletions function-calling/api-as/package-lock.json


29 changes: 29 additions & 0 deletions function-calling/api-as/package.json
@@ -0,0 +1,29 @@
{
"name": "api-as",
"private": true,
"type": "module",
"scripts": {
"build": "modus-as-build",
"lint": "eslint .",
"pretty": "prettier --write .",
"pretty:check": "prettier --check ."
},
"dependencies": {
"@hypermode/modus-sdk-as": "^0.13.5",
"json-as": "0.9.26"
},
"devDependencies": {
"@eslint/js": "^9.15.0",
"@types/eslint__js": "^8.42.3",
"assemblyscript": "^0.27.31",
"assemblyscript-prettier": "^3.0.1",
"eslint": "^9.15.0",
"prettier": "^3.3.3",
"typescript": "^5.6.3",
"typescript-eslint": "^8.15.0",
"visitor-as": "^0.11.4"
},
"overrides": {
"assemblyscript": "$assemblyscript"
}
}