Commit 11feef8

Co-authored-by: Thuc Pham <[email protected]>
1 parent 9c5ff16

Showing 16 changed files with 1,074 additions and 0 deletions.
@@ -0,0 +1,7 @@
---
"@llamaindex/core": minor
"llamaindex": minor
"@llamaindex/examples": patch
---

Add workflows
@@ -0,0 +1,168 @@
import CodeBlock from "@theme/CodeBlock";
import CodeSource from "!raw-loader!../../../../examples/workflow/joke.ts";

# Workflows
A `Workflow` in LlamaIndexTS is an event-driven abstraction used to chain together several events. Workflows are made up of `steps`: you define step functions that each handle specific event types and emit new events.

When a step function is added to a workflow, you need to specify the input and, optionally, the output event types (the latter are used for validation). Specifying the input events ensures each step only runs when an accepted event is ready.
You can create a `Workflow` to do anything! Build an agent, a RAG flow, an extraction flow, or anything else you want.

## Getting Started

As an illustrative example, let's consider a naive workflow where a joke is generated and then critiqued.

<CodeBlock language="ts">{CodeSource}</CodeBlock>
There are a few moving pieces here, so let's go through it piece by piece.

### Defining Workflow Events

```typescript
export class JokeEvent extends WorkflowEvent<{ joke: string }> {}
```
Events are user-defined classes that extend `WorkflowEvent` and contain arbitrary data provided as the type argument. In this case, our workflow relies on a single user-defined event, the `JokeEvent` with a `joke` attribute of type `string`.
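Events can carry as many fields as you like, and step functions read them via `ev.data`. As an illustrative sketch (hypothetical here, though a similar event appears in the workflow examples), an event could bundle a review together with the code it refers to:

```typescript
// A hypothetical event carrying multiple fields; steps read them via `ev.data`
export class ReviewEvent extends WorkflowEvent<{
  review: string;
  code: string;
}> {}
```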
### Setting up the Workflow Class

```typescript
const llm = new OpenAI();
...
const jokeFlow = new Workflow({ verbose: true });
```

Our workflow is implemented by instantiating the `Workflow` class. For simplicity, we created an `OpenAI` LLM instance.
### Workflow Entry Points

```typescript
const generateJoke = async (_context: Context, ev: StartEvent) => {
  const prompt = `Write your best joke about ${ev.data.input}.`;
  const response = await llm.complete({ prompt });
  return new JokeEvent({ joke: response.text });
};
```
Here, we come to the entry point of our workflow. While events are user-defined, there are two special-case events, the `StartEvent` and the `StopEvent`. Here, the `StartEvent` signifies where to send the initial workflow input.

The `StartEvent` is a bit of a special object since it can hold arbitrary attributes. Here, we accessed the topic with `ev.data.input`.
At this point, you may have noticed that we haven't explicitly told the workflow which events are handled by which steps.

To do so, we use the `addStep` method, which adds a step to the workflow. The first argument is the event type that the step will handle, and the second argument is the previously defined step function:

```typescript
jokeFlow.addStep(StartEvent, generateJoke);
```
### Workflow Exit Points

```typescript
const critiqueJoke = async (_context: Context, ev: JokeEvent) => {
  const prompt = `Give a thorough critique of the following joke: ${ev.data.joke}`;
  const response = await llm.complete({ prompt });
  return new StopEvent({ result: response.text });
};
```
Here, we have our second, and last, step in the workflow. We know it's the last step because the special `StopEvent` is returned. When a step returns a `StopEvent`, the workflow immediately stops and returns the result.

In this case, the result is a string, but it could be a map, an array, or any other object.
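Since the `result` of a `StopEvent` can be any object, a final step could just as well return structured data. Here's a minimal, hypothetical sketch (it assumes the same `llm` and `JokeEvent` as above):

```typescript
// Hypothetical variant of critiqueJoke whose StopEvent carries structured data
const critiqueJokeStructured = async (_context: Context, ev: JokeEvent) => {
  const prompt = `Give a thorough critique of the following joke: ${ev.data.joke}`;
  const response = await llm.complete({ prompt });
  // bundle both the joke and its critique into the result
  return new StopEvent({
    result: { joke: ev.data.joke, critique: response.text },
  });
};
```

For the rest of this example, we'll stick with the string result.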
Don't forget to add the step to the workflow:

```typescript
jokeFlow.addStep(JokeEvent, critiqueJoke);
```
### Running the Workflow

```typescript
const result = await jokeFlow.run("pirates");
console.log(result.data.result);
```

Lastly, we run the workflow. The `.run()` method is async, so we use `await` here to wait for the result.
### Validating Workflows

To tell the workflow what events are produced by each step, you can optionally provide a third argument to `addStep` to specify the output event type:

```typescript
jokeFlow.addStep(StartEvent, generateJoke, { outputs: JokeEvent });
jokeFlow.addStep(JokeEvent, critiqueJoke, { outputs: StopEvent });
```

To validate a workflow, call the `validate` method:

```typescript
jokeFlow.validate();
```

To automatically validate a workflow whenever you run it, set the `validate` flag to `true` at initialization:

```typescript
const jokeFlow = new Workflow({ verbose: true, validate: true });
```
## Working with Global Context/State

Optionally, you can share global context between steps. For example, multiple steps might need access to the original `query` input from the user. You can store it in the global context so that every step has access:
```typescript
import { Context } from "@llamaindex/core/workflow";

const query = async (context: Context, ev: MyEvent) => {
  // get the query from the context
  const query = context.get("query");
  // do something with the context and event
  // (illustrative placeholders - the real work is up to you)
  const val = `processing: ${query}`;
  const result = `answer for: ${query}`;
  // store the value in the context for later steps
  context.set("key", val);

  return new StopEvent({ result });
};
```
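For the step above to work, an earlier step must have stored the query in the context. A minimal, hypothetical entry step (the `MyEvent` type matches the handler above) could look like this:

```typescript
// Hypothetical event and entry step that saves the user's input for later steps
export class MyEvent extends WorkflowEvent<{ input: string }> {}

const start = async (context: Context, ev: StartEvent) => {
  // store the original query in the global context
  context.set("query", ev.data.input);
  return new MyEvent({ input: ev.data.input });
};
```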
## Waiting for Multiple Events

The context does more than just hold data; it also provides utilities to buffer and wait for multiple events.

For example, you might have a step that waits for both a query and retrieved nodes before synthesizing a response:
```typescript
const synthesize = async (
  context: Context,
  ev: QueryEvent | RetrieveEvent,
) => {
  // buffer events until both a QueryEvent and a RetrieveEvent have arrived
  const events = context.collectEvents(ev, [QueryEvent, RetrieveEvent]);
  if (!events) {
    return;
  }
  const prompt = events
    .map((event) => {
      if (event instanceof QueryEvent) {
        return `Answer this query using the context provided: ${event.data.query}`;
      } else if (event instanceof RetrieveEvent) {
        return `Context: ${event.data.context}`;
      }
      return "";
    })
    .join("\n");

  const response = await llm.complete({ prompt });
  return new StopEvent({ result: response.text });
};
```
Using `context.collectEvents()`, we can buffer and wait for ALL expected events to arrive. This function will only return the events (in the requested order) once all of them have arrived.
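For this to work, earlier steps have to emit both events. Here's a minimal sketch of how such a workflow could be wired; the event classes and the `prepareQuery`/`retrieve` steps are hypothetical stand-ins, not part of the library:

```typescript
// Hypothetical events matching the synthesize step above
export class QueryEvent extends WorkflowEvent<{ query: string }> {}
export class RetrieveEvent extends WorkflowEvent<{ context: string }> {}

// Hypothetical steps: one forwards the query, the other retrieves context
const prepareQuery = async (_context: Context, ev: StartEvent) => {
  return new QueryEvent({ query: ev.data.input });
};

const retrieve = async (_context: Context, ev: QueryEvent) => {
  // a real implementation would query an index or vector store here
  return new RetrieveEvent({ context: `nodes for: ${ev.data.query}` });
};

const ragFlow = new Workflow();
ragFlow.addStep(StartEvent, prepareQuery);
ragFlow.addStep(QueryEvent, retrieve);
// synthesize handles both event types and gathers them via collectEvents
ragFlow.addStep([QueryEvent, RetrieveEvent], synthesize);
```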
## Manually Triggering Events

Normally, events are triggered by returning another event during a step. However, events can also be manually dispatched using the `context.sendEvent(event)` method within a workflow.
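As a minimal, hypothetical sketch (assuming the `llm` and `JokeEvent` from the joke workflow), a step could dispatch one event manually and return another, so downstream steps run for both:

```typescript
// Hypothetical step emitting two JokeEvents: one sent manually, one returned
const generateTwoJokes = async (context: Context, ev: StartEvent) => {
  const first = await llm.complete({
    prompt: `Write your best joke about ${ev.data.input}.`,
  });
  // manually dispatch the first joke into the workflow...
  context.sendEvent(new JokeEvent({ joke: first.text }));
  const second = await llm.complete({
    prompt: `Write another joke about ${ev.data.input}.`,
  });
  // ...and return the second one as usual
  return new JokeEvent({ joke: second.text });
};
```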
## Examples

You can find many useful examples of using workflows in the [examples folder](https://github.com/run-llama/LlamaIndexTS/blob/main/examples/workflow).
@@ -0,0 +1,13 @@
import { OpenAI } from "llamaindex";

(async () => {
  const llm = new OpenAI({ model: "o1-preview", temperature: 1 });

  const prompt = `What are three compounds we should consider investigating to advance research
into new antibiotics? Why should we consider them?
`;

  // complete api
  const response = await llm.complete({ prompt });
  console.log(response.text);
})();
@@ -0,0 +1,7 @@
# Workflow Examples

These examples demonstrate LlamaIndexTS's workflow system. Check out [its documentation](https://ts.llamaindex.ai/modules/workflows) for more information.

## Running the Examples

To run the examples, make sure to run them from the parent folder called `examples`. For example, to run the joke workflow, run `npx tsx workflow/joke.ts`.
@@ -0,0 +1,122 @@
import {
  Context,
  StartEvent,
  StopEvent,
  Workflow,
  WorkflowEvent,
} from "@llamaindex/core/workflow";
import { OpenAI } from "llamaindex";

const MAX_REVIEWS = 3;

// Using the o1-preview model (see https://platform.openai.com/docs/guides/reasoning?reasoning-prompt-examples=coding-planning)
const llm = new OpenAI({ model: "o1-preview", temperature: 1 });

// example specification from https://platform.openai.com/docs/guides/reasoning?reasoning-prompt-examples=coding-planning
const specification = `Python app that takes user questions and looks them up in a
database where they are mapped to answers. If there is a close match, it retrieves
the matched answer. If there isn't, it asks the user to provide an answer and
stores the question/answer pair in the database.`;

// Create custom event types
export class MessageEvent extends WorkflowEvent<{ msg: string }> {}
export class CodeEvent extends WorkflowEvent<{ code: string }> {}
export class ReviewEvent extends WorkflowEvent<{
  review: string;
  code: string;
}> {}

// Helper function to truncate long strings
const truncate = (str: string) => {
  const MAX_LENGTH = 60;
  if (str.length <= MAX_LENGTH) return str;
  return str.slice(0, MAX_LENGTH) + "...";
};

// the architect is responsible for writing the structure and the initial code based on the specification
const architect = async (context: Context, ev: StartEvent) => {
  // get the specification from the start event and save it to context
  context.set("specification", ev.data.input);
  const spec = context.get("specification");
  // write a message to send an update to the user
  context.writeEventToStream(
    new MessageEvent({
      msg: `Writing app using this specification: ${truncate(spec)}`,
    }),
  );
  const prompt = `Build an app for this specification: <spec>${spec}</spec>. Make a plan for the directory structure you'll need, then return each file in full. Don't supply any reasoning, just code.`;
  const code = await llm.complete({ prompt });
  return new CodeEvent({ code: code.text });
};

// the coder is responsible for updating the code based on the review
const coder = async (context: Context, ev: ReviewEvent) => {
  // get the specification from the context
  const spec = context.get("specification");
  // get the latest review and code
  const { review, code } = ev.data;
  // write a message to send an update to the user
  context.writeEventToStream(
    new MessageEvent({
      msg: `Update code based on review: ${truncate(review)}`,
    }),
  );
  const prompt = `We need to improve code that should implement this specification: <spec>${spec}</spec>. Here is the current code: <code>${code}</code>. And here is a review of the code: <review>${review}</review>. Improve the code based on the review, keep the specification in mind, and return the full updated code. Don't supply any reasoning, just code.`;
  const updatedCode = await llm.complete({ prompt });
  return new CodeEvent({ code: updatedCode.text });
};

// the reviewer is responsible for reviewing the code and providing feedback
const reviewer = async (context: Context, ev: CodeEvent) => {
  // get the specification from the context
  const spec = context.get("specification");
  // get the latest code from the event
  const { code } = ev.data;
  // update and check the number of reviews
  const numberReviews = context.get("numberReviews", 0) + 1;
  context.set("numberReviews", numberReviews);
  if (numberReviews > MAX_REVIEWS) {
    // we've done this too many times - return the code
    context.writeEventToStream(
      new MessageEvent({
        msg: `Already reviewed ${numberReviews - 1} times, stopping!`,
      }),
    );
    return new StopEvent({ result: code });
  }
  // write a message to send an update to the user
  context.writeEventToStream(
    new MessageEvent({ msg: `Review #${numberReviews}: ${truncate(code)}` }),
  );
  const prompt = `Review this code: <code>${code}</code>. Check the code quality and whether it correctly implements this specification: <spec>${spec}</spec>. If you're satisfied, just return 'Looks great', nothing else. If not, return a review with a list of changes you'd like to see.`;
  const review = (await llm.complete({ prompt })).text;
  if (review.includes("Looks great")) {
    // the reviewer is satisfied with the code, so return the final code
    context.writeEventToStream(
      new MessageEvent({
        msg: `Reviewer says: ${review}`,
      }),
    );
    return new StopEvent({ result: code });
  }

  return new ReviewEvent({ review, code });
};

const codeAgent = new Workflow({ validate: true });
codeAgent.addStep(StartEvent, architect, { outputs: CodeEvent });
codeAgent.addStep(ReviewEvent, coder, { outputs: CodeEvent });
codeAgent.addStep(CodeEvent, reviewer, { outputs: ReviewEvent });

// Usage
async function main() {
  const run = codeAgent.run(specification);
  for await (const event of codeAgent.streamEvents()) {
    const msg = (event as MessageEvent).data.msg;
    console.log(`${msg}\n`);
  }
  const result = await run;
  console.log("Final code:\n", result.data.result);
}

main().catch(console.error);
@@ -0,0 +1,70 @@
import {
  Context,
  StartEvent,
  StopEvent,
  Workflow,
  WorkflowEvent,
} from "@llamaindex/core/workflow";
import { OpenAI } from "llamaindex";

// Create LLM instance
const llm = new OpenAI();

// Create custom event types
export class JokeEvent extends WorkflowEvent<{ joke: string }> {}
export class CritiqueEvent extends WorkflowEvent<{ critique: string }> {}
export class AnalysisEvent extends WorkflowEvent<{ analysis: string }> {}

const generateJoke = async (_context: Context, ev: StartEvent) => {
  const prompt = `Write your best joke about ${ev.data.input}.`;
  const response = await llm.complete({ prompt });
  return new JokeEvent({ joke: response.text });
};

const critiqueJoke = async (_context: Context, ev: JokeEvent) => {
  const prompt = `Give a thorough critique of the following joke: ${ev.data.joke}`;
  const response = await llm.complete({ prompt });
  return new CritiqueEvent({ critique: response.text });
};

const analyzeJoke = async (_context: Context, ev: JokeEvent) => {
  const prompt = `Give a thorough analysis of the following joke: ${ev.data.joke}`;
  const response = await llm.complete({ prompt });
  return new AnalysisEvent({ analysis: response.text });
};

const reportJoke = async (
  context: Context,
  ev: AnalysisEvent | CritiqueEvent,
) => {
  // buffer events until both the analysis and the critique have arrived
  const events = context.collectEvents(ev, [AnalysisEvent, CritiqueEvent]);
  if (!events) {
    return;
  }
  const subPrompts = events.map((event) => {
    if (event instanceof AnalysisEvent) {
      return `Analysis: ${event.data.analysis}`;
    } else if (event instanceof CritiqueEvent) {
      return `Critique: ${event.data.critique}`;
    }
    return "";
  });

  const prompt = `Based on the following information about a joke:\n${subPrompts.join("\n")}\nProvide a comprehensive report on the joke's quality and impact.`;
  const response = await llm.complete({ prompt });
  return new StopEvent({ result: response.text });
};

const jokeFlow = new Workflow();
jokeFlow.addStep(StartEvent, generateJoke);
jokeFlow.addStep(JokeEvent, critiqueJoke);
jokeFlow.addStep(JokeEvent, analyzeJoke);
jokeFlow.addStep([AnalysisEvent, CritiqueEvent], reportJoke);

// Usage
async function main() {
  const result = await jokeFlow.run("pirates");
  console.log(result.data.result);
}

main().catch(console.error);