resolve #714 deep research example broken due to breaking changes in @llamaindex/workflow-core 1.3.0 #715
base: main
Conversation
…anges in @llamaindex/workflow-core 1.3.0
Walkthrough

Refactors the DeepResearch workflow to use Memory and StatefulContext, introduces DeepResearchState, updates the event handlers (start, plan, research, report), adjusts the prompts and createResearchPlan to read from Memory, revises UI/artifact streaming, and changes the workflowFactory signature to accept a chatBody with Message[].
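For orientation, a minimal sketch of the factory shape this describes; the `Message` import and the `getIndex`/`getWorkflow` helpers are assumed from this summary and the code-graph section below, not copied from the diff:

```typescript
import type { Message } from "ai"; // assumption: Message type carried by the chat body

// Sketch only: workflowFactory now receives the whole chat body
// (with messages) instead of a bare message list.
export const workflowFactory = async (chatBody: { messages: Message[] }) => {
  const index = await getIndex(chatBody); // helper assumed from the review comments below
  return getWorkflow(index);
};
```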
Sequence Diagram(s)

```mermaid
sequenceDiagram
autonumber
actor User
participant WF as workflowFactory
participant DR as DeepResearch Workflow
participant Mem as Memory
participant Src as Retriever
participant LLM as LLM
participant UI as UI Stream
participant Art as Artifact Store
User->>WF: chatBody { messages }
WF->>DR: startAgentEvent
note right of DR: Initialize Memory / State
DR->>Mem: add(user message)
DR->>UI: retrieve-in-progress
DR->>Src: retrieve context nodes
Src-->>DR: nodes
DR->>UI: retrieve-done
DR->>DR: planResearchEvent
DR->>LLM: createResearchPlan(memory, state)
alt cancel
DR->>UI: analyze-done
DR-->>User: cancel stream event
else research (questions)
DR->>Mem: add(assistant questions)
loop per question
DR->>UI: research-pending(question)
DR->>LLM: answerQuestion(context, question)
LLM-->>DR: answer
DR->>Mem: add(assistant answer)
DR->>UI: research-progress(answer)
end
DR->>DR: planResearchEvent (iterate)
else no-further-ideas
DR->>Mem: add(assistant no-more-ideas)
DR->>UI: analyze-done
DR->>DR: reportEvent
end
DR->>LLM: stream final report (history from Memory)
LLM-->>DR: tokens
DR->>UI: agentStreamEvent(tokens)
DR->>Art: publish "DeepResearch Report"
DR-->>User: final result + completion
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 4
🧹 Nitpick comments (5)
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts (5)
179-186: Seed Memory with prior chat history to preserve context

Current memory only includes the latest user input; prior turns are dropped.

```diff
  const { userInput, chatHistory } = event.data;
  const { sendEvent, state } = context;
  if (!userInput) throw new Error("Invalid input");
+ if (Array.isArray(chatHistory) && chatHistory.length) {
+   for (const m of chatHistory) {
+     // m.role from upstream typically: "user" | "assistant" | "system"
+     state.memory.add({ role: m.role as any, content: m.content as any });
+   }
+ }
  state.memory.add({ role: "user", content: userInput });
  state.userRequest = userInput;
```

Please confirm the exact Message shape in AgentInputData so we can remove the `as any` casts.
396-400: Fix typo in completion message

```diff
- message: { role: "assistant", content: "the reseach is complete" },
+ message: { role: "assistant", content: "The research is complete." },
```
453-463: Polish prompt wording

Minor grammar/clarity improvements.

```diff
  if (totalQuestions === 0) {
-   return "The student has no questions to research. Let start by providing some questions for the student to research.";
+   return "The student has no questions to research. Let's start by providing some questions for the student to research.";
  }
  if (totalQuestions >= MAX_QUESTIONS) {
-   return `The student has researched ${totalQuestions} questions. Should proceeding writing report or cancel the research if the answers are not enough to write a report.`;
+   return `The student has researched ${totalQuestions} questions. Proceed to write the report, or cancel the research if the answers are insufficient to write a report.`;
  }
```
131-132: Tighten UI event description

```diff
- "The type of event. DeepResearch has 3 main stages:\n1. retrieve: Retrieve the context from the vector store\n2. analyze: Analyze the context and generate a research questions to answer\n3. answer: Answer the provided questions. Each question has a unique id, when the state is done, the event will have the answer for the question."
+ "The type of event. DeepResearch has 3 stages:\n1. retrieve: Retrieve context from the vector store.\n2. analyze: Analyze the context and generate research questions.\n3. answer: Answer the questions. When a question is done, the event includes its answer."
```
79-103: Unify citation format across prompts

`researchPrompt` uses `[citation:id]` while `WRITE_REPORT_PROMPT` mentions `[citation:id]()`. Use one format to avoid confusing the model.

```diff
- Preserve all citation syntax (the `[citation:id]()` parts in the provided context). Keep these citations in the final report - no separate reference section is needed.
+ Preserve all citation syntax (the `[citation:id]` parts in the provided context). Keep these citations in the final report — no separate reference section is needed.
```

(Keep `researchPrompt` as-is with `[citation:id]`.)

Also applies to: 111-115
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts (6 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-06-30T10:19:29.893Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the workflow factory pattern for workflow creation, i.e., define `workflowFactory` as a function returning an agent instance, optionally async.
Applied to files:
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:19:29.893Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/src/app/workflow*.ts : Organize workflow files separately in development mode, e.g., `src/app/workflow.ts`.
Applied to files:
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:18:57.724Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/events.ts : Event system logic, including source, agent, and artifact events, as well as helper functions for converting LlamaIndex data to UI events, should be implemented in src/events.ts.
Applied to files:
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:19:29.893Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `llamaindex/workflow` with tool arrays for agent creation.
Applied to files:
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:19:29.893Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.
Applied to files:
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
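As a reference for the setup pattern named in that learning, a minimal sketch; the uiConfig contents and port are assumptions from the learning text rather than verified against @llamaindex/server:

```typescript
import { LlamaIndexServer } from "@llamaindex/server";
// workflowFactory as defined in src/app/workflow.ts

// Standard server setup pattern: workflow + uiConfig + port, then start().
new LlamaIndexServer({
  workflow: workflowFactory, // factory returning the workflow/agent instance
  uiConfig: { starterQuestions: ["What would you like to research?"] }, // assumed shape
  port: 3000,
}).start();
```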
📚 Learning: 2025-06-30T10:20:25.875Z
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Workflow factory functions should accept a ChatRequest and return a Workflow instance, following the documented contract.
Applied to files:
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
🧬 Code graph analysis (1)
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts (3)
packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1)
- workflowFactory (55-337)

packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1)
- workflowFactory (58-315)

packages/create-llama/templates/components/use-cases/typescript/financial_report/src/app/workflow.ts (2)
- workflowFactory (28-53)
- getWorkflow (68-318)
🪛 GitHub Actions: Lint on push or pull request
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
[warning] 1-1: Prettier formatting issues found in file. Run 'prettier --write' to fix.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, none)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, none)
- GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, llamacloud)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, none)
- GitHub Check: python (20, 3.11, windows-latest, fastapi, llamacloud)
- GitHub Check: typescript (22, ubuntu-22.04, nextjs, none)
- GitHub Check: typescript (22, windows-latest, nextjs, llamacloud)
- GitHub Check: python (20, 3.11, macos-latest, fastapi, llamacloud)
- GitHub Check: typescript (22, macos-latest, nextjs, none)
- GitHub Check: typescript (22, ubuntu-22.04, nextjs, llamacloud)
- GitHub Check: typescript (22, macos-latest, nextjs, llamacloud)
- GitHub Check: typescript (22, windows-latest, nextjs, none)
🔇 Additional comments (3)
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts (3)
162-170: Validate Memory construction and intended API

`new Memory([], {})` assumes this constructor shape and that `.add()` / `.get()` match your usage. If you're on llamaindex >= 0.3.x, confirm this with a quick check. If not, initialize via the recommended factory (e.g., `Memory.fromMessages(...)`) or adapt to the new API.
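A quick check could look like the sketch below; the constructor form and `.add()` are taken from the code under review, and `getAllMessages()` from the report handler, so a failure here points at an API mismatch:

```typescript
import { Memory } from "llamaindex"; // assumption: Memory is exported by the installed llamaindex

// Smoke-test the assumed Memory API shape before relying on it in the workflow.
const memory = new Memory([], {});
memory.add({ role: "user", content: "ping" });
const messages = await memory.getAllMessages();
console.assert(messages.length === 1, "Memory API shape differs from assumptions");
```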
363-375: Confirm `raw: stream` serialization

Some event sinks JSON-serialize event payloads; passing the live stream object may fail. If serialization issues arise, pass `raw: null` or a minimal metadata object instead of the stream instance.
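For instance, a sketch of the streaming loop with a serializable placeholder; the event shape is copied from the report handler in this diff:

```typescript
// Sketch: stream report tokens without handing the live stream object to sinks.
let response = "";
for await (const chunk of stream) {
  response += chunk.delta;
  sendEvent(
    agentStreamEvent.with({
      delta: chunk.delta,
      response,
      currentAgentName: "",
      raw: null, // serializable placeholder instead of the live stream instance
    })
  );
}
```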
31-34: Factory signature change looks right; ensure `getIndex` accepts it

You now pass `chatBody` to `getIndex`. Confirm its signature expects `{ messages: Message[] }` or update accordingly.
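Something along these lines (a hypothetical signature sketch, not the template's actual code):

```typescript
import type { Message } from "ai"; // assumption: Message type used by the chat body
import type { VectorStoreIndex } from "llamaindex";

// Hypothetical: getIndex accepting the whole chat body rather than a query string.
async function getIndex(chatBody: { messages: Message[] }): Promise<VectorStoreIndex> {
  // ...load or build the index, optionally using chatBody for routing
  throw new Error("sketch only");
}
```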
```diff
@@ -1,15 +1,18 @@
 import { artifactEvent, toSourceEvent } from "@llamaindex/server";
```
💡 Verification agent
❓ Verification inconclusive
Prettier issues in pipeline
Run the formatter to make CI green.
🏁 Script executed:

```bash
#!/bin/bash
pnpm -w prettier --write packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
```

Length of output: 228
Install missing Prettier plugin and re-run formatter

Prettier failed with “Cannot find package ‘prettier-plugin-organize-imports’.” Add it to the root devDependencies and rerun the formatter:

```bash
pnpm add -D prettier-plugin-organize-imports
pnpm -w prettier --write packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
```
🧰 Tools
🪛 GitHub Actions: Lint on push or pull request
[warning] 1-1: Prettier formatting issues found in file. Run 'prettier --write' to fix.
🤖 Prompt for AI Agents

packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts lines 1-1: Prettier failed because the repo is missing prettier-plugin-organize-imports; add the plugin to the root devDependencies (run pnpm -w add -D prettier-plugin-organize-imports or add it to the root package.json devDependencies) and then re-run the formatter for this file with pnpm -w prettier --write packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts so formatting completes successfully.
if (decision === "cancel") { | ||
sendEvent( | ||
uiEvent.with({ | ||
type: "ui_event", | ||
data: { event: "answer", state: "pending", id, question }, | ||
}), | ||
data: { event: "analyze", state: "done" }, | ||
}) | ||
); | ||
sendEvent(researchEvent.with({ questionId: id, question })); | ||
return agentStreamEvent.with({ | ||
delta: cancelReason ?? "Research cancelled without any reason.", | ||
response: cancelReason ?? "Research cancelled without any reason.", | ||
currentAgentName: "", | ||
raw: null, | ||
}); | ||
} |
🛠️ Refactor suggestion
Ensure workflow terminates on cancel
Returning only an `agentStreamEvent` leaves the workflow running. Send the stream message (optional) and then stop the agent.
if (decision === "cancel") {
sendEvent(
uiEvent.with({
type: "ui_event",
data: { event: "analyze", state: "done" },
})
);
- return agentStreamEvent.with({
- delta: cancelReason ?? "Research cancelled without any reason.",
- response: cancelReason ?? "Research cancelled without any reason.",
- currentAgentName: "",
- raw: null,
- });
+ const msg = cancelReason ?? "Research cancelled.";
+ sendEvent(
+ agentStreamEvent.with({
+ delta: msg,
+ response: msg,
+ currentAgentName: "",
+ raw: null,
+ }),
+ );
+ return stopAgentEvent.with({
+ result: msg,
+ message: { role: "assistant", content: msg },
+ });
}
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (decision === "cancel") { | |
sendEvent( | |
uiEvent.with({ | |
type: "ui_event", | |
data: { event: "answer", state: "pending", id, question }, | |
}), | |
data: { event: "analyze", state: "done" }, | |
}) | |
); | |
sendEvent(researchEvent.with({ questionId: id, question })); | |
return agentStreamEvent.with({ | |
delta: cancelReason ?? "Research cancelled without any reason.", | |
response: cancelReason ?? "Research cancelled without any reason.", | |
currentAgentName: "", | |
raw: null, | |
}); | |
} | |
if (decision === "cancel") { | |
sendEvent( | |
uiEvent.with({ | |
type: "ui_event", | |
data: { event: "analyze", state: "done" }, | |
}) | |
); | |
const msg = cancelReason ?? "Research cancelled."; | |
sendEvent( | |
agentStreamEvent.with({ | |
delta: msg, | |
response: msg, | |
currentAgentName: "", | |
raw: null, | |
}), | |
); | |
return stopAgentEvent.with({ | |
result: msg, | |
message: { role: "assistant", content: msg }, | |
}); | |
} |
🤖 Prompt for AI Agents

In packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts around lines 244 to 257, after sending the optional UI stream message for the "cancel" decision you must terminate the running workflow instead of merely returning an agentStreamEvent; send the stream message (as you already do) and then call the agent stop method (e.g. agent.stop()) or the workflow termination function available in this scope to halt execution before returning the final agentStreamEvent so the workflow actually stops.
```typescript
  state.totalQuestions += researchQuestions.length;
  state.memory.add({
    role: "assistant",
    content:
      "We need to find answers to the following questions:\n" +
      researchQuestions.join("\n"),
  });
  researchQuestions.forEach(({ questionId: id, question }) => {
    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: { event: "answer", state: "pending", id, question },
      })
    );
    sendEvent(researchEvent.with({ questionId: id, question }));
  });
  const events = await stream
    .until(() => state.researchResults.length === researchQuestions.length)
    .toArray();
  return planResearchEvent.with({});
}
```
Fix deadlock in plan loop and incorrect stringification of questions

- The wait predicate compares total accumulated results against only the current batch size, which can block forever after the first iteration.
- `researchQuestions.join("\n")` joins objects, yielding "[object Object]".

Apply:
if (decision === "research" && researchQuestions.length > 0) {
state.totalQuestions += researchQuestions.length;
state.memory.add({
role: "assistant",
content:
- "We need to find answers to the following questions:\n" +
- researchQuestions.join("\n"),
+ "We need to find answers to the following questions:\n" +
+ researchQuestions.map((q) => `- ${q.question}`).join("\n"),
});
+ const baseline = state.researchResults.length;
researchQuestions.forEach(({ questionId: id, question }) => {
sendEvent(
uiEvent.with({
type: "ui_event",
data: { event: "answer", state: "pending", id, question },
})
);
sendEvent(researchEvent.with({ questionId: id, question }));
});
- const events = await stream
- .until(
- () => state.researchResults.length === researchQuestions.length
- )
- .toArray();
+ await stream
+ .until(
+ () =>
+ state.researchResults.length >=
+ baseline + researchQuestions.length
+ )
+ .toArray();
return planResearchEvent.with({});
}
```
Also applies to: 262-265
🤖 Prompt for AI Agents

In packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts around lines 259-281 (also apply to 262-265), the wait predicate and the questions stringification are wrong: change the stream.until predicate to compare the accumulated results against the cumulative total (e.g. use state.researchResults.length >= state.totalQuestions) instead of the current batch size so the loop cannot deadlock, and build the joined question string from the question text (e.g. researchQuestions.map(q => q.question).join("\n")) instead of joining the objects directly so you don’t get "[object Object]".
```typescript
workflow.handle(
  [researchEvent],
  async (
    context: StatefulContext<DeepResearchState, WorkflowContext>,
    event
  ) => {
    const { sendEvent, state } = context;
    const { questionId, question } = event.data;

    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: "inprogress",
          id: questionId,
          question,
        },
      })
    );

    const answer = await answerQuestion(
      contextStr(state.contextNodes),
      question
    );
    state.researchResults.push({ questionId, question, answer });

    state.memory.add({
      role: "assistant",
      content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
    });

    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: "done",
          id: questionId,
          question,
          answer,
        },
      })
    );
  }
);

workflow.handle([reportEvent], async ({ data }) => {
  const { sendEvent, state } = getContext();
  const chatHistory = await state.memory.getAllMessages();
  const messages = chatHistory.concat([
    {
      role: "system",
      content: WRITE_REPORT_PROMPT,
    },
    {
      role: "user",
      content:
        "Write a report addressing the user request based on the research provided the context",
    },
  ]);

  const stream = await Settings.llm.chat({ messages, stream: true });
  let response = "";
  for await (const chunk of stream) {
    response += chunk.delta;
    sendEvent(
      agentStreamEvent.with({
        delta: chunk.delta,
        response,
        currentAgentName: "",
        raw: stream,
      })
    );
  }
});
```
🛠️ Refactor suggestion
Add error handling so a failed answer doesn’t stall the plan loop
Unhandled errors in `answerQuestion` will prevent `researchResults` from advancing, hanging the `stream.until(...)`.
```diff
- const answer = await answerQuestion(
- contextStr(state.contextNodes),
- question
- );
- state.researchResults.push({ questionId, question, answer });
-
- state.memory.add({
- role: "assistant",
- content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
- });
-
- sendEvent(
- uiEvent.with({
- type: "ui_event",
- data: {
- event: "answer",
- state: "done",
- id: questionId,
- question,
- answer,
- },
- })
- );
+ let answer: string;
+ let errored = false;
+ try {
+ answer = await answerQuestion(
+ contextStr(state.contextNodes),
+ question
+ );
+ } catch (err) {
+ errored = true;
+ answer = `Error: ${err instanceof Error ? err.message : String(err)}`;
+ }
+ state.researchResults.push({ questionId, question, answer });
+
+ state.memory.add({
+ role: "assistant",
+ content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
+ });
+
+ sendEvent(
+ uiEvent.with({
+ type: "ui_event",
+ data: {
+ event: "answer",
+ state: errored ? "error" : "done",
+ id: questionId,
+ question,
+ ...(errored ? {} : { answer }),
+ },
+ })
+ );
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
workflow.handle(
  [researchEvent],
  async (
    context: StatefulContext<DeepResearchState, WorkflowContext>,
    event
  ) => {
    const { sendEvent, state } = context;
    const { questionId, question } = event.data;

    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: "inprogress",
          id: questionId,
          question,
        },
      })
    );

    let answer: string;
    let errored = false;
    try {
      answer = await answerQuestion(
        contextStr(state.contextNodes),
        question
      );
    } catch (err) {
      errored = true;
      answer = `Error: ${err instanceof Error ? err.message : String(err)}`;
    }
    state.researchResults.push({ questionId, question, answer });

    state.memory.add({
      role: "assistant",
      content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
    });

    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: errored ? "error" : "done",
          id: questionId,
          question,
          ...(errored ? {} : { answer }),
        },
      })
    );
  }
);
```
🤖 Prompt for AI Agents

packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts lines 296-341: wrap the async call to answerQuestion in a try/catch so any thrown error doesn’t halt the workflow; on catch, push a failure entry into state.researchResults (include questionId, question and an error message/flag), send a ui_event with state "failed" (include the id, question and error message), and avoid rethrowing so the plan loop can continue; keep the success path unchanged.
…@llamaindex/workflow-core 1.3.0
Summary by CodeRabbit

- New Features
- Improvements
- Refactor