
Conversation


@pjamessteven pjamessteven commented Sep 4, 2025

…@llamaindex/workflow-core 1.3.0

Summary by CodeRabbit

  • New Features

    • Step-by-step progress updates (retrieve, analyze, research) with streaming responses.
    • Iterative research planning with follow-up questions and context-aware answers.
    • Final “DeepResearch Report” artifact generation and completion messaging.
    • User-visible cancellation flow with clear status updates.
  • Improvements

    • More consistent conversation continuity from prior messages.
    • Clearer UI event messaging and smoother streaming experience.
  • Refactor

    • Major workflow rearchitecture for stability and scalability.
    • Updated request format to accept a chat-style messages payload.

…anges in @llamaindex/workflow-core 1.3.0

changeset-bot bot commented Sep 4, 2025

⚠️ No Changeset found

Latest commit: 2678502

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR
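For reference, a changeset is just a small markdown file under .changeset/ that names the affected package and the semver bump. A minimal sketch, assuming the affected package is named create-llama and the change warrants a patch bump (both are placeholders to adjust):

---
"create-llama": patch
---

Refactor the DeepResearch workflow template to the Memory/StatefulContext APIs from @llamaindex/workflow-core 1.3.0.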


coderabbitai bot commented Sep 4, 2025

Walkthrough

Refactors DeepResearch workflow to use Memory and StatefulContext, introduces DeepResearchState, updates event handlers (start, plan, research, report), adjusts prompts and createResearchPlan to read from Memory, revises UI/artifact streaming, and changes workflowFactory signature to accept chatBody with Message[].

Changes

Cohort / File(s) Summary
DeepResearch workflow refactor
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
Replaced ChatMemoryBuffer with Memory; adopted StatefulContext and typed DeepResearchState; reworked startAgentEvent, planResearchEvent (iterative/cancel/finish branches), researchEvent, and reportEvent; updated prompts and createResearchPlan to read from Memory; standardized UI/artifact streaming; changed workflowFactory signature to (chatBody: { messages: Message[] }).
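A minimal TypeScript sketch of the reworked surface described above; every name other than workflowFactory (the ChatMessage type and the state fields) is inferred from this summary and the review comments below, not copied from the file:

// Sketch only: field names are assumptions based on the walkthrough, not the real file.
type ChatMessage = { role: "user" | "assistant" | "system"; content: string };

type DeepResearchState = {
  memory: unknown; // llamaindex Memory instance in the real workflow
  userRequest: string;
  contextNodes: unknown[];
  totalQuestions: number;
  researchResults: { questionId: string; question: string; answer: string }[];
};

// New contract: the factory receives the whole chat body instead of a single user input.
export async function workflowFactory(chatBody: { messages: ChatMessage[] }) {
  const lastUserMessage = [...chatBody.messages]
    .reverse()
    .find((m) => m.role === "user");
  if (!lastUserMessage) throw new Error("chat body must contain a user message");
  // The real implementation builds the retriever/index from chatBody, wires the
  // start/plan/research/report handlers over StatefulContext<DeepResearchState>,
  // and returns the workflow instance.
  return { userInput: lastUserMessage.content }; // placeholder return for the sketch
}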

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant WF as workflowFactory
  participant DR as DeepResearch Workflow
  participant Mem as Memory
  participant Src as Retriever
  participant LLM as LLM
  participant UI as UI Stream
  participant Art as Artifact Store

  User->>WF: chatBody { messages }
  WF->>DR: startAgentEvent
  note right of DR: Initialize Memory / State

  DR->>Mem: add(user message)
  DR->>UI: retrieve-in-progress
  DR->>Src: retrieve context nodes
  Src-->>DR: nodes
  DR->>UI: retrieve-done
  DR->>DR: planResearchEvent

  DR->>LLM: createResearchPlan(memory, state)
  alt cancel
    DR->>UI: analyze-done
    DR-->>User: cancel stream event
  else research (questions)
    DR->>Mem: add(assistant questions)
    loop per question
      DR->>UI: research-pending(question)
      DR->>LLM: answerQuestion(context, question)
      LLM-->>DR: answer
      DR->>Mem: add(assistant answer)
      DR->>UI: research-progress(answer)
    end
    DR->>DR: planResearchEvent (iterate)
  else no-further-ideas
    DR->>Mem: add(assistant no-more-ideas)
    DR->>UI: analyze-done
    DR->>DR: reportEvent
  end

  DR->>LLM: stream final report (history from Memory)
  LLM-->>DR: tokens
  DR->>UI: agentStreamEvent(tokens)
  DR->>Art: publish "DeepResearch Report"
  DR-->>User: final result + completion

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

Possibly related PRs

Suggested reviewers

  • thucpn
  • marcusschiesser

Poem

Hoppity-hop through memory lanes,
I stash each thought like springtime grains.
Plans loop twice, then answers flow—
Carrots of context help me grow. 🥕
Report in paw, I thump with cheer:
“Deep research done—the path is clear!”



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (5)
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts (5)

179-186: Seed Memory with prior chat history to preserve context

Current memory only includes the latest user input; prior turns are dropped.

       const { userInput, chatHistory } = event.data;
       const { sendEvent, state } = context;

       if (!userInput) throw new Error("Invalid input");
+      if (Array.isArray(chatHistory) && chatHistory.length) {
+        for (const m of chatHistory) {
+          // m.role from upstream typically: "user" | "assistant" | "system"
+          state.memory.add({ role: m.role as any, content: m.content as any });
+        }
+      }

       state.memory.add({ role: "user", content: userInput });
       state.userRequest = userInput;

Please confirm the exact Message shape in AgentInputData so we can remove the as any casts.


396-400: Fix typo in completion message

-        message: { role: "assistant", content: "the reseach is complete" },
+        message: { role: "assistant", content: "The research is complete." },

453-463: Polish prompt wording

Minor grammar/clarity improvements.

   if (totalQuestions === 0) {
-    return "The student has no questions to research. Let start by providing some questions for the student to research.";
+    return "The student has no questions to research. Let's start by providing some questions for the student to research.";
   }

   if (totalQuestions >= MAX_QUESTIONS) {
-    return `The student has researched ${totalQuestions} questions. Should proceeding writing report or cancel the research if the answers are not enough to write a report.`;
+    return `The student has researched ${totalQuestions} questions. Proceed to write the report, or cancel the research if the answers are insufficient to write a report.`;
   }

131-132: Tighten UI event description

-        "The type of event. DeepResearch has 3 main stages:\n1. retrieve: Retrieve the context from the vector store\n2. analyze: Analyze the context and generate a research questions to answer\n3. answer: Answer the provided questions. Each question has a unique id, when the state is done, the event will have the answer for the question."
+        "The type of event. DeepResearch has 3 stages:\n1. retrieve: Retrieve context from the vector store.\n2. analyze: Analyze the context and generate research questions.\n3. answer: Answer the questions. When a question is done, the event includes its answer."

79-103: Unify citation format across prompts

researchPrompt uses [citation:id] while WRITE_REPORT_PROMPT mentions [citation:id](). Use one format to avoid confusing the model.

-+ Preserve all citation syntax (the `[citation:id]()` parts in the provided context). Keep these citations in the final report - no separate reference section is needed.
++ Preserve all citation syntax (the `[citation:id]` parts in the provided context). Keep these citations in the final report — no separate reference section is needed.

(Keep researchPrompt as-is with [citation:id].)

Also applies to: 111-115

📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 97a7d9b and 2678502.

📒 Files selected for processing (1)
  • packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts (6 hunks)
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-06-30T10:19:29.893Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the workflow factory pattern for workflow creation, i.e., define `workflowFactory` as a function returning an agent instance, optionally async.

Applied to files:

  • packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:19:29.893Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/src/app/workflow*.ts : Organize workflow files separately in development mode, e.g., `src/app/workflow.ts`.

Applied to files:

  • packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:18:57.724Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:18:57.724Z
Learning: Applies to packages/server/src/events.ts : Event system logic, including source, agent, and artifact events, as well as helper functions for converting LlamaIndex data to UI events, should be implemented in src/events.ts.

Applied to files:

  • packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:19:29.893Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/**/*.ts : Use the `agent()` function from `llamaindex/workflow` with tool arrays for agent creation.

Applied to files:

  • packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:19:29.893Z
Learnt from: CR
PR: run-llama/create-llama#0
File: packages/server/examples/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:19:29.893Z
Learning: Applies to packages/server/examples/{simple-workflow/calculator.ts,agentic-rag/index.ts,custom-layout/index.ts,devmode/index.ts,src/app/workflow.ts} : Use the standard server setup pattern: instantiate `LlamaIndexServer` with `workflow`, `uiConfig`, and `port`, then call `.start()`.

Applied to files:

  • packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
📚 Learning: 2025-06-30T10:20:25.875Z
Learnt from: CR
PR: run-llama/create-llama#0
File: python/llama-index-server/CLAUDE.md:0-0
Timestamp: 2025-06-30T10:20:25.875Z
Learning: Workflow factory functions should accept a ChatRequest and return a Workflow instance, following the documented contract.

Applied to files:

  • packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
🧬 Code graph analysis (1)
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts (3)
packages/create-llama/templates/components/use-cases/typescript/code_generator/src/app/workflow.ts (1)
  • workflowFactory (55-337)
packages/create-llama/templates/components/use-cases/typescript/document_generator/src/app/workflow.ts (1)
  • workflowFactory (58-315)
packages/create-llama/templates/components/use-cases/typescript/financial_report/src/app/workflow.ts (2)
  • workflowFactory (28-53)
  • getWorkflow (68-318)
🪛 GitHub Actions: Lint on push or pull request
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts

[warning] 1-1: Prettier formatting issues found in file. Run 'prettier --write' to fix.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, none)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, none)
  • GitHub Check: python (20, 3.11, ubuntu-22.04, fastapi, llamacloud)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, none)
  • GitHub Check: python (20, 3.11, windows-latest, fastapi, llamacloud)
  • GitHub Check: typescript (22, ubuntu-22.04, nextjs, none)
  • GitHub Check: typescript (22, windows-latest, nextjs, llamacloud)
  • GitHub Check: python (20, 3.11, macos-latest, fastapi, llamacloud)
  • GitHub Check: typescript (22, macos-latest, nextjs, none)
  • GitHub Check: typescript (22, ubuntu-22.04, nextjs, llamacloud)
  • GitHub Check: typescript (22, macos-latest, nextjs, llamacloud)
  • GitHub Check: typescript (22, windows-latest, nextjs, none)
🔇 Additional comments (3)
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts (3)

162-170: Validate Memory construction and intended API

new Memory([], {}) assumes this constructor shape and that .add()/.get() match your usage.

If you're on llamaindex >= 0.3.x, confirm this with a quick check. If not, initialize via the recommended factory (e.g., Memory.fromMessages(...)) or adapt to the new API.
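A throwaway check along these lines can settle it quickly (a sketch assuming the .add()/.get() surface used in this file; the import path and whether these methods are async may differ on your llamaindex version):

// Sanity-check sketch for the assumed Memory surface; adjust import and method names to the installed version.
import { Memory } from "llamaindex";

async function checkMemoryApi(): Promise<void> {
  const memory = new Memory([], {});
  await memory.add({ role: "user", content: "ping" });
  const messages = await memory.get();
  console.log("Memory round-trip:", messages);
}

checkMemoryApi().catch((err) => console.error("Memory API mismatch:", err));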


363-375: Confirm raw: stream serialization

Some event sinks JSON-serialize event payloads; passing the live stream object may fail.

If serialization issues arise, pass raw: null or a minimal metadata object instead of the stream instance.
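For illustration, the safer shape mirrors the suggested diffs elsewhere in this review (a fragment sketch; stream, response, and sendEvent come from the report handler quoted further down):

for await (const chunk of stream) {
  response += chunk.delta;
  sendEvent(
    agentStreamEvent.with({
      delta: chunk.delta,
      response,
      currentAgentName: "",
      raw: null, // serializable; never the live stream object
    })
  );
}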


31-34: Factory signature change looks right; ensure getIndex accepts it

You now pass chatBody to getIndex. Confirm its signature expects { messages: Message[] } or update accordingly.

@@ -1,15 +1,18 @@
import { artifactEvent, toSourceEvent } from "@llamaindex/server";

💡 Verification agent

❓ Verification inconclusive

Prettier issues in pipeline

Run the formatter to make CI green.


🏁 Script executed:

#!/bin/bash
pnpm -w prettier --write packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts

Length of output: 228


Install missing Prettier plugin and re-run formatter
Prettier failed with “Cannot find package ‘prettier-plugin-organize-imports’.” Add it to the root devDependencies (pnpm add -D prettier-plugin-organize-imports) and then rerun:

pnpm -w prettier --write packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
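For context, this error means the Prettier config already references the plugin but the package is not installed, so the fix is the missing devDependency plus (if not already present) the plugins entry. A sketch, with versions as placeholders:

package.json (repo root):
{
  "devDependencies": {
    "prettier": "^3.0.0",
    "prettier-plugin-organize-imports": "^4.0.0"
  }
}

.prettierrc (expected to already list the plugin):
{
  "plugins": ["prettier-plugin-organize-imports"]
}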
🧰 Tools
🪛 GitHub Actions: Lint on push or pull request

[warning] 1-1: Prettier formatting issues found in file. Run 'prettier --write' to fix.

🤖 Prompt for AI Agents
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
lines 1-1: Prettier failed because the repo is missing the
prettier-plugin-organize-imports; add the plugin to the root devDependencies
(run pnpm -w add -D prettier-plugin-organize-imports or add it to the root
package.json devDependencies) and then re-run the formatter for this file with
pnpm -w prettier --write
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
so formatting completes successfully.

Comment on lines +244 to +257
if (decision === "cancel") {
  sendEvent(
    uiEvent.with({
      type: "ui_event",
      data: { event: "analyze", state: "done" },
    })
  );
  return agentStreamEvent.with({
    delta: cancelReason ?? "Research cancelled without any reason.",
    response: cancelReason ?? "Research cancelled without any reason.",
    currentAgentName: "",
    raw: null,
  });
}

🛠️ Refactor suggestion

Ensure workflow terminates on cancel

Returning only an agentStreamEvent leaves the workflow running. Send the stream message (optional) and then stop the agent.

       if (decision === "cancel") {
         sendEvent(
           uiEvent.with({
             type: "ui_event",
             data: { event: "analyze", state: "done" },
           })
         );
-        return agentStreamEvent.with({
-          delta: cancelReason ?? "Research cancelled without any reason.",
-          response: cancelReason ?? "Research cancelled without any reason.",
-          currentAgentName: "",
-          raw: null,
-        });
+        const msg = cancelReason ?? "Research cancelled.";
+        sendEvent(
+          agentStreamEvent.with({
+            delta: msg,
+            response: msg,
+            currentAgentName: "",
+            raw: null,
+          }),
+        );
+        return stopAgentEvent.with({
+          result: msg,
+          message: { role: "assistant", content: msg },
+        });
       }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if (decision === "cancel") {
  sendEvent(
    uiEvent.with({
      type: "ui_event",
      data: { event: "analyze", state: "done" },
    })
  );
  return agentStreamEvent.with({
    delta: cancelReason ?? "Research cancelled without any reason.",
    response: cancelReason ?? "Research cancelled without any reason.",
    currentAgentName: "",
    raw: null,
  });
}
if (decision === "cancel") {
  sendEvent(
    uiEvent.with({
      type: "ui_event",
      data: { event: "analyze", state: "done" },
    })
  );
  const msg = cancelReason ?? "Research cancelled.";
  sendEvent(
    agentStreamEvent.with({
      delta: msg,
      response: msg,
      currentAgentName: "",
      raw: null,
    }),
  );
  return stopAgentEvent.with({
    result: msg,
    message: { role: "assistant", content: msg },
  });
}
🤖 Prompt for AI Agents
In
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
around lines 244 to 257, after sending the optional UI stream message for the
"cancel" decision you must terminate the running workflow instead of merely
returning an agentStreamEvent; send the stream message (as you already do) and
then call the agent stop method (e.g. agent.stop()) or the workflow termination
function available in this scope to halt execution before returning the final
agentStreamEvent so the workflow actually stops.

Comment on lines +259 to +281
state.totalQuestions += researchQuestions.length;
state.memory.add({
  role: "assistant",
  content:
    "We need to find answers to the following questions:\n" +
    researchQuestions.join("\n"),
});
researchQuestions.forEach(({ questionId: id, question }) => {
  sendEvent(
    uiEvent.with({
      type: "ui_event",
      data: { event: "answer", state: "pending", id, question },
    })
  );
  sendEvent(researchEvent.with({ questionId: id, question }));
});
const events = await stream
  .until(
    () => state.researchResults.length === researchQuestions.length
  )
  .toArray();
return planResearchEvent.with({});
}

⚠️ Potential issue

Fix deadlock in plan loop and incorrect stringification of questions

  • The wait predicate compares total accumulated results against only the current batch size, which can block forever after the first iteration.
  • researchQuestions.join("\n") is joining objects, yielding "[object Object]".

Apply:

       if (decision === "research" && researchQuestions.length > 0) {
         state.totalQuestions += researchQuestions.length;
         state.memory.add({
           role: "assistant",
           content:
-            "We need to find answers to the following questions:\n" +
-            researchQuestions.join("\n"),
+            "We need to find answers to the following questions:\n" +
+            researchQuestions.map((q) => `- ${q.question}`).join("\n"),
         });
+        const baseline = state.researchResults.length;
         researchQuestions.forEach(({ questionId: id, question }) => {
           sendEvent(
             uiEvent.with({
               type: "ui_event",
               data: { event: "answer", state: "pending", id, question },
             })
           );
           sendEvent(researchEvent.with({ questionId: id, question }));
         });
-        const events = await stream
-          .until(
-            () => state.researchResults.length === researchQuestions.length
-          )
-          .toArray();
+        await stream
+          .until(
+            () =>
+              state.researchResults.length >=
+              baseline + researchQuestions.length
+          )
+          .toArray();
         return planResearchEvent.with({});
       }

Also applies to: 262-265

🤖 Prompt for AI Agents
In
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
around lines 259-281 (also apply to 262-265), the wait predicate and the
questions stringification are wrong: change the stream.until predicate to
compare the accumulated results against the cumulative total (e.g. use
state.researchResults.length >= state.totalQuestions) instead of the current
batch size so the loop cannot deadlock, and build the joined question string
from the question text (e.g. researchQuestions.map(q => q.question).join("\n"))
instead of joining the objects directly so you don’t get "[object Object]".

Comment on lines +296 to +341
workflow.handle(
  [researchEvent],
  async (
    context: StatefulContext<DeepResearchState, WorkflowContext>,
    event
  ) => {
    const { sendEvent, state } = context;
    const { questionId, question } = event.data;

    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: "inprogress",
          id: questionId,
          question,
        },
      })
    );

    const answer = await answerQuestion(
      contextStr(state.contextNodes),
      question
    );
    state.researchResults.push({ questionId, question, answer });

    state.memory.add({
      role: "assistant",
      content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
    });

    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: "done",
          id: questionId,
          question,
          answer,
        },
      })
    );
  }
);

🛠️ Refactor suggestion

Add error handling so a failed answer doesn’t stall the plan loop

Unhandled errors in answerQuestion will prevent researchResults from advancing, hanging the stream.until(...).

-      const answer = await answerQuestion(
-        contextStr(state.contextNodes),
-        question
-      );
-      state.researchResults.push({ questionId, question, answer });
-
-      state.memory.add({
-        role: "assistant",
-        content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
-      });
-
-      sendEvent(
-        uiEvent.with({
-          type: "ui_event",
-          data: {
-            event: "answer",
-            state: "done",
-            id: questionId,
-            question,
-            answer,
-          },
-        })
-      );
+      let answer: string;
+      let errored = false;
+      try {
+        answer = await answerQuestion(
+          contextStr(state.contextNodes),
+          question
+        );
+      } catch (err) {
+        errored = true;
+        answer = `Error: ${err instanceof Error ? err.message : String(err)}`;
+      }
+      state.researchResults.push({ questionId, question, answer });
+
+      state.memory.add({
+        role: "assistant",
+        content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
+      });
+
+      sendEvent(
+        uiEvent.with({
+          type: "ui_event",
+          data: {
+            event: "answer",
+            state: errored ? "error" : "done",
+            id: questionId,
+            question,
+            ...(errored ? {} : { answer }),
+          },
+        })
+      );
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
workflow.handle(
  [researchEvent],
  async (
    context: StatefulContext<DeepResearchState, WorkflowContext>,
    event
  ) => {
    const { sendEvent, state } = context;
    const { questionId, question } = event.data;

    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: "inprogress",
          id: questionId,
          question,
        },
      })
    );

    const answer = await answerQuestion(
      contextStr(state.contextNodes),
      question
    );
    state.researchResults.push({ questionId, question, answer });

    state.memory.add({
      role: "assistant",
      content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
    });

    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: "done",
          id: questionId,
          question,
          answer,
        },
      })
    );
  }
);
workflow.handle(
  [researchEvent],
  async (
    context: StatefulContext<DeepResearchState, WorkflowContext>,
    event
  ) => {
    const { sendEvent, state } = context;
    const { questionId, question } = event.data;
    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: "inprogress",
          id: questionId,
          question,
        },
      })
    );
    let answer: string;
    let errored = false;
    try {
      answer = await answerQuestion(
        contextStr(state.contextNodes),
        question
      );
    } catch (err) {
      errored = true;
      answer = `Error: ${err instanceof Error ? err.message : String(err)}`;
    }
    state.researchResults.push({ questionId, question, answer });
    state.memory.add({
      role: "assistant",
      content: `<Question>${question}</Question>\n<Answer>${answer}</Answer>`,
    });
    sendEvent(
      uiEvent.with({
        type: "ui_event",
        data: {
          event: "answer",
          state: errored ? "error" : "done",
          id: questionId,
          question,
          ...(errored ? {} : { answer }),
        },
      })
    );
  }
);
🤖 Prompt for AI Agents
packages/create-llama/templates/components/use-cases/typescript/deep_research/src/app/workflow.ts
lines 296-341: wrap the async call to answerQuestion in a try/catch so any
thrown error doesn’t halt the workflow; on catch, push a failure entry into
state.researchResults (include questionId, question and an error message/flag),
send a ui_event with state "failed" (include the id, question and error
message), and avoid rethrowing so the plan loop can continue; keep the success
path unchanged.
