A big, big restructure is coming! #7
nicksolarsoul announced in Announcements
About 90% of the way there now, a true agentic system is coming, allowing you to drop in additional LLM models that autonomously collaborate with a "supervisor". Watch this space!
We are just 1 star away from 250 - what a week it has been since release!
One big lesson to take away from this is that a multi-agent approach is required to maximize the usefulness of the research. I first re-coded the project in CrewAI but hit too many limitations, then switched to AG2 (AutoGen) but ran into too many bugs.
What became quite apparent is the current state of multi-agent frameworks and their reliability: between the bugs and the frameworks' restrictions, I couldn't implement some of the fallbacks the project needs.
So I've opted for a multi-agent approach built on pure LangChain and LangGraph, with an orchestration method similar to this one: https://github.com/langchain-ai/langgraph/blob/main/docs/docs/tutorials/multi_agent/hierarchical_agent_teams.ipynb
This will bring its own challenges, but with ongoing LLM advancements like Gemini 2.5 Flash being both cheap and smart enough to make decisions, I feel I can overcome them quickly as they arise. LangChain/LangGraph also has the maturity and flexibility to handle the situations a framework doesn't cover out of the box.
With this in mind, I can create more roles and scale the platform in different ways: teaching it to be critical of its own research, fact-check, verify sources, and more, through agents with laser-focused tasks.
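To make the orchestration idea concrete, here is a minimal sketch of a supervisor pattern in LangGraph, in the spirit of the hierarchical agent teams tutorial linked above. The node functions are placeholders rather than real LLM calls, and the role names (`researcher`, `fact_checker`) and state fields are illustrative assumptions, not the project's actual implementation; exact LangGraph APIs may also vary slightly between versions.

```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END


# Illustrative state for one research task; field names are assumptions.
class ResearchState(TypedDict):
    task: str
    findings: str
    verified: bool
    steps: int


def researcher(state: ResearchState) -> dict:
    # In the real system this would call an LLM with search tools.
    return {"findings": f"draft notes on: {state['task']}"}


def fact_checker(state: ResearchState) -> dict:
    # A laser-focused agent that only verifies claims and source links.
    return {"verified": True}


def supervisor(state: ResearchState) -> dict:
    # In the real system a cheap model (e.g. 2.5 Flash) would decide the next step.
    return {"steps": state["steps"] + 1}


def route(state: ResearchState) -> str:
    # Routing logic: research first, then verify, then finish.
    if not state["findings"]:
        return "researcher"
    if not state["verified"]:
        return "fact_checker"
    return "done"


builder = StateGraph(ResearchState)
builder.add_node("supervisor", supervisor)
builder.add_node("researcher", researcher)
builder.add_node("fact_checker", fact_checker)

builder.add_edge(START, "supervisor")
builder.add_conditional_edges(
    "supervisor",
    route,
    {"researcher": "researcher", "fact_checker": "fact_checker", "done": END},
)
# Each worker reports back to the supervisor for the next decision.
builder.add_edge("researcher", "supervisor")
builder.add_edge("fact_checker", "supervisor")

graph = builder.compile()
result = graph.invoke({"task": "example topic", "findings": "", "verified": False, "steps": 0})
```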
Most of the research I ran checked out great, but one run (out of roughly 50) mis-attributed a source link. That is unacceptable to the perfectionist in me!
So hold off on the PRs - the big change will land in the coming week.
Of course, I want to keep it intuitive, with the ability to drop multiple LLMs in and out, and highly configurable: choose the depth of the research, enable or disable fact-checking, etc., depending on whether you want to favor token efficiency or accuracy.
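As a rough illustration of what that configurability could look like (field names, defaults, and model IDs here are hypothetical, not the settings the project will actually ship):

```python
from dataclasses import dataclass, field


# Hypothetical configuration sketch; every field name and value is illustrative only.
@dataclass
class ResearchConfig:
    research_depth: int = 2        # how many rounds of follow-up research to run
    fact_checking: bool = True     # spend extra tokens verifying claims
    verify_sources: bool = True    # re-check that cited links actually support the text
    models: dict[str, str] = field(default_factory=lambda: {
        "supervisor": "gemini-2.5-flash",  # cheap, fast decision-making
        "researcher": "gpt-4o",            # example drop-in model
    })


# Favor token efficiency:
fast = ResearchConfig(research_depth=1, fact_checking=False, verify_sources=False)

# Favor accuracy:
thorough = ResearchConfig(research_depth=3)
```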