Adding support for a new runnable for InlineAgents #340
Conversation
Force-pushed 10ad334 to 5f9ad6c
Good start. I really like how inline agents fit the interfaces better than the create-agent/invoke-agent pattern. Left a comment on the interface itself: we should adhere to the messages input/output contract, similar to how the chat models do it.
```python
prompt_override_configuration: Optional[Dict]
inline_session_state: Optional[Dict]


class BedrockInlineAgentsRunnable(RunnableSerializable[Dict, OutputType]):
```
The contract seems to deviate from how other Runnables are implemented. Notice that the Anthropic chat model uses messages as input and output: https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/chat_models.py#L127
If we support message in and message out, this will plug in easily to LCEL, LangGraph, and the other constructs they have.
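The message-in/message-out contract the reviewer describes can be illustrated with a small stand-in sketch. The `Message` dataclass and `invoke_agent` function below are hypothetical placeholders, not the real langchain-core classes; they only show the shape of the contract that lets a component compose in LCEL-style pipelines.

```python
# Hypothetical sketch of a message-in/message-out contract; Message and
# invoke_agent are illustrative stand-ins, not langchain-core APIs.
from dataclasses import dataclass
from typing import List


@dataclass
class Message:
    role: str      # e.g. "human", "ai", "tool"
    content: str


def invoke_agent(messages: List[Message]) -> Message:
    """Messages in, one message out: the last human turn drives the reply."""
    last_human = next(m for m in reversed(messages) if m.role == "human")
    return Message(role="ai", content=f"echo: {last_human.content}")


reply = invoke_agent([Message("human", "hello")])
assert reply.role == "ai"
```

Because both ends of the contract are messages, the output of one such component can be appended to a history and fed straight into the next one, which is what makes the chat-model interface composable.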
I like this better than the state-based approach. Left a comment to remove the older executor-based approach.
```python
    get_boto_session,
)


class BedrockInlineAgentsChatModel(BaseChatModel):
```
Let's not have both BedrockInlineAgentRunnable and BedrockInlineAgentsChatModel. Remove the state-based BedrockInlineAgentRunnable implementation and rename BedrockInlineAgentsChatModel to BedrockInlineAgentRunnable. I like how the new implementation supports the standardized message interface and will integrate with LangGraph better.
Please update the inline_agent_langgraph notebook to use this implementation instead of the current state-based approach. I would like us to completely remove "should_continue", just as in this sample, which has no should_continue: https://langchain-ai.github.io/langgraph/how-tos/create-react-agent/#usage
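The reason the linked create-react-agent example needs no hand-written `should_continue` can be sketched without LangGraph at all: routing falls out of checking whether the latest model output requested a tool call. The `fake_model` and `run_tools` functions below are illustrative stand-ins.

```python
# Sketch of a ReAct-style loop with no explicit should_continue node: the
# loop ends when the model's last output carries no tool calls. All names
# here are illustrative stand-ins, not langgraph APIs.
def fake_model(messages):
    # Request a tool on the first pass, answer directly once a tool ran.
    if not any(m.get("role") == "tool" for m in messages):
        return {"role": "ai", "tool_calls": [{"name": "search", "args": {"q": "x"}}]}
    return {"role": "ai", "content": "done", "tool_calls": []}


def run_tools(calls):
    return [{"role": "tool", "content": f"result:{c['name']}"} for c in calls]


def react_loop(messages):
    while True:
        ai = fake_model(messages)
        messages.append(ai)
        if not ai.get("tool_calls"):   # implicit "should_continue"
            return messages
        messages.extend(run_tools(ai["tool_calls"]))


history = react_loop([{"role": "human", "content": "find x"}])
assert history[-1]["content"] == "done"
```

In LangGraph this check becomes a conditional edge keyed on the last message, so no dedicated decision node has to be written.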
Thanks, merged the two interfaces and added a LangGraph integration notebook.
We'd still need the custom parsers for running the graph because of the way return-of-control (RoC) inputs are passed to the agent. We can investigate how to handle this natively within the runnable itself in future iterations.
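What such a custom return-of-control parser does can be sketched roughly as follows: pull the function name and parameters out of a `returnControl` event and hand them to the graph as a tool-call dict. The event shape below is abbreviated and partly assumed from the Bedrock agent response stream; treat it as an illustration, not the exact service model.

```python
# Hedged sketch of a return-of-control (RoC) parser: turn an assumed
# "returnControl" event into a tool-call dict a graph node can dispatch.
def parse_return_control(event):
    roc = event.get("returnControl")
    if not roc:
        return None
    inv = roc["invocationInputs"][0]["functionInvocationInput"]
    return {
        "invocation_id": roc["invocationId"],
        "tool": inv["function"],
        "args": {p["name"]: p["value"] for p in inv.get("parameters", [])},
    }


call = parse_return_control({
    "returnControl": {
        "invocationId": "abc-123",
        "invocationInputs": [{"functionInvocationInput": {
            "function": "get_weather",
            "parameters": [{"name": "city", "value": "Seattle"}],
        }}],
    }
})
assert call["tool"] == "get_weather"
```

The `invocation_id` is kept so the tool result can be routed back to the agent on the next `InvokeInlineAgent` call.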
```python
    response_text = json.dumps(event)
    break

if "chunk" in event:
```
Be sure to add the files to the output; see https://github.com/jdbaker01/langchain-aws/blob/main/libs/aws/langchain_aws/agents/base.py#L76 and https://github.com/jdbaker01/langchain-aws/blob/main/libs/aws/langchain_aws/agents/base.py#L82. Code Interpreter outputs files as well as the response text.
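A minimal sketch of what collecting both text chunks and Code Interpreter files from the event stream might look like, with simplified, assumed event shapes (the real stream is a boto3 `EventStream`):

```python
# Sketch: accumulate both "chunk" text and "files" entries while iterating
# the agent's event stream. Event shapes are simplified assumptions.
def collect(event_stream):
    text_parts, files = [], []
    for event in event_stream:
        if "chunk" in event:
            text_parts.append(event["chunk"]["bytes"].decode("utf-8"))
        elif "files" in event:
            files.extend(event["files"].get("files", []))
    return "".join(text_parts), files


events = [
    {"chunk": {"bytes": b"Hello "}},
    {"files": {"files": [{"name": "plot.png", "type": "image/png"}]}},
    {"chunk": {"bytes": b"world"}},
]
text, files = collect(events)
assert text == "Hello world"
```

Returning the files alongside the text keeps Code Interpreter artifacts from being silently dropped.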
```python
# in _parse_intermediate_steps(...)
        return session_id, session_state

    return None, None


class BedrockInlineAgentsRunnable(BaseChatModel):
```
What was the motivation behind using ChatModel as the base rather than Runnable/RunnableSerializable? ChatModels have specific mechanisms for working with tools, for example bind_tools, that this implementation will bypass, so this might not be the right interface for the agents.
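The bind_tools contract the reviewer alludes to can be illustrated with a stand-in class: a chat model returns a copy of itself with tool schemas attached, and downstream code (agents, LangGraph prebuilts) relies on that behavior. The class below is a hypothetical sketch, not the langchain-core implementation.

```python
# Illustrative stand-in for the bind_tools mechanism: binding returns a new
# model instance carrying the tool schemas. Not the real BaseChatModel.
import copy


class SketchChatModel:
    def __init__(self):
        self.tools = []

    def bind_tools(self, tools):
        bound = copy.copy(self)
        bound.tools = list(tools)   # tools travel with the bound instance
        return bound


model = SketchChatModel().bind_tools([{"name": "search"}])
assert [t["name"] for t in model.tools] == ["search"]
```

If an inline-agents class subclasses ChatModel but configures tools through its own constructor instead, callers that expect `bind_tools` to be the tool-wiring path will be surprised, which is the crux of the interface question.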
```python
session_id = response["sessionId"]
trace_log_elements = []
files = []
for event in event_stream:
```
Should we have some sort of signal for unknown event types so consumers can more easily identify service model changes?
Currently this is not a type within the response; will investigate more and update this in a later PR.
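One possible shape for the unknown-event signal suggested above: warn whenever an event carries a key outside the handled set, so service model changes surface early instead of being silently skipped. The handled-key set below is illustrative.

```python
# Sketch: warn on event keys outside an assumed handled set, so new event
# types introduced by the service are noticed by consumers.
import warnings

KNOWN_EVENT_KEYS = {"chunk", "trace", "files", "returnControl"}


def check_event(event):
    unknown = set(event) - KNOWN_EVENT_KEYS
    if unknown:
        warnings.warn(f"Unrecognized agent event keys: {sorted(unknown)}")
    return unknown


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    check_event({"newEventType": {}})
assert len(caught) == 1
```

A warning (rather than an exception) keeps existing streams working while still flagging drift from the expected response model.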
```python
        )
    ]
except Exception as ex:
    raise Exception("Parse exception encountered {}".format(repr(ex)))
```
Is there an exception hierarchy that we can add onto here? Exception is broad.
Unsure about this one; keeping it broad to encompass all validation and serviceUnavailable exceptions.
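A middle ground between bare `Exception` and enumerating every service error is a small library-specific hierarchy: callers who want "everything" catch the base class, while parse failures stay distinguishable. The class names below are suggestions, not existing langchain-aws API.

```python
# Sketch of a narrow exception hierarchy as an alternative to raising bare
# Exception. Class names are suggestions, not existing API.
class BedrockAgentsError(Exception):
    """Base for errors raised by the inline-agents runnable."""


class OutputParseError(BedrockAgentsError):
    """Raised when the agent response cannot be parsed."""


def parse(raw):
    try:
        return raw["completion"]
    except Exception as ex:
        raise OutputParseError(f"Parse exception encountered: {ex!r}") from ex


try:
    parse({})
except BedrockAgentsError as err:   # callers can still catch broadly
    caught = err
assert isinstance(caught, OutputParseError)
```

Chaining with `from ex` preserves the original validation or serviceUnavailable error for debugging while giving callers a stable type to catch.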
Force-pushed 39caa5e to 61ac9a0
@divekarshubham
Description

This PR introduces a new Bedrock Inline-Agents Runnable that allows using the InvokeInlineAgent API with return-of-control functions as tools and configurable agentic properties.

Usage

See the Jupyter notebook for usage with `AgentExecutor`.

Closes #309