Background
A2UI provides a clean separation between UI structure and application state through the `updateDataModel` message.

This model works well for typical UI updates. However, when building AI-native applications (chat interfaces, report generation, document synthesis), developers frequently stream large LLM outputs into a single data model field.
Modern LLM responses may reach:

- tens of thousands of tokens
- hundreds of KB of text
- continuously streamed outputs
Problem
According to the current v0.9 specification, `updateDataModel` replaces the value at the specified path:

```json
{
  "version": "v0.9",
  "updateDataModel": {
    "surfaceId": "chat",
    "path": "/messages/0/content",
    "value": "Hello world"
  }
}
```

During streaming generation, servers typically accumulate tokens and repeatedly send updates carrying the full accumulated value:

```json
{
  "version": "v0.9",
  "updateDataModel": {
    "surfaceId": "chat",
    "path": "/messages/0/content",
    "value": "Hello"
  }
}
```

Because each update resends everything generated so far, the transmitted payload grows continuously as the content becomes larger. For long responses, total network usage grows approximately O(n²) in the response length.
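The quadratic growth is easy to verify with a back-of-the-envelope sketch (illustrative only; chunk sizes and counts are made up, and message framing overhead is ignored):

```python
# Compare total bytes transmitted when streaming n chunks with
# replace semantics (resend the full value) vs. append semantics
# (send only the new chunk). Framing/JSON overhead is ignored.

def total_bytes_replace(chunks):
    """Each update resends the entire accumulated string."""
    total, accumulated = 0, ""
    for chunk in chunks:
        accumulated += chunk
        total += len(accumulated)  # payload carries the whole string so far
    return total

def total_bytes_append(chunks):
    """Each update sends only the new chunk."""
    return sum(len(chunk) for chunk in chunks)

chunks = ["token "] * 1000          # 1,000 equal-sized chunks, 6 bytes each
print(total_bytes_replace(chunks))  # 3,003,000 bytes — O(n²)
print(total_bytes_append(chunks))   # 6,000 bytes — O(n)
```

For 1,000 six-byte chunks, replace semantics transmit roughly 500× more data than append semantics, and the gap widens linearly with response length.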
Observation
The current protocol semantics are replace-based, not incremental.
While functionally correct, this creates inefficiencies for one of A2UI’s primary emerging workloads: LLM streaming.
Proposed Improvement
Introduce an incremental update pattern compatible with the existing `updateDataModel` structure. Instead of redefining message types, this proposal suggests extending the semantics via an optional operation mode.
Example:
```json
{
  "version": "v0.9",
  "updateDataModel": {
    "surfaceId": "chat",
    "path": "/messages/0/content",
    "op": "append",
    "value": " world"
  }
}
```

Client behavior: the client appends `value` to the existing string at `path`, rather than replacing the value.
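A minimal sketch of how a client could apply this message, assuming a plain dict/list data model and JSON Pointer-style paths as in the examples above (the handler itself is hypothetical, not part of the spec):

```python
# Hypothetical client-side handler for the proposed "op" extension.
# Field names ("surfaceId", "path", "op", "value") follow the examples
# above; everything else is an assumption for illustration.

def apply_update(data_model, message):
    """Apply an updateDataModel message to a nested dict/list data model."""
    payload = message["updateDataModel"]
    keys = [k for k in payload["path"].split("/") if k]  # JSON Pointer-style
    parent = data_model
    for k in keys[:-1]:
        parent = parent[int(k)] if isinstance(parent, list) else parent[k]
    last = keys[-1]
    if isinstance(parent, list):
        last = int(last)
    if payload.get("op") == "append":
        parent[last] = parent[last] + payload["value"]  # incremental mutation
    else:
        parent[last] = payload["value"]  # default: replace (current behavior)

model = {"messages": [{"content": "Hello"}]}
apply_update(model, {
    "version": "v0.9",
    "updateDataModel": {
        "surfaceId": "chat",
        "path": "/messages/0/content",
        "op": "append",
        "value": " world",
    },
})
# model["messages"][0]["content"] is now "Hello world"
```

Note that the fallback branch preserves today's replace semantics, so the same handler serves both old and new messages.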
Why This Matters
Efficient streaming enables:
✅ scalable chat interfaces
✅ document-scale generation
✅ reduced bandwidth consumption
✅ smoother progressive rendering
✅ improved mobile performance
Without incremental updates, developers must repeatedly resend the full accumulated state.
Alternative (Non-Spec) Workarounds Today
Current implementations typically resort to workarounds such as repeatedly resending the full accumulated state on every update. While workable, these approaches introduce additional complexity that could instead be standardized.
Compatibility
This proposal is fully backward compatible:
- If `op` is omitted, the client replaces the value at the path (current v0.9 behavior).
- If `op` is present (e.g. `"append"`), the client performs an incremental mutation.
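As an end-to-end illustration, a server streaming LLM output could emit one small append-mode message per chunk instead of resending the full value (a sketch; the function name and defaults are placeholders):

```python
# Hypothetical server-side emitter: wrap each streamed chunk in an
# append-mode updateDataModel message. Message fields follow the
# examples in this proposal; the generator itself is illustrative.

def stream_updates(chunks, surface_id="chat", path="/messages/0/content"):
    """Yield one append-mode updateDataModel message per streamed chunk."""
    for chunk in chunks:
        yield {
            "version": "v0.9",
            "updateDataModel": {
                "surfaceId": surface_id,
                "path": path,
                "op": "append",
                "value": chunk,  # only the new content, not the full string
            },
        }

messages = list(stream_updates(["Hello", " world"]))
# Two messages, each carrying only its own chunk in "value".
```

Each message is constant-sized regardless of how much content has already been streamed, which is the whole point of the proposal.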
Motivation
A2UI is increasingly positioned as a protocol for AI-driven, streaming-first interfaces.
Standardizing an incremental data update mechanism would make the protocol a substantially better fit for these streaming-first workloads.