This repository has been archived by the owner on Mar 5, 2024. It is now read-only.
(apologies if the text has inconsistencies, it's written at 2am)
During the 2-hour workshop on the 13th of April, I raised a point a couple of times regarding the implications of TSP design for application-level adoption & interoperability.
The discussion of splitting TSP into an ‘as-thin-as-possible’ core layer & any number of overlay protocols (Trust Task Messaging Protocol, alpha, beta… omega, etc.) may have considerable implications on the application layer. I’m not saying we should not do it, but I would advocate for understanding the implications and making the overlay design choices based on their impact at the application level, where adoption happens.
To ensure you understand what I mean by application-level trust tasks: they are tasks designed and created by developers who are most familiar with application-level SDKs, designing & implementing payment protocols, authentication protocols, etc. Essentially, developers who may or may not care about how the TSP functions. A comparison to the regular web stack would be a JavaScript framework developer or an OpenID4VC implementer, who does not need to care how TCP/IP works, but knows how the HTTP protocol works.
I.e., an application developer of trust tasks should understand that there are DIDs/AIDs and that there are choices they can make regarding TSP or TTMP (& other overlays), but they do not need to fully understand how TSP works.
Who makes the choice? Do we create interoperability problems?
By stating that the TSP core is as thin as possible, and that anyone can develop their own overlay, we require that somebody also makes a choice about which higher-level protocols (e.g. TTMP) are used. Instead, we should carefully design the layers so that they span as well as the core does. The more non-spanning layers are available, the more complexity is pushed onto application trust task developers, introducing a steep learning curve and reduced adoption due to complexity.
Rigid Spanning Layer?
By keeping the TSP as rigid as possible, while maintaining a level of modularity in its design, we ensure that trust task applications do not need to make difficult interoperability choices. The point is that “The Trust Spanning Overlays” need to span as well as the core does. If one side has different overlays available for use than the other, we run into interoperability problems.
The T3 Trust Tasks should be dynamically configurable so that they can choose their set of overlays without the risk that the other party in the interaction doesn’t support that feature.
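To make the dynamic-configurability point concrete, here is a minimal sketch of what overlay negotiation between two endpoints could look like. All names here (`negotiate_overlays`, the overlay labels `"TTMP"`, `"alpha"`, `"beta"`) are illustrative assumptions, not part of any TSP specification:

```python
# Hypothetical sketch: two parties advertise the overlays they support,
# and the interaction uses only the intersection. None of these names
# come from the TSP spec; they illustrate the negotiation idea only.

def negotiate_overlays(ours, theirs):
    """Return the overlays both parties support, in our preference order.

    If the intersection is empty, the parties can still interact over
    the bare spanning-layer core; a trust task should not hard-fail
    just because the other side lacks an optional overlay.
    """
    theirs = set(theirs)
    return [overlay for overlay in ours if overlay in theirs]

# One side supports TTMP and a hypothetical "alpha" overlay;
# the other supports only TTMP -> they agree on TTMP.
assert negotiate_overlays(["TTMP", "alpha"], ["TTMP"]) == ["TTMP"]

# No shared overlays -> fall back to core-only interaction.
assert negotiate_overlays(["alpha"], ["beta"]) == []
```

The design choice this illustrates: if overlays are negotiated at interaction time, the application developer never has to guess in advance which overlays the counterparty ships, which is exactly the interoperability risk raised above.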
So, what are the design implications of TSP for interop & adoption of application-level trust tasks?