
Representation of capabilities #15

Open
Gozala opened this issue Nov 16, 2021 · 52 comments

Comments

Gozala commented Nov 16, 2021

The following example illustrates the issue: https://observablehq.com/d/7fb5d13d63f667b9. For convenience, the snippet is included here as well:

const service = await UCAN.EdKeypair.create()
const user = await UCAN.EdKeypair.create()

const root = await UCAN.build({
  issuer: service,
  audience: user.did(),
  capabilities: [
    {
      "wnfs": "gozala/",
      "cap": "OVERWRITE"
    }
  ]
})

const user2 = await UCAN.EdKeypair.create()
const child = await UCAN.build({
  issuer: user,
  audience: user2.did(),
  capabilities: [
    {
      "wnfs": "gozala/photos",
      "cap": "OVERWRITE"
    }
  ],
  proof: UCAN.encode(root)
})

assert(await UCAN.isValid(child) === true)
matheus23 (Member) commented Nov 16, 2021

That's right.

The checking for capabilities needs to be configurable, and the wnfs capability logic should be supported by default by this library.

One way this could look:

type CapabilityCheck = (parent: Capability, child: Capability) => Promise<boolean>

export async function isValid(ucan: Ucan, attenuationChecks?: Record<string, CapabilityCheck>): Promise<boolean>

// usage:
const wnfsCapability: CapabilityCheck = async (parent, child) => {
  if (typeof parent.wnfs !== "string" || typeof child.wnfs !== "string") return false
  if (typeof parent.cap !== "string" || typeof child.cap !== "string") return false
  if (parent.cap !== child.cap) return false // simplified for now
  if (child.wnfs.startsWith(parent.wnfs) && parent.wnfs.endsWith("/")) return true
  return false
}
await isValid(child, { "wnfs": wnfsCapability })

The reason this isn't supported by this library today is that we're using something else in production right now. We're in the process of both upgrading what we're running in production to the new version of the spec (which is what this library is based on) and doing the necessary upgrades to this library to support our production use-cases.

Gozala (Author) commented Nov 16, 2021

The checking for capabilities needs to be configurable, and the wnfs capability logic should be supported by default by this library.

I was going to suggest exactly that, but have not gotten around to doing it yet. However, I would suggest an alternative API instead, more along the lines of:

/**
 * Validates that the UCAN does not escalate capabilities. Returns a set of errors for capabilities that
 * escalate their privileges.
 */
declare function validate <C>(ucan: Ucan<C>, attenuationChecks?: Record<string, CapabilityCheck<C>>): Promise<Result<EscalationError<Capability>[], void>>

type CapabilityCheck<Capability> =
   (parent: Capability, child: Capability) => Promise<Result<EscalationError<Capability>, void>>

interface EscalationError<Capability> extends Error {
  available: Capability
  claimed: Capability
}

type Result<X, T> =
  | { ok: true, value: T }
  | { ok: false, error: X }

@matheus23 (Member)

Minor nitpick: This forces a single type of capability per UCAN (the C), but usually one wants to support UCANs with many types of capabilities, possibly even foreign ones that we don't know about.


I agree that we need to provide more information than isValid. However, I'm thinking of something more like isValid(...): Promise<Capability[] | false> right now.

I disagree about providing that much information on errors. It's not like that would be useful for error messages in apps to users with broken UCAN chains. It's more useful for developers, but I'd rather keep the API simple and point developers to debugging tools like https://ucancheck.fission.app/.
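For illustration, that `Capability[] | false` shape could be sketched as follows. This is a hypothetical stand-in, not this library's actual API: the `Capability` type and the `isDelegated` predicate are placeholders for whatever per-capability semantics apply.

```typescript
type Capability = Record<string, string>

// Hypothetical sketch: on success, return the list of verified capabilities;
// if any claimed capability is not covered by a parent capability, return false.
async function isValid(
  claimed: Capability[],
  parent: Capability[],
  isDelegated: (parent: Capability, child: Capability) => boolean
): Promise<Capability[] | false> {
  const verified = claimed.filter(child => parent.some(p => isDelegated(p, child)))
  // every claimed capability must be covered by some parent capability
  return verified.length === claimed.length ? verified : false
}
```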

expede (Member) commented Nov 16, 2021

Not sure if helpful, but an idea:

We could also use C to capture "the ones we know about" and wrap it in a specialized Result type that includes "unknown, treating this as a string"

type CapResult<X, T> =
  | {ok: true, value: T, unknowns: string[] }
  | {ok: false, error: X}

Of course you won't be able to consume the unknowns semantically, but it leaves it open to extension while acknowledging the limitations of the type.

expede (Member) commented Nov 16, 2021

Using validation tools like https://ucancheck.fission.app/ is undoubtedly helpful in development, though! I think everyone on this thread is a fan of what types buy us, though they're certainly not a silver bullet when you don't know the complete type ahead of time

Gozala (Author) commented Nov 16, 2021

I also think that the fact that there is no general way to compare capabilities might be a design flaw. While it makes the system open ended, it also makes it a bit confusing. I'm still not sure what to put in $TYPE, $IDENTIFIER and $CAPABILITY, and maybe a little more constrained structure would help there.

It appears that cap is a discriminant field by which capabilities are identified. But then $IDENTIFIER is also kind of that.

I think it would be better if capabilities

Minor nitpick: This forces a single type of capability per UCAN (the C), but usually one wants to support UCANs with many types of capabilities, possibly even foreign ones that we don't know about.

Well, I was assuming C would be a type union, so yes, it would be a single type, but you could differentiate.

Gozala (Author) commented Nov 16, 2021

I disagree about providing that much information on errors. It's not like that would be useful for error messages in apps to users with broken UCAN chains. It's more useful for developers, but I'd rather keep the API simple and point developers to debugging tools like https://ucancheck.fission.app/.

So when we get a request on the service endpoint with escalated capabilities, we would like to respond with an error explaining what the problem is, rather than just responding with "invalid token".

I think it is especially useful for user-defined capabilities, where tools like https://ucancheck.fission.app/ cannot really compare unknown capabilities or tell you which one escalates and how. A custom checker would have domain knowledge and could therefore provide a more meaningful explanation.

Additionally you could always wrap that into simpler API:

const isValid = async <C> (ucan: UCAN<C>, check: (parent: C, child: C) => boolean) => {
   const result = await validate(ucan, async (parent, child) => {
       if (await check(parent, child)) {
           return { ok: true, value: undefined }
       } else {
          return { ok: false, error: new EscalationError(parent, child) }
       }
   })

  return result.ok
}

Gozala (Author) commented Nov 16, 2021

This may be a separate issue, but it is related enough that I think it makes sense to capture it here. While I see value in capabilities being open ended, I do think that makes them kind of confusing and possibly counterproductive. More specifically, from the definition:

capabilities is an array of resources and permission levels formatted as:

{
  $TYPE: $IDENTIFIER,
  "cap": $CAPABILITY
}

It is anything but obvious what one should put in $TYPE, $IDENTIFIER and $CAPABILITY. Specifically, it is not exactly clear which of these is the discriminant: is it $TYPE, is it $CAPABILITY, or both?

I suspect that putting a little more constraint and structure on the capability definition may make them less confusing and possibly provide a way to:

  1. Recognize capabilities that are in the same category
  2. Make them comparable

That is to suggest that I think it might be a good idea if a capability represented the conceptual triple [subject, op, scope] (which I guess roughly corresponds to $TYPE, $CAPABILITY, $IDENTIFIER), where subject+op is the discriminant and scope is generally comparable. That would lead to the following type representation:

type Capability<
  OP extends string = string,
  Subject extends string = string,
  Restriction extends Constraint = Constraint
> = {
   [key in `${OP}:${Subject}`]: Restriction
}


type Constraint =
   | undefined
   | Scope
   | Limit
   | ConstraintGroup

type ConstraintGroup = {[key: string]: string|number}
type Limit = number
type Scope = string


const checkConstraint = (parent:Constraint, child:Constraint) => {
   switch (typeof parent) {
      // If the child scope is contained in the parent scope, the constraint holds
      case 'string':
         return checkScope(parent, child)
      // If the parent limit is greater than the child's, the constraint holds
      case 'number':
         return checkLimit(parent, child)
      // If the parent has no constraint it's unlimited, so the constraint holds
      case 'undefined':
         return holds()
      default:
         return typeof child === 'undefined'
                ? exceeds(parent, child)
                : typeof child !== 'object'
                ? incomparable(parent, child)
                : checkGroup(parent, child)
   }
}


const checkScope = (parent:Scope, child:Constraint, id?:string) =>
   child === undefined
   ? exceeds(parent, '', id)
   : typeof child !== 'string'
   ? incomparable(parent, child, id)
   : child.startsWith(parent)
   ? holds()
   : exceeds(parent, child, id)

const checkLimit = (parent:Limit, child:Constraint, id?:string) =>
   child === undefined
   ? exceeds(parent, Infinity, id)
   : typeof child !== 'number'
   ? incomparable(parent, child, id)
   : child > parent
   ? exceeds(parent, child, id)
   : holds()


const checkGroup = (parent:ConstraintGroup, child:ConstraintGroup) => {
   let violations:EscalationError<Constraint>[] = []
   for (const [id, value] of Object.entries(child)) {
      const base = parent[id]
      switch (typeof base) {
         // If the parent had no such restriction, the constraint holds because the child
         // is more restricted
         case 'undefined':
             break
         // If the child limit is greater than the parent's, it is escalating.
         case 'number':
            const limit = checkLimit(base, value, id)
            violations = limit.ok ? violations : [...violations, limit.error]
            break
         case 'string':
            const scope = checkScope(base, value, id)
            violations = scope.ok ? violations : [...violations, scope.error]
            break
      }
   }
   return violations.length === 0 ? holds() : escalates(parent, child, violations)
}


declare function holds():{ok:true}
declare function incomparable <C1, C2> (parent:C1, child:C2, id?:string):{ok:false, error:IncomparableError<C1, C2>}
declare function exceeds <C extends Constraint>(parent:C, child:C, id?:string):{ok:false, error:EscalationError<C>}
declare function escalates <C>(parent:C, child:C, violations:EscalationError<Constraint>[]):{ok:false, error:EscalationError<C>}

interface IncomparableError<C1, C2> extends EscalationError<C1|C2> {
   parent: C1
   child: C2
}

interface EscalationError<C> extends Error {
   parent: C
   child: C
}

Gozala (Author) commented Nov 16, 2021

Thinking more about this, I think it would be better to represent UCAN capabilities not as an array of Capability but rather as a dictionary of constraints, as that structure would prevent tokens from having overlapping capabilities. It could look something like this:

type Capabilities<ID extends Capability = Capability> = {
   [key in ID]?: Constraint
}

type Capability<OP extends string = string, Subject extends string = string> = `${OP}:${Subject}`


type Constraint =
   | undefined
   | Scope
   | Limit
   | ConstraintGroup

type ConstraintGroup = {[key: string]: string|number}
type Limit = number
type Scope = string

const check = <ID extends Capability> (parent:Capabilities<ID>, child:Capabilities<ID>) => {
   const violations = []
   for (const [id, constraint] of iterateCapabilities(child)) {
      // compare the parent's constraint for this capability against the child's claim
      const result = checkConstraint(parent[id], constraint)
      if (!result.ok) {
         violations.push(result.error)
      }
   }
   return violations.length === 0 ? holds() : escalates(parent, child, violations)
}

const iterateCapabilities = <ID extends Capability>(capabilities:Capabilities<ID>) =>
   <Array<[ID, Constraint]>>Object.entries(capabilities)

expede (Member) commented Nov 17, 2021

prevent tokens from having overlapping capabilities

There are use cases for overlapping capabilities. For example, I may want to be able to do atomic updates to multiple file systems, in which case I need several WNFS capabilities. Inside a single file system, I may only have write access to two directories, but not a common parent.

(There are of course also use cases for disparate capabilities)

Gozala (Author) commented Nov 17, 2021

There are use cases for overlapping capabilities. For example, I may want to be able to do atomic updates to multiple file systems, in which case I need several WNFS capabilities. Inside a single file system, I may only have write access to two directories, but not a common parent

I think that aligns with the proposed encoding, e.g. here you have the capability to write into /gozala/notes/ & /gozala/archive/ but not to /gozala/ or any of its other child directories.

{
   "write:wnfs": "/gozala/notes/",
   "write:wnfs": "/gozala/archive/"
}

I was referring to a different kind of overlap, e.g. the example below, where both capabilities are the same but different restrictions are imposed. That complicates things, as it would require extra logic for merging those.

[
   {
      "w3": { maxSpace: 7000 },
      "cap": "upload"
   },
   {
      "w3": { maxSpace: 6000, maxFiles: 50 },
      "cap": "upload"
   }
]

In the above example the unified capability could be {maxSpace: 7000, maxFiles: 50}, but it could also be {maxSpace: 6000, maxFiles: 50} or {maxSpace: 7000}. In contrast, encoding via the proposed structure avoids that complication: both capabilities would correspond to the upload:w3 key, so the token issuer would have to explicitly merge them and set the key to one of the values.
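To make the ambiguity concrete in code, here is a sketch of one of the merge policies listed above (keep only the restrictions both grants impose, at the looser value, yielding {maxSpace: 7000}). `mergeRestrictions` is a hypothetical helper for illustration, not part of any spec:

```typescript
type Restrictions = Record<string, number>

// Hypothetical merge policy: where both grants restrict the same key,
// take the higher (looser) limit; a restriction imposed by only one
// grant does not bind the union, so it is dropped.
function mergeRestrictions(a: Restrictions, b: Restrictions): Restrictions {
  const merged: Restrictions = {}
  for (const key of Object.keys(a)) {
    if (key in b) merged[key] = Math.max(a[key], b[key])
  }
  return merged
}
```

Other equally defensible policies (take the stricter value, or keep all keys) produce the other candidates, which is exactly why leaving the merge to the verifier is awkward.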

Gozala (Author) commented Nov 17, 2021

Oh, I am a fool: my first example has the same key twice.

Gozala (Author) commented Nov 17, 2021

I think my general sentiment is that it would be nice if capabilities were structured in a way that removed any surface for ambiguity.

I am conceptualizing a capability as an op that you can perform on a given resource, optionally constrained. That is what motivated me to suggest the following structure:

type Capabilities<ID extends Capability = Capability> = {
   [key in ID]?: Constraint
}

type Capability<OP extends string = string, Subject extends string = string> = `${OP}:${Subject}`

I made a mistake earlier, but with the above definition the quoted scenario would be represented as illustrated below:

There are use cases for overlapping capabilities. For example, I may want to be able to do atomic updates to multiple file systems, in which case I need several WNFS capabilities. Inside a single file system, I may only have write access to two directories, but not a common parent.

{
   "write:/wnfs/gozala/notes/": {},
   "write:/wnfs/gozala/archive/": {}
}

Keys here end up as ${CAP}:${TYPE}${IDENTITY}, whereas constraints / restrictions can be listed in the corresponding values. Now that I'm looking at it, I think it might be better yet to borrow from HTTP and encode capabilities as ${OP} ${RESOURCE}, where the resource is represented via a URI. The URI protocol could then fill the role of TYPE, which would translate the above to:

{
   "PATCH wnfs://gozala.io/notes/": {},
   "PATCH wnfs://gozala.io/archive/": {}
}

I also like how intuitive it becomes to integrate this into existing HTTP APIs for auth. For the web3.storage API we could directly translate our HTTP endpoints to capabilities:

{
   'POST web3.storage:/car': {
       requestLimit: 32 * GiB,
       sizeLimit: 1 * TiB
   },
   'GET web3.storage:/car': {}, // can request any car
   'HEAD web3.storage:/car': {}, // can get stats
   [`GET web3.storage:/uploads/${user}`]: {} // can check user uploads
}

Derived capability from above:

{
   'POST web3.storage:/car': { // inherits requestLimit and restricts sizeLimit
       sizeLimit: 60 * GiB
   },
   'GET web3.storage:/car': {}, // can request any car
   'HEAD web3.storage:/car': {}, // can get stats
   // can only list subset of uploads created by this token
   [`GET web3.storage:/uploads/${user}/${audience}`]: {}
}

Gozala (Author) commented Nov 17, 2021

It is however worth highlighting that such a structure would only support encoding constraints where all restrictions must be met. It would not allow encoding a structure in which one of several constraints must be met.
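Purely as an illustration (not a spec proposal), the "one of several constraints must be met" case could be recovered by allowing a list of constraint groups, where satisfying any single group suffices; the `satisfies` helper below is hypothetical:

```typescript
type ConstraintGroup = { [key: string]: string | number }

// Hypothetical extension: a constraint is either a single group (ALL
// restrictions must hold) or a list of groups (ANY one group must hold).
type Constraint = ConstraintGroup | ConstraintGroup[]

function satisfies(claim: ConstraintGroup, constraint: Constraint): boolean {
  const groups = Array.isArray(constraint) ? constraint : [constraint]
  // "any of" across groups, "all of" within a group
  return groups.some(group =>
    Object.entries(group).every(([key, value]) => {
      const claimedValue = claim[key]
      // numeric restrictions are upper bounds; string restrictions must match
      return typeof value === "number"
        ? typeof claimedValue === "number" && claimedValue <= value
        : claimedValue === value
    })
  )
}
```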

@matheus23 (Member)

First of all: I love to see you so engaged @Gozala ! ♥️

Attenuation Format Changes

I think my general sentiment is it would be nice if capabilities were structured in a way that removed any surface for an ambiguity.

I generally like this line of thinking. However, that'd be more of a spec discussion, and as such its benefits should be weighed against breaking existing implementations. This library (which supports the most recent version of the spec) isn't used in production, but as far as I know, Qri are using this version of the UCAN spec in production (https://github.com/qri-io/ucan), so maybe we shouldn't break it until we've thoroughly evaluated this version in production with multiple entities?

More precise Capability Typing

Quoting some code:

type Capabilities<ID extends Capability = Capability> = {
   [key in ID]?: Constraint
}

type Capability<OP extends string = string, Subject extends string = string> = `${OP}:${Subject}`


type Constraint =
   | undefined
   | Scope
   | Limit
   | ConstraintGroup

type ConstraintGroup = {[key: string]: string|number}
type Limit = number
type Scope = string

Those are some cool types 😃
Still, I'd like to keep the types simple. We need to remember that if we have precise types, we still need to do the runtime checking that they're correctly passed from javascript anyway. Having a strongly-typed UCAN as a parameter to isValid means we'd need very sophisticated type checking in decode, and we'd also need to make decode's type checking programmable, as it can't runtime-check capabilities against a generic type.
I do think that's possible to write, but I think it's too complicated for consumers of this library. (I say this as a big fan of haskell and static typing 😄)

Precise Capability Error Messages

I disagree about providing that much information on errors. It's not like that would be useful for error messages in apps to users with broken UCAN chains. It's more useful for developers, but I'd rather keep the API simple and point developers to debugging tools like https://ucancheck.fission.app/.

So when we get a request on the service endpoint with escalated capabilities we would like to respond with an error explaining what the problem is. Rather than just responding with invalid token.

Who will read this error message? A user or a developer?
If it's a user, I don't think it makes sense to show them anything more meaningful - they mostly need a way to get a new token.
And if it is indeed a developer, I would argue this error message is only of limited use, unless the developer takes the invalid UCAN they constructed and looks at it anyway, at which point they could take that and input it into some diagnosis tool (be it a function or the ucancheck app).

Additionally you could always wrap that into simpler API

That's a good point :) I don't think it would hurt to have a function like diagnose in the surface API that tries to give as much information about an invalid UCAN as possible. Something like that is likely really useful for the ucancheck app anyway.
I can see this being developed in the ucancheck app first though, to iterate and figure out what's the most useful information and then port it over here 👍

expede (Member) commented Nov 17, 2021

Love the thinking 👍 I'll never walk away from a discussion on type safety 🤓🤓🤓

I totally get why you're raising an eyebrow at the array. I had the same gut reaction at first. An array of tuples and a map have subtly different properties, but that may not actually be a problem here. What we're emulating is a (max) set, and all of these encodings achieve that.

I think my general sentiment is it would be nice if capabilities were structured in a way that removed any surface for an ambiguity.

Agreed — or rather we want to express the appropriate level of overlap. I don't know if an array improves the accuracy of the information that we're presenting. I'm certainly not against updating the spec to have a different encoding, but am unsure if we gain much here by switching to packing more information into keys encoded as flat strings.

For example, in the reused-key scenario that you mentioned earlier, what should one do on receiving those keys as JSON? The behaviour is just as well defined as it is with an array. The top level of the UCAN is a constructive union of all of its capabilities: if they overlap, then great! Even if a user gives the same path multiple times, we take the highest capability.

For example:

[
    {
      "wnfs": { // <- Namespace, AKA which semantics to use
        "path": "/wnfs/gozala/notes/",
        "cap": "write",
        "maxFiles": 3, // Extended as an example, we don't actually do this
        "ifCosignedBy": "did:someoneElse" // Or this
      }
    },
    { // Looser constraint, more power, supersedes above
      "wnfs": {
        "path": "/wnfs/gozala/notes",
        "cap": "write"
      }
    },
    {
      "w3s": {
        "cap": "GET",
        "path": "/car"
      }
    }
]

{
   "write:/wnfs/gozala/notes/": {},
   "write:/wnfs/gozala/archive/": {}
}
[...]
'GET web3.storage:/car': {}

Oh yeah, the mapping of the UCAN resource to the REST resource is pretty nice ✨

I don't love that we now need to separately parse the keys. Why not encode each as a straightforward map (as in the earlier example)? In the longer-string version, it feels like we're trying to force a lot of information together, which is a bit "stringly typed".

In terms of increasing the type safety, I think there are some fundamental information-theoretic challenges if we want to keep this user extensible. Static types need AOT information, which we won't always have. This puts us in the land of term-level validation at least some of the time. A smart constructor (i.e. a forced runtime invariant check) may be a better solution, where the types force us to validate the capability.

To make the further case for smart constructors, they can also be made pluggable in the UCAN library's parser. This includes whatever predicate logic you need for escalation semantics, and nice error messages, with a check that's enforced by the library (assuming that they're using the type checker).

@matheus23 (Member)

I love the idea of packaging up capabilities into

  • a smart constructor helping you not to create a capability that would be invalid
  • a parser for capabilities that ensures it's of a certain type
  • an escalation checker
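A rough sketch of such a package for a single capability kind, using the wnfs path capability from earlier in the thread. All names here (`wnfs`, `parseWnfs`, `delegated`) are illustrative, not an existing API:

```typescript
type WnfsCapability = { wnfs: string; cap: "OVERWRITE" }

// Smart constructor: refuses to build a structurally invalid capability.
function wnfs(path: string): WnfsCapability {
  if (path.length === 0) throw new Error("wnfs capability needs a non-empty path")
  return { wnfs: path, cap: "OVERWRITE" }
}

// Parser: checks that an untrusted value is of this capability type.
function parseWnfs(value: unknown): WnfsCapability | null {
  if (typeof value !== "object" || value === null) return null
  const record = value as Record<string, unknown>
  if (typeof record.wnfs !== "string" || record.cap !== "OVERWRITE") return null
  return { wnfs: record.wnfs, cap: "OVERWRITE" }
}

// Escalation checker: a child capability may only narrow the parent's path.
function delegated(parent: WnfsCapability, child: WnfsCapability): boolean {
  return child.wnfs.startsWith(parent.wnfs)
}
```

The appeal of the bundle is that a capability value can only enter the system through the constructor or the parser, so the escalation checker never sees a malformed value.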

Gozala (Author) commented Nov 17, 2021

I generally like this line of thinking. However, that'd be more of a spec discussion, and as such its benefits should be weighed against breaking existing implementations. This library (which supports the most recent version of the spec) isn't used in production, but as far as I know, Qri are using this version of the UCAN spec in production (https://github.com/qri-io/ucan), so maybe we shouldn't break it until we've thoroughly evaluated this version in production with multiple entities?

Fair enough! What would be a good place to continue this discussion?

Gozala (Author) commented Nov 17, 2021

Still, I'd like to keep the types simple. We need to remember that if we have precise types, we still need to do the runtime checking that they're correctly passed from javascript anyway. Having a strongly-typed UCAN as a parameter to isValid means we'd need very sophisticated type checking in decode, and we'd also need to make decode's type checking programmable, as it can't runtime-check capabilities against a generic type.
I do think that's possible to write, but I think it's too complicated for consumers of this library. (I say this as a big fan of haskell and static typing 😄)

I typed those out in order to be precise in communicating the structure I'm thinking of. Whether those actual definitions should be used by an individual library is a separate discussion, IMO. I don't think the token parser should take on the role of type checking input; it can continue parsing it as JSON. It's just that the library checking capabilities can recognize a certain structure and discard all the remaining fields / values.
P.S.: The included TS implementation is supposed to do that, although it's likely far from good, as typing it in a github issue was not a great idea anyway :P

Gozala (Author) commented Nov 17, 2021

Who will read this error message? A user or a developer?
If it's a user, I don't think it makes sense to show them anything more meaningful - they mostly need a way to get a new token.
And if it is indeed a developer, I would argue this error message is only of limited use, unless the developer takes the invalid UCAN they constructed and looks at it anyway, at which point they could take that and input it into some diagnosis tool (be it a function or the ucancheck app).

Users often create bug reports, and the better the error messages, the better the reports, allowing devs to address them effectively. Generating a new token will not help if the bug was caused by the code that generated the token; chances are the new token will have the same issue.

@matheus23 (Member)

Fair enough! What would be a good place to continue this discussion?

Oh, I didn't mean to send you away or something like that, sorry 😅 I was mostly trying to argue "that would be a bigger change ('spec discussion') and as such needs to have a high payoff".

expede (Member) commented Nov 17, 2021

Fair enough! What would be a good place to continue this discussion?

We can summon Brendan to this thread if needed 😛

I agree that a common place for everyone to collab on this stuff would be great. We're actually about to break a bunch of this stuff out into its own UCAN org and whatnot, but here is good now!

expede (Member) commented Nov 17, 2021

as such needs to have high payoff

I'm not even against smaller changes! Let's keep the open conversation going and keep exploring 💯 @Gozala I'm really appreciating the thought & conversation so far!

Gozala (Author) commented Nov 17, 2021

@expede I do share the dissatisfaction with packing two fields into a single key, and it may indeed be a bad idea. Maybe it is indeed better to let them be separate fields, but prescribe how multiple capabilities to perform the same OP on the same resource can be unified.

What I was hoping to achieve by packing them together is to make that unification the token issuer's problem, so that the verifier does not need custom code to do it. In other words, an arbitrary verifier could check that claims are met without domain knowledge. But again, on the flip side, you can't then express (at least in the currently proposed encoding) "either meet this criterion or that one".

Gozala (Author) commented Nov 17, 2021

I would also like to call out that I find this structure really confusing:

{
  $TYPE: $IDENTIFIER,
  "cap": $CAPABILITY
}

I think simply making it the following would be an improvement:

interface Capability<T extends string, I extends string, C extends string> {
  type: T
  id: I
  cap: C
}

For what it's worth, I ended up ditching the $TYPE: $IDENTIFIER encoding of capabilities and went with this:

const token = await UCAN.build({
  audience: did,
  issuer: service.keypair,
  lifetimeInSeconds: 24 * 60 * 60, // a day
  capabilities: [
    {
      cap: "PUT",
      // @ts-ignore - type is restricted to strings
      storageLimit: 32 * 1.1e12, // 32 TiB (1.1e12 ≈ 1 TiB)
    },
    {
      // can only list uploads for the given user
      cap: "LIST",
      scope: `/upload/${did}/`,
    },
  ],
})

I might be missing something, but I find $TYPE redundant; it seems like it should be derivable from the resource identifier itself.

Gozala (Author) commented Nov 17, 2021

@expede I do share the dissatisfaction with packing two fields into a single key, and it may indeed be a bad idea. Maybe it is indeed better to let them be separate fields, but prescribe how multiple capabilities to perform the same OP on the same resource can be unified.

On second thought, this may not be necessary. The verifier could compare each claimed capability to each capability in the parent, and if any is met the claim is valid (which basically gives "satisfy either constraint A or B").
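That any-match strategy can be sketched as follows; `claimsValid` and `isDelegated` are illustrative names standing in for whatever per-capability comparison applies:

```typescript
type Capability = Record<string, unknown>

// Hypothetical sketch: a claim is valid if ANY parent capability covers it,
// which naturally encodes "satisfy constraint A or constraint B".
function claimsValid(
  claimed: Capability[],
  parents: Capability[],
  isDelegated: (parent: Capability, child: Capability) => boolean
): boolean {
  return claimed.every(child => parents.some(parent => isDelegated(parent, child)))
}
```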

I am still biased towards making the issuer do more so that the verifier has to do less.

However, keys also made scopes explicit, which could be what I'm dissatisfied with in the current spec. Specifically, given these capabilities:

[
   {
      "wnfs": "gozala/",
      "cap": "OVERWRITE",
      "role": "wizard"
   },
   {
      "wnfs": "gozala/photos",
      "cap": "OVERWRITE",
      "role": "painter"
   }
]

It is not clear whether wnfs is the resource being constrained, or role. So maybe just formalizing that would be a good enough incremental improvement?

@matheus23 (Member)

I also would like to call out that I find this structure to be really confusing

{
  $TYPE: $IDENTIFIER,
  "cap": $CAPABILITY
}

I think simply making it following would be an improvement

interface Capability<T extends string, I extends string, C extends string> {
  type: T
  id: I
  cap: C
}

[...]

[
   {
      "wnfs": "gozala/",
      "cap": "OVERWRITE",
      "role": "wizard"
   },
   {
      "wnfs": "gozala/photos",
      "cap": "OVERWRITE",
      "role": "painter"
   }
]

It is not clear whether wnfs is the resource being constrained, or role. So maybe just formalizing that would be a good enough incremental improvement?

Yeah I agree with @Gozala. It seems clearer to have constant keys, just like everywhere else in the UCAN. I'd propose this:

interface Capability {
  type: string
  cap: string
}
// Examples
const att = {
  type: "wnfs",
  cap: "SOFT_DELETE",
  path: "/public/Apps" // we're allowed to add any other fields besides `type` and `cap`
}

But then the question becomes "why require the cap field?", since it doesn't matter semantically without knowing the particular type of capability.

expede (Member) commented Nov 30, 2021

But then the question becomes "why require the cap field?", since it doesn't matter semantically without knowing the particular type of capability.

The reason that this type feels awkward is that we're trying to capture heterogeneous types as a monolithic type. They're not the same shape, since it's open to extension. We can use subtyping at the concrete callsite, but not outside of that.

I think that @Gozala is on the right track for the TS implementation with an interface like...

interface Capability<T extends string, I extends string, C extends string>

...where we treat that as a failable parser (AKA a Result type). We have different levels of knowledge between compiletime (type checker) and runtime (parser), and I think that we would want the type to reflect that reality. As we add to this library, it'll handle the parsing, including splitting the parser result into "valid", "invalid", and "unknowns", which is pretty much just the classic "parse, don't validate".

For the direct issue at hand: I'm certainly not against moving to common keys. Maybe not type, since that term is already so heavily used.

@expede
Copy link
Member

expede commented Nov 30, 2021

Well this thread has gotten way off the initial issue 😅 It's all good, we need to have these discussions anyways!

@matheus23, @bgins, and I chatted about this a bit. We have a few options to ponder. At its heart, the data that we're trying to represent is:

  1. Which semantics to apply (AKA namespace, e.g. fission/wnfs)
  2. Which resource (e.g. boris.fission.codes/photos/vacation)
  3. Potency / type of access (e.g. READ, WRITE, SEND, DELETE, and so on)
  4. "Open fields" — Any other data specific to the capability

With that in mind, the question becomes how we want to typecheck and serialize that. How to parse these is an open discussion in another issue, but I think making that as easy as possible in JS is the right move (I'll comment over there). Which fields are required is a bigger question as we see more use cases:

Structure

Unspecified Structure

Let capabilities be any object, and leave the format checking to the validator. The downside here is that there's zero namespacing, but you can pass in an arbitrary sum type to represent the capabilities you have.

Namespaced Only

Only require a single namespace key, and everything else is free form. This has the advantage of avoiding overlapping keys being mistaken for the wrong semantics. It continues to have some challenges.

Require All

Have strictly required keys for 1-3 in the top list.

Decision Points

Target resource should be an identifier of some kind

We can break these up, like {root: "boris.fission.codes", path: ["vacation", "photos"]}, but also I can't think of a concrete use case for this versus forcing it to be a string. If there were something more structured along those lines, we could put it in the open fields.

Require/Structure the Potency Field?

The potency is important for a lot of use cases. Having this forced to be set makes extensibility clearer, but is not strictly speaking required.

Is a simple string enum enough? I can imagine cases where it's more complex than a string (e.g. Unix-style RWX, though that's often expressed as a string in practice). Maybe requiring the key but having the value be freeform? If it's freeform, how is it different from the other open fields?

@expede
Copy link
Member

expede commented Nov 30, 2021

Just to keep stuff rolling, now do we feel about something like this:

[
  {
    ns: "fission/wnfs", // Namespace
    rs: "boris.fission.codes/vacation/photos/", // Resource
    pt: "APPEND" // Potency
  },
  {
    ns: "ucan/storage",
    rs: "web3.storage/expede",
    pt: {
      max_gb: 100,
      write: true,
      delete: true
    }
  },
  {
    ns: "ucan/storage",
    rs: "web3.storage/boris",
    pt: {delete: true}
  },
  {
    ns: "ucan/rest",
    rs: "example.com/api/v2/",
    pt: ["POST", "PATCH"]
  }
]

Here we have well-known top-level keys, and one unstructured potency field.
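In TypeScript, that shape might be sketched as follows (a sketch of the proposal above, not a settled API):

```typescript
// ns/rs are well-known string keys; pt stays unstructured and is
// interpreted per-namespace by whoever validates the capability.
interface Capability {
  ns: string   // namespace, e.g. "fission/wnfs"
  rs: string   // resource, e.g. "boris.fission.codes/vacation/photos/"
  pt: unknown  // potency: a string, object, array, ... depending on ns
}

const append: Capability = {
  ns: "fission/wnfs",
  rs: "boris.fission.codes/vacation/photos/",
  pt: "APPEND"
}
```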

@bgins
Copy link
Contributor

bgins commented Nov 30, 2021

Adding rs as a well-known top-level key is a good idea in my opinion. We could ask ourselves, is there ever a case where the resource should be left open ended? If not, requiring it seems worthwhile because it removes ambiguity and errors that might come of it.

One case that I can think of where you might want to leave the resource open ended is giving access to all resources in a namespace. But in this case, if rs was required, a * could be used to indicate all resources.

We could ask a similar question for potency. Is there ever a case where a potency does not need to be specified? Would an unspecified potency mean "you can do all the things with this resource"? Maybe there are cases where potency doesn't matter?

I'm not sure about potency, but keeping it unstructured makes sense because the structure will depend on the resource.

@Gozala
Copy link
Author

Gozala commented Nov 30, 2021

Some feedback / notes and proposed structure

Whether all 3 fields should be required or not ?

From my limited experience I would suggest making "ns" optional, because:

  1. It only seems to matter when a large enough number of capabilities are at play. (In my use cases it has so far been useless, as we end up with the same value everywhere and decided to just omit it.)

  2. So far I'm more inclined to encode ns + rs as standard URIs given that

    1. They are fairly common and well understood
    2. They are unique resource identifiers
    3. Protocol schema usually encodes semantics in them.
    4. There is builtin URL parser available in web platform and pretty much everywhere else.

    With that, the examples above could be encoded as wnfs://boris.fission.codes/vacation/photos/, ucan+storage://web3.storage/expede, etc...

    Please note: I am not trying to make an argument for using URLs (although it may have merit), but rather suggesting that by making ns optional, the use of URLs becomes a reasonable choice.

Should potency field be a string variant or whatever struct ?

I would personally prefer requiring a string & suggest all other details to be pushed into different fields because:

  1. This guides users to have a basic common structure that is easier to understand (I can do PT to RS with some caveats if present).
  2. Keeps capabilities fully extensible, encode all the constraints / caveats as fields other than reserved ones.
  3. This also encourages more granular capabilities (e.g. no pt: ["POST", "PATCH"], but rather two capabilities, one for POST and the other for PATCH), which has its tradeoffs but conceptually is simpler.
  4. Maps well with HTTP, pt -> method / verb, caveats / extensions -> HTTP headers.
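Under that proposal, a capability type could be sketched like this (hypothetical shape, illustrating the points above):

```typescript
// Required URI-style resource and a single string potency; any caveats
// (e.g. max_gb) live in extra fields alongside the reserved ones.
interface Capability {
  rs: string  // URI-encoded resource, e.g. "ucan+storage://web3.storage/expede"
  pt: string  // one action per capability, e.g. "PUT"
  [caveat: string]: unknown
}

const upload: Capability = {
  rs: "ucan+storage://web3.storage/expede",
  pt: "PUT",
  max_gb: 100 // a caveat, not one of the reserved keys
}
```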

Name shedding

Ultimately this does not matter, yet clear naming can aid understanding.

Variant: UCAN do to

I like to think of capabilities as things you can do to specific resources. Hence it makes sense to me to rename rs to to and pt to do. The names are just as short, but IMO clearer:

[
  {
    to: "wnfs://boris.fission.codes/vacation/photos/", // Resource
    do: "APPEND" // Potency
  },
  {
    to: "ucan+storage://web3.storage/expede",
    do: "PATCH" // write + delete although I'd prefer encoding them separately
    max_gb: 100
  },
  {
    to "ucan+storage://web3.storage/boris",
    do: "DELETE"
  },
  {
    to: "ucan+rest://example.com/api/v2/",
    do: ["PATCH", "POST"] // ok fine we can support arrays too, but not a fan
  }
]

P.S.: I like the more generic to, as it can mean scope as opposed to resource, but maybe it's just English being my non-native language.

Variant 2.

Less funky option would be:

  1. To rename rs to id, I think it's as short but more clear.
  2. I would rename pt to can, 1 char longer but seems more clear.

@Gozala
Copy link
Author

Gozala commented Nov 30, 2021

Adding rs as a well-known top-level key is a good idea in my opinion. We could ask ourselves, is there ever a case where the resource should be left open ended? If not, requiring it seems worthwhile because it removes ambiguity and errors that might come of it.

One case that I can think of where you might want to leave the resource open ended is giving access to all resources in a namespace. But in this case, if rs was required, a * could be used to indicate all resources.

💯 I would go as far as suggesting that it's better not to even have a predefined "catch all", so that users have to define one specific to their domain. Leaving it out is just too ambiguous.

@Gozala
Copy link
Author

Gozala commented Nov 30, 2021

We could ask a similar question for potency. Is there ever a case where a potency does not need to be specified? Would an unspecified potency mean "you can do all the things with this resource"? Maybe there are cases where potency doesn't matter?

If you frame potency as the things you can do, I would argue that it always matters. If it feels like it does not, it's probably because either access is too coarse-grained or new abilities have not been introduced yet.

I'm not sure about potency, but keeping it unstructured makes sense because the structure will depend on the resource.

If pt is treated as the actions that can be performed, it makes sense to have it required, making all other caveats separate fields on the capability.

I think this provides better balance between structure and open extensibility.

@expede
Copy link
Member

expede commented Dec 1, 2021

@Gozala thanks for the quick reply! 🙌

Below in more granular detail, but I think that this has mostly come full circle with some refinements.

Required Potency ✅

If you frame potency as a things you can do, I would argue that it always matters. [...] new abilities have not been introduced yet.

Awesome, this is also my gut feel. Back to a required field it is!

No Potency Arrays ✅

ok fine we can support arrays too, but not a fan

Actually same, I was mostly grasping for examples of different types as illustration 😛 These should totally be separate capabilities (like it is today)

Wildcards ✅

it's better to not even have a predefined "catch all"

We have a concrete use case for this at Fission. Part of what a UCAN gives you is the ability to delegate permissions without moving keys. Without a wildcard, every new resource you make would send you back to the initial device to re-delegate. You'd also be in a very bad position if you lost that key. With a wildcard, you can essentially say "everything, including future resources (optionally scoped in this area)". We use this to great effect in Webnative.

URIs

So far I'm more inclined to encode ns + rs as standard URIs

I think we chatted about this briefly on our call a week or two back. I like that it builds on existing concepts! If we go this direction, and especially with namespacing, we could consider URNs (e.g. urn:ucan:storage:web3.storage/expede)

The two things that are a bit odd: the first is that email://[email protected] looks weird, as does mailto:, though for different reasons. (But also whatever, for argument sake I can bite that bullet for consistency). The bigger one is from chatting with @matheus23 earlier today on keys for easy comparison and dispatch 👇

In the URI style, you have no choice but to parse the scheme, which can come in several formats (with or without //, one or more :s, and so on). Philipp had raised the excellent point that being able to match on a string is much easier in languages like JS than doing full-blown parsing (which is what we do in our Haskell server). Having that already split out into its own field makes selecting compatible capabilities in proof chains trivial in all languages, not just ones where parsers are the norm.

We could also add some scheme helpers to the validator DSL, but that's starting to drift away from what's directly in the JSON, which could make it harder for folks to learn.

I'm inclined to the split-out version, but as you say, the URI parser is baked in. I don't have a super strong opinion; it's equally easy to adjust for the existing code. @matheus23 any feelings?
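To illustrate the dispatch point: with the namespace split out into its own field, selecting compatible capabilities is a plain string comparison, with no scheme parsing at all (a sketch with hypothetical shapes):

```typescript
interface Capability { ns: string; rs: string; pt: string }

// With `ns` as its own field, grouping capabilities by semantics is a
// simple string match -- no need to handle "//", extra ":"s, and so on.
function selectNamespace(caps: Capability[], ns: string): Capability[] {
  return caps.filter(cap => cap.ns === ns)
}

const caps: Capability[] = [
  { ns: "fission/wnfs", rs: "boris.fission.codes/photos/", pt: "APPEND" },
  { ns: "ucan/rest", rs: "example.com/api/v2/", pt: "POST" }
]
```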

Bike Shedding

"There are two hard problems in computer science: naming, caching, and off-by-one errors."

yet clear naming can aid understanding

Precisely. It's annoying from a purely technical perspective, but we do want to make these easy to think about. I used to be in the "semantics over syntax" camp, but have come to acknowledge that it really does matter to human factors.

I would rename pt to can

Haha on brand — I love it! 😍 Done

rename rs to to

Really splitting hairs here, but to is a bit odd here IMO, especially when the outer part of the UCAN is about connecting chains to/from (talking about them may get confusing). with may disambiguate ("with X, you can Y")

[
  {
    with: "ucan+account://boris.fission.name",
    can: "ADMINISTER"
  },
  {
    with: "ucan+store://[email protected]",
    can: "PUT",
    max_gb: 100
  }
  {
    with: "ucan+store://[email protected]",
    can: "DELETE"
  }
]

(I'm personally less concerned about the stray character or two than with clarity.)

@expede
Copy link
Member

expede commented Dec 1, 2021

On the URI/URN concept: it looks like the decodeURI function only checks for malformedness and decodes punycode, but does not break the URI into parts. URNs and other URIs have less support than URLs specifically, unless I'm missing an API. The URL constructor does a pretty reasonable job overall, and may be serviceable enough with some manual parsing for our use case on URIs (I think it's just splitting on the first colon unless the scheme is HTTP/WS)

const uri = new URL("ucan:storage:boris.fission.codes")
 // Not ideal
uri.pathname == "storage:boris.fission.codes"
uri.protocol == "ucan:"
uri.hostname == ""

const url = new URL("ucan:storage://boris.fission.codes")
 // Also not ideal
url.pathname ==  "storage://boris.fission.codes"
url.protocol == "ucan:"
url.hostname == ""

const betterURL = new URL("ucan+storage://boris.fission.codes")
// Mostly better?
betterURL.pathname == "//boris.fission.codes"
betterURL.protocol == "ucan+storage:"
betterURL.hostname == ""

// Versus with HTTP

const http = new URL("http://boris.fission.name/foo/bar")
http.pathname == "/foo/bar"
http.protocol == "http:"
http.hostname == "boris.fission.name"

@Gozala
Copy link
Author

Gozala commented Dec 2, 2021

Wildcards ✅

it's better to not even have a predefined "catch all"

We have a concrete use case for this at Fission. Part of what a UCAN gives you is the ability to delegate permissions without moving keys. Without a wildcard, every new resource you make would send you back to the initial device to re-delegate. You'd also be in a very bad position if you lost that key. With a wildcard, you can essentially say "everything, including future resources (optionally scoped in this area)". We use this to great effect in Webnative.

To be clear, I'm not opposing unconstrained delegation; I was just arguing that the * character may not be the best way to represent it. E.g. in our case we end up encoding resources as URLs, and in place of * we just use the root path, like https://web3.storage/.

I do realize however that this has following limitations:

  1. You need to repeat resources from the capabilities (a "/" path may read as escalating, as opposed to saying "whatever the parent has").
  2. If the parent has capabilities to do the same action on two different resources, * would catch both.

On the other hand those could be viewed not as limitations but rather as a more explicit encoding, e.g. I could see mistakes in case 2, where accesses not to all resources were intended.

That said I do not feel really strong about it, just wanted to highlight why I'm biased against special *.

@Gozala
Copy link
Author

Gozala commented Dec 2, 2021

URIs

So far I'm more inclined to encode ns + rs as standard URIs

I think we chatted about this briefly on our call a week or two back. I like that it builds on existing concepts! If we go this direction, and especially with namespacing, we could consider URNs (e.g. urn:ucan:storage:web3.storage/expede")

The two things that are a bit odd: the first is that email://[email protected] looks weird, as does mailto:, though for different reasons. (But also whatever, for argument sake I can bite that bullet for consistency).

I was kind of handwavy about URIs and URLs because, in practice, browsers only support the latter, yet they do parse URIs as URLs:

[screenshot: the URL constructor parsing a URI in the browser console]

The bigger one is from chatting with @matheus23 earlier today on keys for easy comparison and dispatch 👇

In the URI style, you have no choice but to parse the scheme, which can come in several formats (with or without //, one or more :s, and so on). Philipp had raised the excellent point that being able to match on a string is much easier in languages like JS than doing full-blown parsing (which is what we do in our Haskell server). Having that already split out into its own field makes selecting compatible capabilities in proof chains trivial in all languages, not just ones where parsers are the norm.

We could also add some scheme helpers to the validator DSL, but that's starting to drift away from what's directly in the JSON, which could make it harder for folks to learn.

Those are valid concerns, especially if you go past URLs into URI and URN territory. That said, const { protocol: ns, pathname: rs } = new URL(input) is available in JS runtimes, and even though paths get mangled, they are normalized in a way that can be easily compared.

P.S.: 👇 this is why I suggested + delimiter in ns as opposed to : (browsers also use + delimiters in protocol schemes in various places)

[screenshot: browser internals using "+" delimiters in protocol schemes]

I'm inclined to the split-out version, but as you say, the URI parser is baked in. I don't have a super strong opinion; it's equally easy to adjust for the existing code. @matheus23 any feelings?

Please note that I was not making an argument against a namespace/type field; I was just suggesting that it should be optional, as in some cases it may make more sense to encode that in the resource field instead.

That way, if a capability omits ns, comparison will occur on rs, and if those happen to be URLs encoded as ${ns}:${rs}, things will work out as expected.
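That fallback comparison could be sketched as (hypothetical helper and shapes):

```typescript
interface Capability { ns?: string; rs: string; pt: string }

// If `ns` is present, compare on "<ns>:<rs>"; if it's omitted and `rs`
// is already a full URI, the comparison key works out the same.
function resourceKey(cap: Capability): string {
  return cap.ns === undefined ? cap.rs : `${cap.ns}:${cap.rs}`
}

const split: Capability = { ns: "wnfs", rs: "//boris.fission.codes/photos/", pt: "APPEND" }
const fused: Capability = { rs: "wnfs://boris.fission.codes/photos/", pt: "APPEND" }
```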

@Gozala
Copy link
Author

Gozala commented Dec 2, 2021

The URL constructor does a pretty reasonable job overall, and may be serviceable enough with some manual parsing for our use case on URIs (I think it's just splitting on the first colon unless the scheme is HTTP/WS)

It does support a few more known protocols, but more importantly it is well specified, so everyone (supposedly) follows it:

https://url.spec.whatwg.org/#url-parsing

@Gozala
Copy link
Author

Gozala commented Dec 2, 2021

Really splitting hairs here, but to is a bit odd here IMO, especially when the outer part of the UCAN is about connecting chains to/from (talking about them may get confusing). with may disambiguate ("with X, you can Y")

I like this! The only thing to consider is that with is a keyword in JS, so you can't use it as a variable name, which would make destructuring / construction a bit annoying. Maybe on, at?
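For what it's worth, `with` does work as a property name and in destructuring with a rename; it's only bare variable bindings that are blocked (a small sketch, hypothetical shape):

```typescript
// `with` can't be a bare variable name, but it is a valid property
// name, and destructuring can rename it on the way out.
interface Capability { with: string; can: string }

const cap: Capability = { with: "wnfs://boris.fission.codes/", can: "APPEND" }
const { with: resource, can } = cap // rename avoids the keyword clash
```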

@Gozala Gozala changed the title Reduced capabilites appear as invalid Representation of capabilities Dec 8, 2021
@matheus23
Copy link
Member

matheus23 commented Dec 17, 2021

Hey @Gozala 👋
This issue went all over the place. But the very original issue is now solved & merged 🎉
I've created an API that allows you to plug in your own semantics for how capabilities are allowed to be delegated and checked.
I've also added the public wnfs capability semantics. See e.g. this in the tests:

it("works with a simple example", async () => {
const { leaf, ucan, chain } = await makeSimpleDelegation(
[{
wnfs: "boris.fission.name/public/Apps/",
cap: "OVERWRITE",
}],
[{
wnfs: "boris.fission.name/public/Apps/appinator/",
cap: "REVISE",
}]
)
expect(Array.from(wnfsPublicCapabilities(chain))).toEqual([
{
info: {
originator: alice.did(),
expiresAt: Math.min(leaf.payload.exp, ucan.payload.exp),
notBefore: maxNbf(leaf.payload.nbf, ucan.payload.nbf),
},
capability: {
user: "boris.fission.name",
publicPath: ["Apps", "appinator"],
cap: "REVISE",
}
}
])
})

I might actually move the wnfs capabilities (public & private) into the codebase at webnative, and replace the ones in here with a couple examples instead, like a simpler path-based one like wnfs, or one that allows access to a REST endpoint with some HTTP methods.

Please check it out :)

EDIT: Maybe it makes sense to take a look at a simpler example first to figure out the new API:

it("works with a simple example", async () => {
// alice -> bob, bob -> mallory
// alice delegates access to sending email as her to bob
// and bob delegates it further to mallory
const leafUcan = await token.build({
issuer: alice,
audience: bob.did(),
capabilities: [{
email: "[email protected]",
cap: "SEND",
}]
})
const ucan = await token.build({
issuer: bob,
audience: mallory.did(),
capabilities: [{
email: "[email protected]",
cap: "SEND",
}],
proofs: [token.encode(leafUcan)]
})
const emailCaps = Array.from(emailCapabilities(await Chained.fromToken(token.encode(ucan))))
expect(emailCaps).toEqual([{
info: {
originator: alice.did(),
expiresAt: Math.min(leafUcan.payload.exp, ucan.payload.exp),
notBefore: maxNbf(leafUcan.payload.nbf, ucan.payload.nbf),
},
capability: {
email: "[email protected]",
cap: "SEND"
}
}])
})

@Gozala
Copy link
Author

Gozala commented Jan 26, 2022

Hey @matheus23 @expede,

I just wanted to check back to see where we are with regards to refining the representation of capabilities. We have discussed several options, but I am not sure how to reach a consensus. We have decided to implement support for UCANs in our services and would really love to settle on the representation before we start deploying it.

I really think this library should either choose to:

  1. Provide bit more structure and we discussed nuances here of what it may look like.

  2. Provide no structure and let the user define it as they please.
    In this case I imagine the library would not define Capability at all and would make it a type parameter; that is, instead of the following definition

    ts-ucan/src/types.ts

    Lines 42 to 51 in e10bdec

    export type UcanPayload<Prf = string> = {
      iss: string
      aud: string
      exp: number
      nbf?: number
      nnc?: string
      att: Array<Capability>
      fct?: Array<Fact>
      prf: Array<Prf>
    }

    It would have this one instead

    export type UcanPayload<Capability = unknown, Prf = string> = {
      iss: string
      aud: string
      exp: number
      nbf?: number
      nnc?: string
      att: Array<Capability>
      fct?: Array<Fact>
      prf: Array<Prf>
    }

    Additionally, it would be great to have a utility function, though that could also be out of scope:

     // Takes a set of claimed capabilities and checks whether they are met by the given UCAN. The given
     // `verify` function is passed a `claimed` capability and returns an array of capability subsets
     // satisfying the claim (if empty, the claim is not met). The function will walk the proof chain and
     // verify that claims are met, returning `{ok: true}` when they are, or a chain of unmet capabilities.
     function check <Capability> (
       claims: Capability[],
       ucan: Ucan<Capability>,
       verify:(claim:Capability, capabilities:Capability[]) => Array<Capability[]> | Promise<Array<Capability[]>>
     ):Promise<Capability[]>

    This way service will be able to define it's own format for capabilities and a verifier to validate claims.
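A service-side verify under that proposed signature might look like this (illustrative only; capability shape and prefix-based semantics are made up for the example):

```typescript
// Hypothetical capability shape for a storage service.
interface Cap { url: string; action: string }

// Returns the subsets of `capabilities` that satisfy `claim`: here, any
// single capability with the same action whose URL is a prefix of the
// claimed URL. An empty result means the claim is not met.
const verify = (claim: Cap, capabilities: Cap[]): Cap[][] =>
  capabilities
    .filter(cap => cap.action === claim.action && claim.url.startsWith(cap.url))
    .map(cap => [cap])
```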

@expede
Copy link
Member

expede commented Jan 26, 2022

Hey @Gozala

Thanks for the ping!

We have decided to implement support for UCANs in our services & would really love to settle on the representation before we start deploying it.

Amazing 🎉

It would have this one instead

export type UcanPayload<Capability = unknown, Prf = string> = {

Yup that makes sense to me. We're doing essentially the same thing in hs-ucan that we're extracting out as well.

Additionally it would be great to have a utility function but can also be out of scope:

Yeah, there's two approaches here, validation versus parsing. I'd recommend something that looks like the below, but also agree that we can do this one next.

 function check <Capability> (
   claims: Capability[],
   ucan: Ucan<Capability>,
   verify: (claim: Capability, capabilities: Capability[]) => Promise<boolean>
 ): Promise<{from: PK, to: PK, until: UTCTime, goodCaps: Capability[], unknownCaps: string[]} | {errors: Error[]}>

@matheus23 and I were talking about adding a bunch of helpers of this kind to this library. We'll put together some API types and loop you in.

Which Version?

If you're implementing the existing version, then the above makes sense verbatim. I've taken a bunch of your feedback on with... can... and formatting, and am incorporating it into the next version of the spec, which I want to have done this week. Per our previous discussion, it would also make implementation easier for your use case.

0.8 Spec Status

I almost have the updated spec with some of your feedback wrapped up here, though the bit you're interested in I think isn't finished here yet?

https://github.com/ucan-wg/spec/pull/10/files

(Also FYI https://github.com/ucan-wg/spec/pull/10/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5R354, let me know if you're comfortable with that in there)

@matheus23
Copy link
Member

Re: a check function. Does this do the trick for you @Gozala?

export function canDelegate<A>(semantics: CapabilitySemantics<A>, capability: A, ucan: Chained): boolean {

This can be used to make sure that a given UCAN actually provides claimed capability, given some semantics for that capability.

Also, this might be related to your wish for a generic capability type: The type A in there, is basically what you have as Capability above.

There are some reasons for why I've done it this way:

  1. Capabilities per 0.7 spec are required to be a record with the cap key + another key, so that's what we're parsing it into.
  2. UCANs can contain "unrelated" capabilities to your service and services should be able to handle that. Yes, you can parse into MyServiceCapabilities | unknown but I don't think that's better than moving the generic-ness over to the canDelegate function.

If you're looking for more feedback than only a boolean of whether it actually delegates or not - you'll get more information from parsing the JWT into the Chained type - or if you want to, you can also take apart the canDelegate function and get your hands on more info about delegation like that.
I know that many errors thrown in the library so far are "stringly typed". That's something I'd love to improve. Suggestions on how to do that best are welcome!

@Gozala
Copy link
Author

Gozala commented Jan 27, 2022

Yeah, there's two approaches here, validation versus parsing. I'd recommend something that looks like the below, but also agree that we can do this one next.

 function check <Capability> (
   claims: Capability[],
   ucan: Ucan<Capability>,
   verify: (claim: Capability, capabilities: Capability[]) => Promise<boolean>
 ): Promise<{from: PK, to: PK, until: UTCTime, goodCaps: Capability[], unknownCaps: string[]} | {errors: Error[]}>

@expede I might be missing something, but I'm not sure how a verify that returns a boolean can work. The reason my variant had the following signature:

   verify:(claim:Capability, capabilities:Capability[]) => Array<Capability[]> | Promise<Array<Capability[]>>

is so that the check implementation could walk up the proof chain to verify that claims are actually met. The idea here is that, given the claim, verify will return one or several sets of capabilities in the token that satisfy the claim. Afterwards check can verify that one of the returned sets is satisfied by the parent token (meaning they do not escalate), and keep walking up the chain until the root token is reached.
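The walk described above could be sketched as follows (all types hypothetical; `verify` is kept synchronous, capabilities are plain prefixes for brevity):

```typescript
// `verify` returns, for each claim, the sets of parent capabilities that
// would satisfy it; each such set becomes the claims to check one level up.
type Verify<C> = (claim: C, capabilities: C[]) => C[][]

interface Token<C> { capabilities: C[]; proofs: Token<C>[] }

function check<C>(claims: C[], token: Token<C>, verify: Verify<C>): boolean {
  return claims.every(claim => {
    const satisfying = verify(claim, token.capabilities)
    if (satisfying.length === 0) return false
    // A root token needs no further proof.
    if (token.proofs.length === 0) return true
    // Otherwise some satisfying set must itself be provable by some proof.
    return satisfying.some(set =>
      token.proofs.some(proof => check(set, proof, verify))
    )
  })
}
```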

If verify just returns a boolean, then you'd have to verify that the child token does not escalate any of the capabilities, not just the ones needed for the particular claim. That is probably not ideal, because verify may not be aware of some capabilities even if it is aware of the claimed ones.

I am also not sure I understand the return type in your signature; specifically, I'm not sure what PK stands for.

{from: PK, to: PK, until: UTCTime goodCaps: Capability[], unknownCaps : string[]}

From the unknownCaps I am guessing the idea is that check will report which capabilities it is unable to compare, which may be a good idea. Yet I feel like check should not bother comparing capabilities that have not been claimed, and if some are claimed, verify should know how to compare them. Maybe verify could return a result type as well, to signal which capabilities it's unable to compare, but then again I'm not sure the added complexity would provide enough benefit.

0.8 Spec Status

Exciting! I'll take a look at PR and provide feedback there.

@expede
Copy link
Member

expede commented Jan 27, 2022

@Gozala

Is so that check implementation could walk up the proof chain to verify that claims are actually met.

Oh yeah, you're right 💯 We're doing this lazily so need to know which ones to follow

I am also not sure I understand return type in your signature specifically not sure what PK stands for.

Public Key, though I suppose it should be DID if we want this to be sufficiently general to handle other methods than did:key. You often want to record the agent delegating the action (debugging, logs), and need to know who the root owner of a resource is claimed to be (e.g. who owns the storage quota, who's filesystem is this?)

I feel like check should not bother comparing capabilities that have not been claimed, and if some are claimed verify should know how to compare.

It's sometimes useful to know that there's stuff that you are unable to use. It's not a failure on a Result type, since it's not failure — it's stuff that it doesn't know what to do with. It's not invalid, it's unknown (essentially like thinking in constructive logic).

For example, if you have a collaborative process that knows how to discharge email-related actions, you

Exciting! I'll take a look at PR and provide feedback there.

Thank you 🙏

@expede
Copy link
Member

expede commented Jan 27, 2022

@Gozala whoops, I also missed @matheus23's comment above:

#15 (comment)

It sounds like the library may have had the relevant functionality added since we previously spoke. If it doesn't, we may not be understanding the specific needs that are being missed. Give Philipp's comment a look, and we can also sync live for a lower-latency conversation.

@expede
Copy link
Member

expede commented Jan 27, 2022

Actually @matheus23 that function doesn't select which witnesses were used, correct? That may be what he needs. Do we have a function that can enable that style of lazy check?

@Gozala
Copy link
Author

Gozala commented Jan 27, 2022

Re: a check function. Does this do the trick for you @Gozala?

export function canDelegate<A>(semantics: CapabilitySemantics<A>, capability: A, ucan: Chained): boolean {

@matheus23 I had to read through the CapabilitySemantics definition multiple times to understand it. Here are my immediate reactions:

  • 💔 I think the name is confusing; something like CapabilityParser would be a lot less confusing, even if imprecise.
  • 💚 I like that it takes care of comparing one capability at a time vs. having to compare against a set (as in my proposal)
  • 💔 On the other hand I'm concerned about something @expede mentioned in the past. It may be that claimed capability isn't satisfied by any specific available capability but is satisfied by a group e.g. claim to "modify" may be satisfied by capabilities to "read", "add", "delete". That is what my proposed interface attempted to handle, but maybe it's not the right tradeoff in terms of complexity vs flexibility.
  • 💚 I love that it can identify unknown capabilities.
  • 💔 I dislike the return type of tryDelegate; specifically, disambiguating between A | CapabilityEscalation<A> does not seem obvious, and I would much rather have a more straightforward union, e.g. Ok | EscalationError | UnrelatedError
  • Capabilities per 0.7 spec are required to be a record with the cap key + another key, so that's what we're parsing it into.

I'm not sure I fully follow this; do you mean parsing it from instead? In that case I understand it, although to be honest this indirection kind of complicates things. I think things would have been easier if CapabilitySemantics were something like this:

export interface CapabilitySemantics<C extends Capability> {
  /**
   * Returns capability back or `null` if capability is not supported.
   */
  tryParsing(cap: C): C | null
  /**
   * This figures out whether a given `childCap` can be delegated from `parentCap`.
   * There are three possible results with three return types respectively:
   * - `Ok`: The delegation is possible and results in the rights returned.
   * - `UnrelatedError`: The capabilities from `parentCap` and `childCap` are unrelated and can't be compared nor delegated.
   * - `EscalationError`: It's clear that `childCap` is meant to be delegated from `parentCap`, but there's a rights escalation.
   */
  tryDelegating(parentCap: C, childCap: C): Ok | EscalationError | UnrelatedError
}
  • UCANs can contain "unrelated" capabilities to your service and services should be able to handle that.

💯

Yes, you can parse into MyServiceCapabilities | unknown but I don't think that's better than moving the generic-ness over to the canDelegate function.

I do not understand this. Could you please try reframing it?

@Gozala
Copy link
Author

Gozala commented Jan 27, 2022

Public Key, though I suppose it should be DID if we want this to be sufficiently general to handle other methods than did:key. You often want to record the agent delegating the action (debugging, logs), and need to know who the root owner of a resource is claimed to be (e.g. who owns the storage quota, whose filesystem is this?)

So is "to" here the DID of the audience for the leaf cap, and "from" the DID of the issuer of the root cap?

@Gozala
Copy link
Author

Gozala commented Jan 27, 2022

To follow up on my last comment, I guess it would make sense to return a representation of the proof chain so it's possible to introspect / capture how the claimed capability was met. So maybe instead of a check it should be something like:

type claim<C> = (capability: C, ucan: UCAN<C>, verifier: Verifier<C>) => Result<Escalation, Proof>
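A minimal sketch of how such a proof-returning check could behave, simplifying the UCAN to a flat delegation chain (root first) and using hypothetical Result/Escalation/Proof shapes; this is illustration only, not the library's API:

```typescript
type Capability = { with: string; can: string }
type Result<X, T> = { ok: false; error: X } | { ok: true; value: T }
type Escalation = { claimed: Capability; violated: Capability }
// Each Proof node records who delegated what, linking back to its own proof.
type Proof = { issuer: string; capability: Capability; via: Proof | null }

// Walks a delegation chain and either returns a Proof chain explaining how
// the claimed capability was met, or the Escalation that broke it.
const claim = (
  capability: Capability,
  chain: { issuer: string; capability: Capability }[]
): Result<Escalation, Proof> => {
  let proof: Proof | null = null
  for (const link of chain) {
    // Assumed covering rule: same ability, resource under the link's prefix.
    const covers =
      link.capability.can === capability.can &&
      capability.with.startsWith(link.capability.with)
    if (!covers) {
      return { ok: false, error: { claimed: capability, violated: link.capability } }
    }
    proof = { issuer: link.issuer, capability: link.capability, via: proof }
  }
  return proof !== null
    ? { ok: true, value: proof }
    : { ok: false, error: { claimed: capability, violated: capability } }
}
```

On success the caller gets the full chain of who delegated what, rather than just a boolean, so the result can be logged or audited.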

@expede
Copy link
Member

expede commented Jan 27, 2022

💔 I think name is confusing and something like CapabilityParser would be a lot less confusing even if imprecise.

Strong agree. We need to improve the language for a bunch of this stuff 💯

💚 I like that it takes care of comparing one capability at a time vs having to compare against a set (as in my proposal)
💔 On the other hand I'm concerned about something @expede mentioned in the past. It may be that claimed capability isn't satisfied by any specific available capability but is satisfied by a group e.g. claim to "modify" may be satisfied by capabilities to "read", "add", "delete". That is what my proposed interface attempted to handle, but maybe it's not the right tradeoff in terms of complexity vs flexibility.

Oooh I see, your earlier comments are starting to become a lot clearer!

This isn't "officially" a thing yet in 0.7 (though conceptually you could do it), but it has a section in the 0.8 spec ("rights amplification"). We may not need to support it today, but will in the near-to-medium term.

Per your earlier type...

verify: (claim: Capability, capabilities: Capability[]) => Array<Capability[]> | Promise<Array<Capability[]>>

...this is why you need to pass it all of the proofs and return the matching proof UCAN(s) and some way of pointing at the relevant capabilities inside them, so that you can recur. I think that you'll want a bit more information on both sides of that function, something like...

(issuer: DID, cap: Capability, proofs: Ucan[]) => Result<Error, {proofUcan: Ucan, proofCap: Capability}[]>
// Could alternately use an index inside the returned proofUcan. Passing the entire proof UCAN is just a convenient way to pass all of the relevant data to the next recursive check invocation.

The core ideas here are that:

  1. A UCAN issued by a resource owner doesn't need a proof to delegate it (so you need to know their DID)
  2. The relevant proof capabilities may come from one proof UCAN, or several. The suggested type here (I'm sure that it could be tighter) flattens the structure to make the output easier to work with: if more than one relevant capability comes from the same UCAN, we just add each to the array separately for simplicity's sake

The idea here being that you'll need both the relevant proof's capability, and access to its (recursive) list of proofs or the issuer DID (if the resource is owned by them).

Now, that's very general and takes a lot of information. We can make narrower versions of this function and pass them to a wrapper that will translate them into this more general context. Something like (issuer: DID, cap: Capability, proofCap: Capability) => boolean can be translated into the more general form (the earlier type) by a HOF that the library would provide.
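For illustration, a sketch of what such a wrapper HOF could look like, with all type names (SimpleCheck, GeneralCheck, Match) hypothetical: it lifts a pairwise check into the general form by scanning every capability of every proof UCAN and collecting the matches.

```typescript
type DID = string
type Capability = { with: string; can: string }
type Ucan = { issuer: DID; capabilities: Capability[]; proofs: Ucan[] }
type Match = { proofUcan: Ucan; proofCap: Capability }
type Result<X, T> = { ok: false; error: X } | { ok: true; value: T }

// The narrow, easy-to-write form...
type SimpleCheck = (issuer: DID, cap: Capability, proofCap: Capability) => boolean
// ...and the general form described above.
type GeneralCheck = (issuer: DID, cap: Capability, proofs: Ucan[]) => Result<Error, Match[]>

// The HOF: every (proofUcan, proofCap) pair that passes the narrow check is
// collected, flattening capabilities from several proof UCANs into one array.
const lift = (check: SimpleCheck): GeneralCheck => (issuer, cap, proofs) => {
  const matches: Match[] = []
  for (const proofUcan of proofs) {
    for (const proofCap of proofUcan.capabilities) {
      if (check(issuer, cap, proofCap)) matches.push({ proofUcan, proofCap })
    }
  }
  return matches.length > 0
    ? { ok: true, value: matches }
    : { ok: false, error: new Error("no matching proof capability") }
}
```

Each returned Match carries the whole proof UCAN, so the checker can recur into proofUcan.proofs (or stop at the issuer DID for root-owned resources), matching point 1 and point 2 above.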
