feat(pdp): add PDP capabilities #3

Merged · 7 commits · Nov 11, 2024

Conversation

@hannahhoward commented Nov 4, 2024

# Goals

Define simple invocations for PDP to enable tracking of what happens on a client

# Implement

Defines `pdp/accept` and `pdp/info` capabilities.
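
For a sense of what this looks like in Go, here is a minimal sketch of how such caveat types might be wired up with go-ipld-prime. The schema body and field names are illustrative assumptions; only the `PDPAcceptCaveatsType()` accessor shape mirrors the diff discussed below.

```go
// Minimal sketch, not the PR's actual schema: the caveat fields below are
// assumptions for illustration.
package pdp

import (
	ipldprime "github.com/ipld/go-ipld-prime"
	ipldschema "github.com/ipld/go-ipld-prime/schema"
)

var pdpTS *ipldschema.TypeSystem

func init() {
	// Load the IPLD schema once; panicking on a bad compile-time-constant
	// schema is the conventional go-ipld-prime pattern.
	ts, err := ipldprime.LoadSchemaBytes([]byte(`
		type PDPAcceptCaveats struct {
			piece Link
		}
		type PDPInfoCaveats struct {
			piece Link
		}
	`))
	if err != nil {
		panic(err)
	}
	pdpTS = ts
}

// PDPAcceptCaveatsType returns the schema type used to (de)serialize
// pdp/accept caveats.
func PDPAcceptCaveatsType() ipldschema.Type {
	return pdpTS.TypeByName("PDPAcceptCaveats")
}

// PDPInfoCaveatsType returns the schema type used to (de)serialize
// pdp/info caveats.
func PDPInfoCaveatsType() ipldschema.Type {
	return pdpTS.TypeByName("PDPInfoCaveats")
}
```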

# For Discussion

This refactor currently has a panic in it, and needs modification

@hannahhoward changed the title from "Feat/pdp capabilties" to "feat(pdp): add PDP capabilities" on Nov 4, 2024
@hannahhoward mentioned this pull request on Nov 4, 2024
Review on `pkg/pdp/pdp.go` (outdated):

```go
	pdpTS = ts
}

func PDPAcceptCaveatsType() ipldschema.Type {
```

Review comment (Member): Can we not prefix all types with PDP 😅 🙏

@hannahhoward merged commit d3008b1 into refactor/opts on Nov 11, 2024
4 checks passed

hannahhoward added a commit to storacha/storage that referenced this pull request on Nov 11, 2024:
# Goals

Implement PDP through Curio

# Implementation

The basic strategy here is as follows:
1. When PDP is present, it replaces a number of parts of the blob service, as they now happen directly through Curio. (I actually toyed with mirroring the Access and Presigner APIs, but it was a little awkward.)
2. The big chunk of new logic is the aggregator, which is kind of like a mini-w3filecoin pipeline but a lot simpler. I've built this with the goal of making it runnable on SQS queues just like the w3filecoin pipeline, or runnable locally with the job queue. The aggregator can be viewed in terms of its constituent packages:
   1. `pkg/pdp/aggregator/aggregate` handles calculating aggregate roots and assembling piece proofs (it's like a mini go-data-segment).
   2. `pkg/pdp/aggregator/fns` contains the core logic of each aggregation step, minus side effects/state.
   3. `pkg/pdp/aggregator/steps.go` contains the actual steps in aggregation, with side effects.
   4. `pkg/pdp/aggregator/local.go` contains the implementation of aggregation that can run on a single machine; an AWS version will come later.
3. PieceAdder essentially tells Curio "I have a new piece to upload", and Curio responds with an upload URL -- it's roughly analogous to our blob/allocate (except we do it in this node).
4. PieceFinder gets the piece CID from the blob hash and size. Importantly, this is probably the ugliest bit of logic for now, because Curio isn't currently set up for reliable read-on-write, so there is a chance we will get blob/accept BEFORE Curio is ready to give us the piece CID -- that's why there's a retry loop in there for now (see the sketch after this list).
5. `pkg/pdp/curio` is just the Curio client used to make HTTP calls to Curio.
6. I also implemented the ReceiptStore because I needed it.
7. I also added a utility for storing IPLD types in a datastore -- see `pkg/internal/ipldstore` -- I think this will be useful in the future (a rough sketch follows this list).
8. There is essentially one new invocation available on the UCAN endpoint -- `pdp/info` -- which you can use to get the state of your aggregation and the inclusion proof. This may still need some changes on auth though? Not sure.
9. `pdp/accept` is a self-issued invocation used by the server to track when PDP is done and save results.
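
To make the PieceFinder behaviour in item 4 concrete, here is a rough sketch of the retry loop. The `curioClient` interface, `FindPiece` signature, and error value are assumptions for illustration, not the actual `pkg/pdp/curio` API; only the retry-until-Curio-knows-the-piece idea comes from the description above.

```go
package pdp

import (
	"context"
	"errors"
	"time"

	"github.com/ipfs/go-cid"
	"github.com/multiformats/go-multihash"
)

// curioClient is a hypothetical stand-in for the client in pkg/pdp/curio.
type curioClient interface {
	// FindPiece asks Curio for the piece CID of a previously uploaded blob.
	FindPiece(ctx context.Context, blob multihash.Multihash, size uint64) (cid.Cid, error)
}

// errNotReady is an assumed sentinel for "Curio hasn't indexed the piece yet".
var errNotReady = errors.New("piece not yet indexed by curio")

// findPieceWithRetry polls Curio until it can report the piece CID, because
// blob/accept may arrive before Curio has finished its read-on-write.
func findPieceWithRetry(ctx context.Context, client curioClient, blob multihash.Multihash, size uint64) (cid.Cid, error) {
	for {
		pieceCID, err := client.FindPiece(ctx, blob, size)
		if err == nil {
			return pieceCID, nil
		}
		if !errors.Is(err, errNotReady) {
			return cid.Undef, err
		}
		select {
		case <-ctx.Done():
			return cid.Undef, ctx.Err()
		case <-time.After(time.Second): // back off briefly before asking again
		}
	}
}
```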
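
And a rough sketch of the "store IPLD types in a datastore" idea from item 7, assuming go-ipld-prime's bindnode for typed (de)serialization over a go-datastore backend. The generic `KVStore` API here is hypothetical, not the real `pkg/internal/ipldstore` interface.

```go
package ipldstore

import (
	"context"

	"github.com/ipfs/go-datastore"
	ipldprime "github.com/ipld/go-ipld-prime"
	"github.com/ipld/go-ipld-prime/codec/dagcbor"
	"github.com/ipld/go-ipld-prime/node/bindnode"
	"github.com/ipld/go-ipld-prime/schema"
)

// KVStore persists values of a single Go type T as dag-cbor in a datastore.
// The schema type is assumed to match T's bindnode representation.
type KVStore[T any] struct {
	ds  datastore.Datastore
	typ schema.Type
}

func NewKVStore[T any](ds datastore.Datastore, typ schema.Type) *KVStore[T] {
	return &KVStore[T]{ds: ds, typ: typ}
}

// Put encodes value with its IPLD schema type and writes it under key.
func (s *KVStore[T]) Put(ctx context.Context, key string, value *T) error {
	node := bindnode.Wrap(value, s.typ)
	data, err := ipldprime.Encode(node, dagcbor.Encode)
	if err != nil {
		return err
	}
	return s.ds.Put(ctx, datastore.NewKey(key), data)
}

// Get reads key and decodes it back into a value of type T.
func (s *KVStore[T]) Get(ctx context.Context, key string) (*T, error) {
	data, err := s.ds.Get(ctx, datastore.NewKey(key))
	if err != nil {
		return nil, err
	}
	proto := bindnode.Prototype((*T)(nil), s.typ)
	node, err := ipldprime.DecodeUsingPrototype(data, dagcbor.Decode, proto)
	if err != nil {
		return nil, err
	}
	return bindnode.Unwrap(node).(*T), nil
}
```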


The PR depends on the following:

- storacha/go-ucanto#31
- storacha/go-capabilities#2
- storacha/go-capabilities#3

as well as two new repos:

- https://github.com/storacha/go-piece
- https://github.com/storacha/go-jobqueue (extracted from indexing-service)

---------

Co-authored-by: ash <[email protected]>