
Add new caching infrastructure #153

Open

dgryski wants to merge 25 commits into main from dgryski/http-cache
Conversation

Member

@dgryski dgryski commented Mar 6, 2025

No description provided.

@dgryski dgryski force-pushed the dgryski/http-cache branch 10 times, most recently from 2dac925 to b9e2d5e on March 13, 2025 00:50
@dgryski dgryski force-pushed the dgryski/http-cache branch 2 times, most recently from 58268e8 to c9d31f8 on March 14, 2025 19:42
@dgryski dgryski force-pushed the dgryski/http-cache branch from c9d31f8 to b631a1c on March 18, 2025 17:15
@cee-dub cee-dub self-requested a review March 20, 2025 20:35
Collaborator

@cee-dub cee-dub left a comment

So far this is looking good! I want to submit my comments so far, though I still need to review internal/abi/fastly/httpcache_guest.go and integration_tests/httpcache/main.go.

Collaborator

@cee-dub cee-dub left a comment

Reviewed integration tests. Still have to read the full implementation.

return nil
}

func testRequestCollapseThree(ctx context.Context) error {
Collaborator

Is this collapsing or just normal caching behavior? Does the "slow backend" on r mean r2 and r3 are outstanding requests on the key before r can be resolved, and that's what makes it collapsing? If so, please add a comment to explain.

Member Author

Yes. Although since wasm is single-threaded, I bet the time.Sleep() ends up blocking and will complete before r2 or r3 call Send.

And indeed yes this is exactly what is happening:

stdout | 840c6049 | service built with tinygo
stdout | 840c6049 | uri= https://http-me.glitch.me/anything/1213559501/
stdout | 840c6049 | sending r1
stdout | 840c6049 | sleeping in AfterSend
stdout | 840c6049 | done sleeping
stdout | 840c6049 | sending r2
stdout | 840c6049 | sending r3

This test will need to be rewritten with some channels and goroutines to ensure the timing I want actually happens.
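For reference, a minimal sketch of the channel-coordinated rewrite being described, assuming the CacheOptions.AfterSend hook shown elsewhere in this diff; the backend name, URI, helper name, and channel wiring are illustrative placeholders, not the actual test code. (As the next comment notes, this shape currently blocks, because Send is a blocking hostcall.)

```go
import (
	"context"

	"github.com/fastly/compute-sdk-go/fsthttp"
)

// testRequestCollapseThreeSketch coordinates r1's AfterSend with r2/r3 via
// channels instead of time.Sleep, so r2 and r3 are demonstrably outstanding
// on the same key while r1 is still unresolved.
func testRequestCollapseThreeSketch(ctx context.Context) error {
	const backend = "origin"
	uri := "https://example.com/collapse"

	inAfterSend := make(chan struct{})
	release := make(chan struct{})

	r1, err := fsthttp.NewRequest("GET", uri, nil)
	if err != nil {
		return err
	}
	r1.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
		close(inAfterSend) // signal that r1 has reached AfterSend
		<-release          // hold r1 unresolved until r2/r3 have been sent
		return nil
	}

	errs := make(chan error, 3)
	go func() {
		_, err := r1.Send(ctx, backend)
		errs <- err
	}()

	<-inAfterSend // wait until r1 is mid-flight before issuing r2 and r3
	for i := 0; i < 2; i++ {
		go func() {
			r, err := fsthttp.NewRequest("GET", uri, nil)
			if err != nil {
				errs <- err
				return
			}
			_, err = r.Send(ctx, backend)
			errs <- err
		}()
	}
	close(release)

	for i := 0; i < 3; i++ {
		if err := <-errs; err != nil {
			return err
		}
	}
	return nil
}
```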

Member Author

Ok, so I rewrote the test to use goroutines + channels etc., and the call to r2.Send() just blocks. This is because when we yield in time.Sleep() to another goroutine that makes a blocking hostcall, control never returns to the first goroutine to finish handling the request. Solving this means (I think) exposing a SendAsync() method on fsthttp.Request and moving away from the "just write blocking calls -- the user will manage concurrency with goroutines" model we had previously been operating under.
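For discussion, a rough sketch of what a SendAsync-style API could look like. Every name here (SendAsync, PendingResponse, Ready, Wait) is hypothetical; none of it exists in the SDK today.

```go
package fsthttpsketch

import (
	"context"

	"github.com/fastly/compute-sdk-go/fsthttp"
)

// PendingResponse would wrap the handle returned by a non-blocking send
// hostcall, letting the caller poll or park while other goroutines keep running.
type PendingResponse struct {
	// opaque async handle from the host (hypothetical)
}

// Ready reports, without blocking, whether the response has arrived.
func (p *PendingResponse) Ready() (bool, error) {
	panic("sketch only")
}

// Wait parks the calling goroutine until the response or an error is available.
func (p *PendingResponse) Wait(ctx context.Context) (*fsthttp.Response, error) {
	panic("sketch only")
}

// The method itself would live alongside Send on fsthttp.Request, roughly:
//
//	func (req *Request) SendAsync(ctx context.Context, backend string) (*PendingResponse, error)
```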

Collaborator

I'm starting to think we may also want a SendWait(), but it sounds like that's already how Send() works in TinyGo today, even with a goroutine and a polling loop.


const headerUA = "user-agent"

r.CacheOptions.AfterSend = func(r *fsthttp.CandidateResponse) error {
Collaborator

Does this need to have a sleep to allow for/de-flake actual collapsing?

Member Author

I've never had this test flake. Not sure if that means this is a bad test...
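If a de-flaking sleep were added as suggested, it would presumably look something like the following inside the hook shown above (duration is arbitrary, and the "time" import is assumed); note that, per the earlier collapse-test thread, a bare time.Sleep here stalls the entire single-threaded wasm instance, not just this one request.

```go
r.CacheOptions.AfterSend = func(cr *fsthttp.CandidateResponse) error {
	// Illustrative only: hold this candidate response briefly so other requests
	// on the same cache key are still outstanding (and therefore collapsed)
	// when it resolves. Caveat: under single-threaded wasm, this sleep blocks
	// every goroutine, not just this request.
	time.Sleep(500 * time.Millisecond)
	return nil
}
```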

return fmt.Errorf("response.Vary()=%v, want %v", got, want)
}

if got, wantLen := resp.SurrogateKeys(), 10; len(got) < wantLen {
Collaborator

Where does 10 come from?

Member Author

Random length to ensure it's "actually" a SHA checksum, I guess? Copied from the JavaScript code.

Collaborator

Would love to figure out the correct provenance and make it obvious.

return nil
}

func testAfterSendCandidateResponsePropertiesCached(ctx context.Context) error {
Collaborator

How important do we think this test case is, and what sources do we have to reconcile the issues?

Seems like a decent candidate for return ErrSkip.

// ;;; the required size is written by the host to `nwritten_out` and the `$buflen`
// ;;; error is returned.
// ;;;
// ;;; At the moment, HTTP cache keys must always be 32 bytes.
Collaborator

WUT. How does this deal with cache keys over 32 bytes?! Seems like many URLs would be longer than that...

Member Author

Looking at another part of the Rust SDK, it seems like this cache key is a salted SHA-256 of the key, but I don't know if that's something specific to the Rust SDK or behaviour I need to match in Go. OTOH, reading the Rust code, it seems like the user is just supposed to set the correct cache key in the cache metadata struct with no verification.
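For illustration, this is why a digest-based key is always exactly 32 bytes: crypto/sha256 produces a fixed-size 32-byte digest regardless of how long the URL is. Whether and how the host or SDK salts the digest is exactly the open question above; the salt shown here is a placeholder, not confirmed behavior.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// A SHA-256 digest is always 32 bytes, no matter how long the input URL is,
	// which would explain the "HTTP cache keys must always be 32 bytes" constraint.
	url := "https://example.com/some/very/long/path?with=query&params=that&exceed=32&bytes=easily"

	salt := []byte("hypothetical-service-salt") // placeholder; real salting behavior unconfirmed
	h := sha256.New()
	h.Write(salt)
	h.Write([]byte(url))
	key := h.Sum(nil)

	fmt.Println(len(key)) // 32
}
```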

Collaborator

My understanding of cache keys for Fastly cache in general is opaque bytes up to some size (like 1500?)

Collaborator

Still finding the comment about 32 bytes strange. Check with the team?

Member Author

That doc comment comes directly from the witx file defining the Compute API. I'll see what I can find out.

Collaborator

Right! I didn't think it was from you. Curious about what they say.

@dgryski dgryski force-pushed the dgryski/http-cache branch from ef4dd16 to 14a174d on March 25, 2025 22:58
Collaborator

@cee-dub cee-dub left a comment

Only nit-level items remain.

return fmt.Errorf("candidate has age=%v or err=%v, want 0, nil", age, err)
}

if ttl, err := r.TTL(); ttl != 3600 || err != nil {
Collaborator

Must be.

// ;;; the required size is written by the host to `nwritten_out` and the `$buflen`
// ;;; error is returned.
// ;;;
// ;;; At the moment, HTTP cache keys must always be 32 bytes.
Collaborator

Still finding the comment about 32 bytes strange. Check with the team?

@dgryski dgryski force-pushed the dgryski/http-cache branch from d1b069b to b16011f on March 28, 2025 23:37