Hybrid Cache API proposal #54647
Here we go: as usual I'll go back and forth between this and FusionCache to share my experience with it.
LGTM, one question though: can we expect that all the "extra state" needed will be in the concrete impl (eg:
Perfect, no notes 👍
Sorry but I did not understand this part, can you elaborate more?
Makes sense: interfaces (at least as of today) are less evolvable, and the only other approach would be interface-per-feature, where new features are defined in new interfaces added over time, which consumers may check for support (like the opt-in buffered distributed cache); but that is not always possible and in the long run may lead to a lot of different interfaces. Watch out for people asking for interfaces nonetheless though, because of testing etc (been there).
Imho it's worth exploring the idea, though I'm not so sure about how well it would work in practice over time: does anybody know of an example of a project that successfully used default interface members to evolve over time without breaking changes?
In general it looks good, but I don't see an overload of Also, I don't see methods to specify the distributed cache to use: will it be picked up automatically from DI (if one is registered)?
In FusionCache I solved this by having different methods on the builder, like If you are interested in some ideas, read here for more.
LGTM, but out of curiosity: why an interface here and not a class? Not that I would necessarily prefer one, but wouldn't the same rationale for Also, any plan to support the fluent builder approach without DI? I mean something like this:

```csharp
// OPTION 1
var builder = new HybridCacheBuilder()
    .WithFoo()
    .WithBar();

// OPTION 2 (like WebApplication.CreateBuilder)
var builder = HybridCache.CreateBuilder()
    .WithFoo()
    .WithBar();

var cache = builder.Build();
```

Currently FusionCache does not support it for... reasons... but I'm thinking about adding it, and other libs have already done it.
One note about naming: since there are both
100% agree, there's a lot of value even just for that. After all, that's the whole reason a library like Also, it seems the new memory cache thing (discussed here) will not have cache stampede protection, so it would be even more useful. Btw, about the L1: what will you use? The new memory cache mentioned above, the old
Agree, even though some people will end up asking for the sync version because sometimes that's the only thing you can do (luckily these places are fewer and fewer every day), just warning you.
Yes, good call, no absolute terms 👍
Does this mean that compression will be a cross-cutting concern implemented by the
Naming is hard. I would suggest a more specific
Naming is hard. I would suggest the use of "By" here:
Shouldn't
Good old Progressive Enhancement, this is the way.
One thing to note: since there will be a lot of tag values, that means high cardinality, which in the observability world (metrics in particular) can easily make systems explode.
Nice catch about being able to directly use One note though: does this design mean that if I provide my own serializer it will be used only for non-string/non-byte[] types or for any type?
Intuitive design 👍
Eh, this is delicate. I don't know if you have already thought about this, so I'll share my own experience. With this approach, since any reader can check the version signifier before proceeding, there will be no problems with corrupted entries: this is true. But the problem is what happens when upgrading a live system in the future (say from To share my experience with FusionCache: what I did was use the version signifier as an additional prefix/suffix in the cache key used in the distributed cache; in this way More space consumed in the distributed cache? Yes, but only temporarily, and only IF the entire system is not updated at the same time (this will depend on each user's scenario). Of course I'm not necessarily saying this is a better design; I'm just exposing a different one for you to reason about (if you haven't already!).
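A minimal sketch of the key-prefix idea described above (names are illustrative, not FusionCache's actual API): the payload-format version becomes part of the distributed-cache key, so entries written by an older app version are simply never read back by the new one, and expire on their own.

```csharp
// Hypothetical illustration of version-signifier-as-key-prefix.
const string PayloadVersion = "v2"; // bumped when the wire format changes

static string MakeDistributedKey(string logicalKey)
    => $"{PayloadVersion}:{logicalKey}"; // old "v1:..." entries are never read again
```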
Interesting! Haven't thought of this before.
So in the end you decided to go on with invalidation by tag? How did you solve the problems highlighted previously? I'll re-quote here for brevity:
The example was for a hypothetical Basically, the gist of it was that it is basically impossible to do invalidation by tag(s) consistently without really doing it, by instead trying to simulate it with a sort of "in-memory barrier" that does additional checks before returning a value. That was because the "invalidation dates" or similar would be stored in memory, and they would be wiped at the next restart of the app. Above you said "separately, the system maintains a cache of known tags and their last-invalidation-time", but still in memory? If so, the problems highlighted above still stand. Unless of course you came up with a different technique, in which case I'm really interested to discover what it will be 😬
Interesting! It's something I thought about for some time but haven't ended up doing yet, so I was wondering: what are you planning to do when the limits are crossed? Log it? Throw an exception? Skip/ignore?
Interesting, again. Never thought about this.
Since the timestamp is stored in the cache itself, it will also be fetched from L2 and inserted into L1 on the first get after the app has been restarted.
@andreaskromann I'm not following here, are you talking about each invalidation's timestamp? Where will it be stored? In a single cache entry for all the invalidations by tag(s)? One cache entry per tag?
on tag expiration; I was deliberately deferring on that, but:
So yes, tag expiration will outlive process restart
@jodydonetti I guess Marc explained it above, but yes, one entry per tag. Regarding the API proposal, I was positively surprised that so much of what we discussed made it into the proposal. It looks very promising. The auxiliary API for invalidations is still undefined, so it will be interesting to see. Another thing I noticed was that the concept of serving stale values with a background refresh (stale-while-revalidate) didn't make the cut.
Stale with background refresh is highly desirable, and I'm confident it will get added later. One huge problem is the safety of the callback (vs escaping the execution path of the calling code), meaning we'll need this to be opt-in and contextual per usage. That's another reason for the
I'll eagerly wait to see which design makes it work reasonably, can't wait 😬
Mmmh... that's why I asked: it's not the first time I've played with such an approach (and others) but they never really worked, at least not in a reasonable way. Tags are stored inside each entry, sure, but when a user asks for an entry they do so by cache key, and that is the only thing we know upfront. So here is what happens, with a concrete example for a get:
On top of this, should each "invalidation tag entry" go through the same cycle as any normal key? Eh, that's not necessarily so easy to answer, and a fun one. Anyway, at this point I think it's better if I just stop here and wait for the design/API surface to come out, so I can reason about something concrete and not waste your time with speculations or spoilers.
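For context, the tag check being debated above boils down to something like the following sketch (a purely illustrative pseudo-implementation, not the actual design): on every get, compare the entry's creation time against the last-invalidation-time of each of its tags.

```csharp
// Hypothetical sketch of the "tag barrier" check discussed above.
// lastInvalidated maps tag -> most recent invalidation timestamp.
static bool IsStillValid(
    DateTimeOffset entryCreated,
    string[] entryTags,
    IReadOnlyDictionary<string, DateTimeOffset> lastInvalidated)
{
    foreach (var tag in entryTags)
    {
        if (lastInvalidated.TryGetValue(tag, out var invalidatedAt)
            && invalidatedAt >= entryCreated)
        {
            return false; // invalidated after creation: treat as a cache miss
        }
    }
    return true;
}
```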
Next step: address the feedback and put a clean copy of the API in a comment and/or PR. We can finalize over email/GH.
merged: #55084
actually, I need to check the normal "who closes, when" - reopening
@jodydonetti @joegoldman2 you both raised the
API Approved (offline)!
@mgravell I know that you almost answered my notes some time ago, but then the mobile app discarded your answer; I see that this has already been closed, but do you think you'll be able to find some time to type them again? ps: of course the parts related to tag invalidation are already answered in the other issue, which I'll answer in a moment.
That API wasn't considered complete enough in terms of understanding use-cases. At this point it will not be present in .NET 9, but yes, we are happy to consider it in .NET 10 if we can understand the scenarios, and yes, the
Apologies if this is covered, but it would be great to have an efficient way to 'GetOrSet' a batch of items in the cache, and then invoke a factory (in a batch) on the missing items; i.e.

```csharp
var allItems = await cache.GetOrSetManyAsync(allItemKeys, factory: items =>
{
    // where items is the subset of items which were not in the cache
});
```
Hi. I have a case where I just need to check if something exists in the cache, but not add it if it doesn't. For example:
The GetOrCreateAsync method will always create a new entry, right? What if I just return null from the factory? Will that work? Deleting the item after every check seems a bit clumsy in my opinion. Thanks,
There are optional flags that allow individual actions in the pipe to be suppressed - allowing for L1, L2 and the underlying data source to be suppressed independently. However! I think in this case manifesting a state that itself holds the downlevel status (allowed, blocked, etc) might be preferable.
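To illustrate the flags approach, here is a sketch assuming flags along the lines of the HybridCacheEntryFlags linked later in this thread; MyValue is a placeholder for your cached type:

```csharp
// Probe L1/L2 without ever invoking the factory: with underlying data
// disabled, a miss just returns default instead of creating an entry.
var probeOptions = new HybridCacheEntryOptions
{
    Flags = HybridCacheEntryFlags.DisableUnderlyingData
};

MyValue? value = await cache.GetOrCreateAsync<MyValue?>(
    "some-key",
    _ => default, // never invoked while the flag is set
    probeOptions);

bool exists = value is not null;
```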
@mgravell Thanks. I assume it's
@mgravell Haven't found this scenario documented anywhere. Will an L2 cache hit add to L1?
@mgravell How should we handle this? This is a real scenario where the backend (L2) might be down, but the application should still continue to work with the local cache (L1). Can you share some sample code?
Just a question: how critical is the cache for your system? I have seen systems where everything starts to become very slow when the cache fails. One reason could be using a database that resides in some other cloud or on premises. In those cases it is better that the system itself fails when the cache is not there. In fact, we have implemented a health check such that if the backing cache service is not available it will report the system as failed.
Another approach is to temporarily reuse the expired value in case the factory is taking too long, and let it complete in the background. You'll get the best of both worlds: always-fast responses + updated cached values as soon as possible.
In our case, we'd rather be slow for some time than be completely down.
Yeah, I get that. Maybe this should be a configurable option.
Is there a way to gracefully escape from The following wrapper is a combination of my "escaping" need and the "better the slow way than no way" from above.
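The wrapper itself didn't survive in the thread, so here is a purely hypothetical reconstruction of that combination (GetOrCreateWithEscapeAsync and the deadline parameter are invented names for illustration): try the cache with a deadline, and fall back to invoking the factory directly if the cache path (e.g. a struggling L2) doesn't answer in time.

```csharp
public static async ValueTask<T> GetOrCreateWithEscapeAsync<T>(
    HybridCache cache,
    string key,
    Func<CancellationToken, ValueTask<T>> factory,
    TimeSpan deadline,
    CancellationToken cancellationToken = default)
{
    using var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
    cts.CancelAfter(deadline);
    try
    {
        // normal path: stampede protection, L1/L2, etc.
        return await cache.GetOrCreateAsync(key, factory, cancellationToken: cts.Token);
    }
    catch (OperationCanceledException) when (!cancellationToken.IsCancellationRequested)
    {
        // cache path timed out: "better the slow way than no way" -
        // go straight to the underlying data source
        return await factory(cancellationToken);
    }
}
```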
Since this is a drop-in replacement, is this the preferred caching library moving forward? Will this cause
Hi, not a member of the team, but still:
I'll let the team give an official answer, but for me I would say yes: you can use HybridCache is not just an implementation though, it's also an abstraction: in fact This means other 3rd party OSS developers may implement a different one with different features: one of them is FusionCache (shameless plug 😅), you can read more about it here.
Nope, as they are the underlying building blocks.
I'll leave an official answer to the team. Hope this helps.
Jody is right; they "get it".
One answer here is: if you're a backend library author, providing the IDistributedCache backend for YourNewCacheServiceTM. For compatibility reasons, we also won't stop you using the backend abstraction - we wouldn't choose to break existing code. More: a question came in a few days ago as to whether HybridCache should provide an IDistributedCache implementation, to add L1 into suitable pre-existing code, but honestly if you want that: my preference would be to migrate to HybridCache.
Excited for this! I don't see any mention of a backplane for events such as memory cache eviction notices - will that be supported somehow? As users of Jody's FusionCache, it is an amazing feature.
As a library developer with libraries that depend on and consume
What's the easiest way of unit testing a class that uses the I guess I could create a service collection and use
That. If you don't register an
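For reference, a minimal sketch of that service-collection approach for tests (assuming the AddHybridCache registration discussed in this proposal; with no IDistributedCache registered it runs as in-process L1 only):

```csharp
var services = new ServiceCollection();
services.AddHybridCache();

await using var provider = services.BuildServiceProvider();
var cache = provider.GetRequiredService<HybridCache>();

// exercise the class under test with a real, in-memory-only HybridCache;
// MyService is the hypothetical class under test
var sut = new MyService(cache);
```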
@damianh
Can we indicate only local or distributed when setting a value? Not every cache entry needs to be distributed, even if an IDistributedCache has been registered.
Yes, there is a
cool, and thanks for the reply 👍
Hello, check the flags property of the entry options:
https://source.dot.net/#Microsoft.Extensions.Caching.Abstractions/Hybrid/HybridCacheEntryFlags.cs,0796b099317330da
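Using those flags, a local-only write could look roughly like this (a sketch based on the HybridCacheEntryFlags values linked above):

```csharp
var localOnly = new HybridCacheEntryOptions
{
    Flags = HybridCacheEntryFlags.DisableDistributedCache // skip L2 entirely
};

// value is whatever you want cached in-process only
await cache.SetAsync("session-scratch", value, localOnly);
```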
The key does not get saved to Garnet when registered like this:

```csharp
builder.Services.AddHybridCache(options =>
{
    options.MaximumPayloadBytes = 1024 * 1024;
    options.MaximumKeyLength = 1024;
    options.DefaultEntryOptions = new HybridCacheEntryOptions
    {
        Expiration = TimeSpan.FromMinutes(5),
        LocalCacheExpiration = TimeSpan.FromMinutes(5)
    };
});

builder.Services.AddStackExchangeRedisCache(options =>
{
    options.Configuration =
        builder.Configuration.GetConnectionString("RedisConnectionString");
});
```

when I call the
@SongOfYouth Are you using the v9 (preview) version of the Microsoft.Extensions.Caching.StackExchangeRedis library? (the v8 version doesn't work with Garnet, but the v9 version does)
You are right, I tried the preview version and it works.
Maybe it is necessary to provide a cache.GetAsync(key, typeof(MyType)) overload, because I don't know the real type when using it for AOP caching.
This is the API proposal related to Epic: IDistributedCache updates in .NET 9
Hybrid Cache specification
Hybrid Cache is a new API designed to build on top of the existing `Microsoft.Extensions.Caching.Distributed.IDistributedCache` API, to fill multiple functional gaps in the usability of the `IDistributedCache` API, including stampede protection, multi-tier (in-process plus out-of-process) storage, configurable serialization, tag-based invalidation, and metrics.
Overview

The primary API is a new `abstract` class, `HybridCache`, in a new `Microsoft.Extensions.Caching.Distributed` package. This type acts as the primary API that users will interact with for caching using this feature, replacing `IDistributedCache` (which now becomes a backend API); the purpose of `HybridCache` is to encapsulate the state required to implement the new functionality. This required additional state means that the feature cannot be implemented simply as extension methods on top of `IDistributedCache` - for example, for stampede protection we need to track a bucket of in-flight operations so that we can join existing backend operations. Every feature listed (except perhaps for the pass-thru API usage) requires some state or additional service.

Microsoft will provide a concrete implementation of `HybridCache` via dependency injection, but it is explicitly intended that the API can be implemented independently if desired.

Why "Hybrid Cache"?

This name seems to capture the multiple roles being fulfilled by the cache implementation. A number of options have been considered, including "read thru cache", "advanced cache", "distributed cache 2"; this seems to work, though.
Why not `IHybridCache`?

Providing this at the definition level halves this aspect of the API surface for concrete implementations, providing a consistent experience. An `IHybridCache` interface is also harder to extend than an abstract base class, which can implement features with default implementations that implementors can `override` as desired.

It is noted that in both cases "default interface methods" also serve this function; they provide a mechanism to achieve this same goal with an `IHybridCache` approach. If we feel that "default interface methods" are now fully greenlit for this scenario, we could indeed use an `IHybridCache` approach.

Registering and configuring `HybridCache`

Registering hybrid cache is performed by `HybridCacheServiceExtensions`, where `IHybridCacheBuilder` functions purely as a wrapper (via `.Services`) to provide contextual API services to configure related services such as serialization; this is for API discoverability, for example making it trivial to configure serialization rather than having to magically know about the existence of specific services that can be added to influence behaviour. The return value is the same input services collection, for chaining purposes.

The `HybridCacheOptions` provides additional global options for the cache, including the payload max quota and a default cache configuration (primarily: lifetime).

The user will often also wish to register an out-of-process `IDistributedCache` backend (Redis, SQL Server, etc) in the usual manner, as discussed here. Note that this is not required; it is anticipated that simply having the L1 cache with stampede protection against the backend provides compelling value. Options specific to the chosen `IDistributedCache` backend will be configured as part of that `IDistributedCache` registration, and are not considered here.

Using `HybridCache`

The `HybridCache` instance will be dependency-injected into code that requires it; from there, the primary API is `GetOrCreateAsync`, which provides a stateless and stateful overload pair.

It should be noted that all APIs are designed as `async`, with `ValueTask<T>` used to respect that values may be available synchronously (in the cache-hit case); however, the fact that we're caching means we can reasonably assume this operation will be non-trivial, and possibly one or both of an out-of-process backend store call (with non-trivial payload) and an underlying data fetch (with non-trivial total time); `async` is strongly desirable.

The simplest use-case is the stateless option, typically used with a lambda callback using "captured" state. The `GetOrCreateAsync` name is chosen for parity with `IMemoryCache`; it takes a `string` key, and a callback that is used to fetch the underlying data if it is not available in any other cache. In some high-throughput scenarios, it may be preferable to avoid this capture overhead using a `static` callback and the stateful overload.

Optionally, this API allows a `HybridCacheEntryOptions`, controlling the duration of the cache entry (see below). For the options, timeout is only described in relative terms.

The `Flags` also allow features such as specific caching tiers or compression to be electively disabled on a per-scenario basis. It is intended that entry options should usually be shared (`static readonly`) and reused on a per-scenario basis; to this end, the type is immutable. If no `options` is supplied, the default from `HybridCacheOptions` is used; this has an implied "reasonable" default timeout (low minutes, probably) in the eventuality that none is specified.

In many cases, `GetOrCreateAsync` is the only API needed, but additionally, `HybridCache` has auxiliary APIs. These provide for explicit manual fetch/assignment, and for explicit invalidation at the `key` or `tag` level. The `HybridCacheEntry` type is used only to encapsulate return state for `GetAsync`; a `null` response indicates a cache-miss.

Backend services

To provide the enhanced capabilities, some new additional services are required; `IDistributedCache` has both performance and feature limitations that make it incomplete for this purpose. For out-of-process caches, the `byte[]` nature of `IDistributedCache` makes for allocation concerns, so a new API is optionally supported, based on similar work for Output Cache; however, the system functions without demanding it, and all pre-existing `IDistributedCache` implementations will continue to work. The system will type-test for the new capability: if the `IDistributedCache` service injected also implements this optional API, these buffer-based overloads will be used in preference to the `byte[]` API. We will absorb the work to implement this API efficiently in the Redis implementation, and advise on others.

This feature has been prototyped using a FASTER backend cache implementation; it works very well (the top half of the benchmark table uses `IDistributedCache`; the bottom half uses `IBufferDistributedCache`, and assumes the caller will utilize pooling etc, which hybrid cache will).

Similarly, invalidation (at the `key` and `tag` level) will be implemented via an optional auxiliary service; however, this API is still in design and is not discussed here.

It is anticipated that cache hit/miss/etc usage metrics will be reported via normal profiling APIs. By default this will be global, but by enabling `HybridCacheOptions.ReportTagMetrics`, per-tag reporting will be enabled.

Serializer configuration

By default, the system will "just work", with defaults:

- `string` will be treated as UTF-8 bytes
- `byte[]` will be treated as raw bytes
- everything else will use `System.Text.Json`, as a reasonable in-box experience

However, it is important to be able to configure other serializers. Towards this, two serialization APIs are proposed. With this API, serializers can be configured at both granular and coarse levels using the `WithSerializer` and `WithSerializerFactory` APIs at registration; for any `T`, if an `IHybridCacheSerializer<T>` is known, it will be used as the serializer. Otherwise, the set of `IHybridCacheSerializerFactory` entries will be enumerated; the last (i.e. most recently added/overridden) factory that returns `true` and provides a `serializer` wins (this value may be cached), with that `serializer` being used. This allows, for example, a protobuf-net serializer implementation to detect types marked `[ProtoContract]`, or the use of `Newtonsoft.Json` to replace `System.Text.Json`.

Binary payload implementation

The payload sent to `IDistributedCache` is not simply the raw buffer data; it also contains header metadata.

All times are managed via `TimeProvider`. Upon fetching an entry from the cache, the expiration is compared using the current time; expired entries are discarded as though they had not been received (this avoids a problem with time skew between in-process and out-of-process stores, although out-of-process stores are still free to actively expire items).

Separately, the system maintains a cache of known tags and their last-invalidation-time (in absolute terms); if a cache entry has any tag that has a last-invalidation-time after the creation time of the cache entry, then it is discarded as though it had not been received. This effectively implements "tag" expiration without requiring that a backend is itself capable of categorized/"tagged" deletes (this feature is not efficient or effective to implement in Redis, for example).
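Pulling the pieces above together, end-to-end usage might look like the following sketch (illustrative only: names follow this proposal, but exact signatures are the very thing under review; Customer and FetchFromDbAsync are placeholders):

```csharp
public class SomeService(HybridCache cache)
{
    // stateless overload: lambda with captured state
    public async Task<Customer> GetCustomerAsync(string id, CancellationToken ct)
        => await cache.GetOrCreateAsync(
            $"customer/{id}",
            async token => await FetchFromDbAsync(id, token),
            tags: new[] { "customers" },
            cancellationToken: ct);

    // explicit invalidation, per key or per tag
    public async Task InvalidateAsync(string id)
    {
        await cache.RemoveAsync($"customer/{id}");
        await cache.RemoveByTagAsync("customers");
    }

    private Task<Customer> FetchFromDbAsync(string id, CancellationToken ct)
        => throw new NotImplementedException(); // the underlying data fetch
}

public record Customer(string Id, string Name);
```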
Additional implementation notes and assumptions

- keys and tags must be non-`null`, non-empty `string` values
- the key and payload are validated against the data received from `IDistributedCache` (with mismatches logged and the entry discarded)
- callers are expected to know (via the `<T>`) what they are requesting; if this is incorrect for the received data, an error may occur
- it is assumed that any/all necessary encryption is provided by the `IDistributedCache` registration, and the backend store is secure from tampering and exfiltration. Specifically: the data will not be additionally encrypted
- keys are used verbatim and are case-sensitive: `foo` and `FOO` are separate; `a-b` and `a%2Db` are separate, etc; if the data retrieved has a non-matching `key`, it will be logged and discarded
- the implementation will use an ordinal `string` comparer and will apply safe logic; it will not be possible to specify a custom comparer (where n := number of chars in the key and m := number of bytes in the value)
- the payload format used in `IDistributedCache` is not explicitly documented (it is an implementation detail), and should be treated as an opaque BLOB
- if the caller requests `<Foo>` and `<Bar>` (different types) with the same cache key: the behaviour is undefined (it may or may not error, depending on the serializer and type compatibility); likewise, if a type is heavily refactored (i.e. in a way that impacts serializer compatibility) without changing the cache key: the behaviour is undefined