New memory cache implementation #48567

Open
davidfowl opened this issue Feb 21, 2021 · 64 comments

Labels: api-suggestion (Early API idea and discussion, it is NOT ready for implementation), area-Extensions-Caching
Milestone: Future

Comments

@davidfowl
Member

davidfowl commented Feb 21, 2021

Background and Motivation

The MemoryCache implementation and interface leave much to be desired. At the core of it, we ideally want to expose something more akin to ConcurrentDictionary<TKey, TValue> that supports expiration and can handle memory pressure. What we have right now has issues, including:

  • It's non generic which means it boxes value types (keys and values)

Proposed API

The APIs are still TBD, but I'm thinking of a generic memory cache.

namespace Microsoft.Extensions.Caching
{
    public class MemoryCache<TKey, TValue>
    {
        public TValue this[TKey key] { get; set; }
        public bool IsEmpty { get; }
        public int Count { get; }
        public ICollection<TKey> Keys { get; }
        public ICollection<TValue> Values { get; }
        public void Clear();
        public bool ContainsKey(TKey key);
        public IEnumerator<KeyValuePair<TKey, TValue>> GetEnumerator();
        public KeyValuePair<TKey, TValue>[] ToArray();

        public bool TryAdd(TKey key, CacheEntry<TValue> value);
        public bool TryGetValue(TKey key, [MaybeNullWhen(false)] out TValue value);
        public bool TryRemove(TKey key, [MaybeNullWhen(false)] out TValue value);
    }

    public class CacheEntry<TValue>
    {
        public TValue Value { get; set; }
        public DateTimeOffset? AbsoluteExpiration { get; set; }
        public TimeSpan? AbsoluteExpirationRelativeToNow { get; set; }
        public TimeSpan? SlidingExpiration { get; set; }
        public IList<IChangeToken> ExpirationTokens { get; }
        public IList<PostEvictionCallbackRegistration> PostEvictionCallbacks { get; }
        public CacheItemPriority Priority { get; set; }
        public long? Size { get; set; }
    }
}

I'm convinced now that this shouldn't be an interface or an abstract class but I'm open to discussion.

Usage Examples

TBD
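
While the section is TBD, here's an illustrative sketch against the proposed surface above (User and FetchUser are hypothetical placeholders, not part of the proposal):

var cache = new MemoryCache<string, User>();

// Add an entry with a sliding expiration; CacheEntry carries the policy.
cache.TryAdd("user-17", new CacheEntry<User>
{
    Value = FetchUser(17),
    SlidingExpiration = TimeSpan.FromMinutes(5)
});

// Typed lookup: no boxing and no cast.
if (cache.TryGetValue("user-17", out User user))
{
    // cache hit
}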

Alternative Designs

TBD

Risks

Having 3 implementations.

cc @Tratcher @JunTaoLuo @maryamariyan @eerhardt

@davidfowl davidfowl added the api-suggestion Early API idea and discussion, it is NOT ready for implementation label Feb 21, 2021
@dotnet-issue-labeler dotnet-issue-labeler bot added area-Extensions-Caching untriaged New issue has not been triaged by the area owner labels Feb 21, 2021
@ghost

ghost commented Feb 21, 2021

Tagging subscribers to this area: @eerhardt, @maryamariyan, @michaelgsharp
See info in area-owners.md if you want to be subscribed.


@adamsitnik
Member

It's non generic which means it boxes value types (keys and values)

How is it typically used by end-users: to cache multiple instances of the same type, or of different types? If we make it generic but users end up creating more cache instances, we might end up with a bigger memory overhead than just the boxing in the non-generic version.

@adamsitnik
Member

        DateTimeOffset? AbsoluteExpiration { get; set; }
        TimeSpan? AbsoluteExpirationRelativeToNow { get; set; }
        TimeSpan? SlidingExpiration { get; set; }

Would it be possible to make these properties setter-only, to keep their internal representation an implementation detail? We could then, for example, translate AbsoluteExpiration and AbsoluteExpirationRelativeToNow to ticks like Stack Overflow did in their implementation:

        private long _absoluteExpirationTicks;
        private readonly uint _slidingSeconds;

We could also take @edevoogd's experiment into consideration: #45842 (comment)
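
A rough sketch of what that setter-only shape could look like (field names following the Stack Overflow snippet above; an illustration, not a worked-out design):

public class CacheEntry<TValue>
{
    private long _absoluteExpirationTicks = -1; // -1 == no absolute expiration
    private uint _slidingSeconds;               // 0 == no sliding expiration

    public DateTimeOffset AbsoluteExpiration
    {
        set => _absoluteExpirationTicks = value.UtcTicks;
    }

    public TimeSpan AbsoluteExpirationRelativeToNow
    {
        set => _absoluteExpirationTicks = DateTimeOffset.UtcNow.Add(value).UtcTicks;
    }

    public TimeSpan SlidingExpiration
    {
        set => _slidingSeconds = (uint)value.TotalSeconds;
    }
}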

@adamsitnik
Member

The StackOverflow implementation also introduced an AccessCount field to the cache entry: #45592 (comment)

@adamsitnik
Member

cc @NickCraver @mgravell

@davidfowl
Member Author

How is it typically used by end-users: to cache multiple instances of the same type, or of different types? If we make it generic but users end up creating more cache instances, we might end up with a bigger memory overhead than just the boxing in the non-generic version.

Libraries end up with their own private caches. Apps usually have a single cache.

@AndersMad

Wish: Pre- vs. PostEvictionCallbacks (instead of, or in addition to, the existing ones)

Short: called before eviction, with an option to skip eviction and keep the cache entry.

Long: skipping could enable:

  • A way to swap the cached value with fresh data, e.g. if the hit count is high, refresh the data (from the callback) instead of removing it and making the first user wait for a new value. The call should be async, as IO is expected.
  • Smart extended expiration based on hit count, the state of the current data, etc.

And a big 👍 for the main topic! This could let me remove the reflection code I use to get at the main collection, and make the cache a lot faster. I think the hardest part of a cache is estimating the memory. Came here to clone - hope the above becomes a thing :)
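
For concreteness, the wished-for hook might look something like this (purely hypothetical shapes; only EvictionReason exists today, in Microsoft.Extensions.Caching.Memory):

public enum PreEvictionResult
{
    Evict,          // proceed with eviction as usual
    Keep,           // skip eviction and keep the current entry
    KeepAndRefresh  // skip eviction; the callback swapped in fresh data
}

// Async, because refreshing the value is expected to involve IO:
public delegate ValueTask<PreEvictionResult> PreEvictionCallback<TValue>(
    object key, CacheEntry<TValue> entry, EvictionReason reason);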

@davidfowl
Member Author

Short: Called before eviction with an option to skip eviction / keep the cache entry.

Can you file an issue for this? With details and usage patterns etc.

@maryamariyan maryamariyan removed the untriaged New issue has not been triaged by the area owner label Mar 4, 2021
@maryamariyan maryamariyan added this to the Future milestone Mar 4, 2021
@alastairtree

alastairtree commented Mar 15, 2021

There is a long-running discussion on why I have not implemented Clear() in LazyCache over at alastairtree/LazyCache#67, which is largely due to the use of MemoryCache and its current API. If this went ahead, I suspect I would rewrite LazyCache to use MemoryCache<TKey, TValue>, as David's proposal would make that much easier.

Worth considering usage scenarios - are you suggesting one cache per item type, or one cache per app? Having TValue in the definition would encourage me to have one cache per item type, which makes doing Clear on all cached data (a common use case) more difficult as there are many caches. Typically, up to now, apps usually have one cache, with the TValue known at the time of Get, but that does cause the boxing issue.

@davidfowl
Member Author

I think having lots of caches is fine (we basically have this today all over the framework). It should be treated like a dictionary with expiration.

@JoeShook

I would like to see cache-hit and cache-miss EventCounters per cache. I want to be able to capture metrics in something like InfluxDB, and filter the cache name as a tag and the cache-hit/cache-miss as a field.

Today I am experimenting with LazyCache and adding metrics. I found that with my strategy of collecting counters by cache name, it becomes a dynamic process of adding counters as they are called, rather than knowing the cache names ahead of time and having to create a specific EventSource for every application. At the moment I have not reached my destination with LazyCache and many named caches, but the point is I would hope some thought is put towards metrics, or at least hooks to allow metrics to be plugged in.

Looking around the .NET ecosystem at resiliency frameworks, hooks for metrics don't seem to be a consideration. Polly, for example, has no metrics, and they're not easy to add.

@jodydonetti
Contributor

jodydonetti commented Mar 28, 2021

My 2 cents:

  • 👍 for a Clear() method; it's something cache users request a lot

  • I've very rarely seen a cache used with only one cache-wide specific TValue. I understand the boxing concerns, but a fixed TValue to me does not have real-world usage. On the contrary, to support that scenario I can imagine people coming up with higher-level abstractions over separate lower-level caches per type, but that would make the perf concerns about boxing basically evaporate vs having multiple caches, on top of creating potential problems coordinating the different caches for high-level operations (like the Clear() above)

  • I agree with @adamsitnik about having only one actual field for the expiration, but I have a different design proposal: instead of set-only props (which would make reading the expiration impossible), we could have a DateTimeOffset? AbsoluteExpiration { get; set; } and 2 set methods, SetAbsoluteExpiration(DateTimeOffset) and SetAbsoluteExpiration(TimeSpan), which would both write to the same place (see the sketch after this list)

  • In my experience the long? Size prop has created a lot of issues: if a SizeLimit is specified for the cache and a size is not specified for each entry, an exception is thrown, which is frequently unexpected. I would either make it not throw in that case or make the Size prop non-nullable, aka just a long (also less pressure on the stack), with a default value of 1. This would make any entry without a specific size be worth 1 generic unit, which seems reasonable even when specifying a SizeLimit
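
The sketch for the third bullet - one readable property, two Set methods, a single backing field (illustrative only):

public class CacheEntry<TValue>
{
    // Single backing value; both Set overloads normalize to an absolute instant.
    public DateTimeOffset? AbsoluteExpiration { get; private set; }

    public void SetAbsoluteExpiration(DateTimeOffset absolute)
        => AbsoluteExpiration = absolute;

    public void SetAbsoluteExpiration(TimeSpan relativeToNow)
        => AbsoluteExpiration = DateTimeOffset.UtcNow.Add(relativeToNow);
}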

One question: what would be the rationale behind implementing GetOrSet/Add via an extension method instead of baking the functionality in?

@roji
Member

roji commented Mar 29, 2021

I've very rarely seen a cache used with only one cache-wide specific TValue. I understand the boxing concerns, but a fixed TValue to me does not have real-world usage.

There are ample cases where people cache objects with specific key/value types (similarly to Dictionary). EF Core uses this internally in multiple places (see #48455).

@jodydonetti
Contributor

jodydonetti commented Mar 29, 2021

There are ample cases where people cache objects with specific key/value types (similarly to Dictionary). EF Core uses this internally in multiple places (see #48455).

I see what you are saying, but I think there is a big difference between what could be described as "a dictionary with an optional expiration logic" and "an application cache", and maybe we are trying to squeeze 2 different concepts into the same abstraction, which may be problematic.

I can see a partial convergence in the feature set, but imho they should remain separate.

Maybe the right thing would be to have 2 different types?

@davidfowl
Member Author

I agree with @adamsitnik about having only one actual field for the expiration, but I have a different design proposal: instead of set-only props (which would make reading the expiration impossible), we could have a DateTimeOffset? AbsoluteExpiration { get; set; } and 2 set methods, SetAbsoluteExpiration(DateTimeOffset) and SetAbsoluteExpiration(TimeSpan), which would both write to the same place

Seems fine.

In my experience the long? Size prop has created a lot of issues: if a SizeLimit is specified for the cache and a size is not specified for each entry, an exception is thrown, which is frequently unexpected. I would either make it not throw in that case or make the Size prop non-nullable, aka just a long (also less pressure on the stack), with a default value of 1. This would make any entry without a specific size be worth 1 generic unit, which seems reasonable even when specifying a SizeLimit

We may remove it on this generic cache and instead expose a specialized cache for strings and bytes where the count actually works.

One question: what would be the rationale behind the implementation of the GetOrSet/Add via an ext method instead of being a functionality baked in?

GetOrAdd can be built in but right now our implementation doesn't work well because we're missing the right primitives on the IMemoryCache to implement it.
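
For reference, a built-in version would presumably mirror ConcurrentDictionary<TKey, TValue>.GetOrAdd; a hypothetical signature on the proposed generic cache (not part of the proposal above):

// Would invoke the factory at most once per absent key - exactly the
// primitive the current IMemoryCache extension method can't guarantee:
public TValue GetOrAdd(TKey key, Func<TKey, CacheEntry<TValue>> valueFactory);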

@NickCraver
Member

Would we consider an AccessCount on CacheEntry<TValue> here? This has been essential in all our use cases for eliminating a lot of very low (and often zero) usage cache members, resulting in less GC work and lower memory-usage peaks. IMO it's something fantastic to have at scale, and the overhead wouldn't matter if you weren't at scale (esp. relative to the current interface-based cache primitives).

Overall, we found that there is a huge barrier to taking and analyzing a memory dump at scale. However, if we can expose this information in the app with a few tweaks like this, it becomes immensely more useful and actionable by far more people.

From a practical standpoint, we went from doing memory dumps and using ClrMd in LINQPad or other bits to having views like this live in production at any time:

[Five screenshots: live in-production views of the cache, including per-entry stats and random entry dumps]

(apologies for the bad developer UX, but you can see the info!)

If anyone's curious about the last one - it's the death by a thousand papercuts. We've found it very useful to just crawl the cache and dump some random entries to look for surprises. The * entries are patterns configured in code for anything common we know about, basically collapsing anything that matches a list of known prefixes, but really any name translation and collapsing could happen far outside the primitives as long as we can enumerate the cache.

Ideally, whatever primitives we move to here would allow such enumeration without that high "take a memory dump" bar. Currently, it doesn't seem possible because:

  1. GetEnumerator() returns the value directly, so in a full enumeration the CacheEntry<TValue> and its properties wouldn't be accessible.
  2. CacheEntry<TValue> doesn't have an access/usage count.
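
Concretely, both points could be addressed with an entry-level surface along these lines (hypothetical shape, following the proposal's signature style):

public class CacheEntry<TValue>
{
    // Incremented on each successful lookup; enables live views like the above.
    public long AccessCount { get; }
}

public class MemoryCache<TKey, TValue>
{
    // Enumerate entries rather than bare values, so AccessCount, expiration,
    // etc. are visible while crawling the cache in production:
    public IEnumerator<KeyValuePair<TKey, CacheEntry<TValue>>> GetEntryEnumerator();
}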

@danmoseley
Member

How would users make sense of which of the many memory caches they should use? Are neither of the existing cache APIs redeemable?

@davidfowl
Member Author

How would users make sense of which of the many memory caches they should use?

I don't think this matters. The whole point of this implementation is to be a lightweight concurrent dictionary with expiration that doesn't try to respond to memory pressure and start kicking out entries. The only reason I see to be concerned about having a single cache is for it to have a global, process-wide view of the state. That is already not the case today with .NET Core: applications have different caches with different policies. I think this is no different.

Are neither of the existing cache APIs redeemable?

I don't think so, but I haven't given it a fair chance.

@jodydonetti
Contributor

jodydonetti commented Apr 5, 2021

The whole point of this implementation is to be a lightweight concurrent dictionary with expiration that doesn't try to respond to memory pressure and start kicking out entries.

That is what I was talking about: being for a single TKey + TValue pair, I see this new type as an addition to our collections toolbelt - just like the new PriorityQueue - and not as a new "cache", where a cache is typically meant, imho, as an application-wide one with a non-fixed TValue. Also, not kicking out entries automatically maybe means there's no need for the Priority and Size + SizeLimit parts.

The use case is surely interesting, and I for one would like to have it at my disposal, but calling it the "new memory cache implementation" would be potentially misleading: maybe putting it in the System.Collections.Concurrent namespace and naming it something like ConcurrentExpirableDictionary, ConcurrentDictionaryWithExpiration, or simply ExpirableDictionary would be more aligned with what is already there and with users' expectations?

@davidfowl
Member Author

davidfowl commented Apr 5, 2021

FYI the current IMemoryCache does the same. To be clear, we already end up with multiple caches in the app, each with their own eviction policies.

I hear you on the size limit and generics though. The generics especially encourage more caches in the app itself and might cause confusion in some cases.

@roji
Member

roji commented Apr 6, 2021

FWIW in the EF Core usage, query compilation artifacts are being cached (rather than e.g. remote data). So expiration definitely isn't what we're looking for (artifacts remain valid forever), but rather a decent LRU which guarantees that memory consumption is capped in case there's an explosion of queries. Not saying this has to be the same type/problem being discussed here, though.

@jodydonetti
Contributor

jodydonetti commented Jun 9, 2021

@jodydonetti I think using a cache in the first form is really rare; normally you would just use a dictionary (ToDictionary) or hashset, sometimes a concurrent dictionary, especially since the add normally requires long-lived lifetimes and external IO.

Exactly, you would use something lower level (like a dict, etc.) "to cache some data" and not "as a cache service"; that was my point.
But as it turns out, people who need a "dict + date-based expiration" or "dict + size-based compaction" end up using the MemoryCache because it has those features, like @roji in EF Core (if I understood correctly).
This new type may fit that space, so it would in fact be not a "cache service" but a pumped-up dict.

The 2nd case is nearly all use cases, and in this case whether it's IoC (note: NOT DI) or not is irrelevant: a console app, an Azure/Lambda function, or a microservice with a global cache is the same as a larger app managed by an IoC container. The key thing is both are app-lifetime scoped.

Yep, I agree. And since in those use cases we are talking about something that is app-lifetime-scoped and shared (e.g. typically a singleton), that would be a "cache service" shared across the entire app, and that means a single TValue is probably not what's needed, at least to me.

Again, I think the main point of confusion and the big difference is between "a low level component to cache some data that can expire/be evicted in some way" (eg: a smarter dict or similar) and "a shared app-wide cache service". Both are useful, just different in scope and design.

If I misunderstood you let me know.

@paulomorgado
Contributor

@bklooste,

strings are not normally big compared to the value, and if you want tiny objects, as a consumer you're better off using the hash as a key.

It's mostly not about the length of the strings but the number of times you have to compute and allocate them. That's CPU consumption to compute the key, and memory and GC work to allocate/deallocate them.

I have some implementations that use the hash internally, but it's best done by the caller.

Beware that different strings might have the same hash code.

@vitek-karas
Member

Given that memory-pressure-based eviction seems to be an explicit non-goal here, I guess unloading is also a non-goal, but I wanted to make sure it's considered. It's another example of an eviction policy, but one which should typically be combined with some other eviction policy and almost never used on its own.

What I mean by unloading is making sure that all entries which have references to an AssemblyLoadContext which is being unloaded are evicted (the trigger would be the AssemblyLoadContext.Unloading event in this case).

I'm sure it's possible to build this on top, but then it becomes a question of how to make all the libraries in the app implement these additional eviction policies so that the app works as a whole (and for example can successfully unload a plugin).
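
For what it's worth, wiring this up on top of today's primitives looks roughly like the following (a sketch using the existing AssemblyLoadContext.Unloading event and CancellationChangeToken; the open question above is how to get every library to do this consistently):

using System.Runtime.Loader;
using System.Threading;
using Microsoft.Extensions.Caching.Memory;
using Microsoft.Extensions.Primitives;

static MemoryCacheEntryOptions EvictOnUnload(AssemblyLoadContext alc)
{
    var cts = new CancellationTokenSource();
    alc.Unloading += _ => cts.Cancel(); // fires when the ALC begins unloading

    var options = new MemoryCacheEntryOptions();
    options.ExpirationTokens.Add(new CancellationChangeToken(cts.Token));
    return options;
}

// Usage: cache.Set(key, pluginValue, EvictOnUnload(pluginContext));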

@NickCraver
Member

@NickCraver, doesn't having a single cache, usually with a string key, cause the creation of lots of strings just for indexing the cache?

Yep, for lack of a better option, generally. It's also very fast, though. For example, you could cache based on the hash of a tuple or something, but since that would happen on fetch, there is a computational cost to it. Is that cheaper than the GC cost? I'm not sure, but it's an interesting experiment to try at scale. Most alternatives I'm aware of have similar risks of hash collision if that's your lookup model, though. I'm all ears for better options - strings are simple, easy to understand, and fairly cheap to compute (though you pay the cost later) - that's not an argument for them over other things, it just places them high on the usability tradeoff scale today.

@NickCraver
Member

For what it's worth, I always thought a model that allows caching via a tuple with minimal allocations might be interesting, but I'm not sure how we'd address the collision issue or exactly what the interface for it would look like in the current type system. If you had n caches that approach works, but it becomes an n-cache management problem (at least for us - we'd have hundreds of caches at a minimum).

If we could have a cache that internally was exposed like (these arguments being a completely arbitrary example):

public MyCachedType Get(int userId, int categoryId)
{
    var key = ("SomeUniqueKey", userId, category); // Tuple here for cache key
    // ...something...
}

Today, we'd do something like:

public MyCachedType Get(int userId, int categoryId)
{
    var key = $"SomeUniquePattern:{userId.ToString()}:{categoryId.ToString()}";
    // ...something...
}

(so that it's using string.Concat in the end)

...anyway, I think that'd potentially be useful, but it would want a shared cache for such a variant key pattern. The return type from the cache would likely still be object and cast, as we do today.
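
With the generic proposal at the top of this issue, that tuple pattern would type-check directly and avoid the per-lookup string allocation. A sketch (MyCachedType and the key components are arbitrary, as above):

// Value-tuple key: structural equality, hash computed from the components,
// and no key string allocated on each lookup.
private static readonly MemoryCache<(string Prefix, int UserId, int CategoryId), MyCachedType> Cache = new();

public MyCachedType Get(int userId, int categoryId)
{
    Cache.TryGetValue(("SomeUniqueKey", userId, categoryId), out var value);
    return value;
}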

@ericsampson

It seems to me like there are two quite different use cases that people are looking for, and that a) trying to address both in one design might lead to a compromise, and b) the current discussion/proposal leans more heavily towards the "ConcurrentDictionary with expiration, caching objects with key/value types, N caches" scenario.
The other scenario is more the "application cache, one per app, key is often a string currently but maybe a tuple could work well, often used to cache things like API bearer tokens".

Should there be a separate proposal/interface for the latter application-cache use case, to prevent the design from getting muddled between the two different needs?
For instance, as David points out by referring to #36499, people get bitten by not realizing that the factory in GetOrCreate* is not re-entrant-proof, which matters when you're calling an external API or token endpoint that takes N seconds to respond.
Another angle to consider for this use case is whether it would be worthwhile to build in the "serve-stale" functionality that Nick Craver mentioned, as an extension if desired.

I just feel like the two use cases are sufficiently different that trying to address both in one API might make things messy.

Cheers!

@Turnerj

Turnerj commented Jun 26, 2021

One thing to consider is how much should be in the runtime by default. There are several caching libraries in the .NET ecosystem (FusionCache, CacheManager, Cache Tower, EasyCaching, LazyCache, MonkeyCache, and probably a bunch of others) which can handle the more complicated and feature-rich scenarios.

I am biased as the creator of one of those caching libraries, but my view is that it is the simple/common scenario that the MemoryCache implementation should aim for - the ConcurrentDictionary-with-expiration scenario. It should be fast, it should be straightforward, and a lot of people should use it; it just shouldn't be all-encompassing.

@zawor

zawor commented Jun 26, 2021

@ericsampson I think you nailed the core of the problem here.

From my standpoint, 99% of the cases would be solved by a ConcurrentDictionary with expiration, as they mainly revolve around small services where I simply don't want to hammer one particular resource too much, with as small an overhead as possible.

On the other hand, Nick's use case gives me meme vibes ;)

@ericsampson

ericsampson commented Jun 26, 2021

One thing to consider is how much should be in the runtime as default. There are several caching libraries in the .NET ecosystem (FusionCache, CacheManager, Cache Tower, EasyCaching, LazyCache, MonkeyCache, and probably a bunch of others) which can handle the more complicated and feature rich scenarios.

That's a fair point :) If the .NET/ASP docs for caching can list these community packages, that would go a long way.

I am biased as the creator of one of those caching libraries but my view is that it is the simple/common scenario that the MemoryCache implementation should aim for - the ConcurrentDictionary with expiration scenario. It should be fast, it should be straight forward and a lot of people should use it, it just shouldn't be all-encompassing.

"straight forward" is an important qualifier :)
Maybe the theoretical MemoryCache extension library could have
GetOrAddLazy*
alongside the current re-entrant factory version, to help discoverability in the IDE etc. Because that's the biggest current footgun for people IME. Cheers

@ohroy

ohroy commented Jul 23, 2021

+1
The current cache is hard to use; the cache stampede is a very serious problem.

@davidfowl
Member Author

The current cache is hard to use; the cache stampede is a very serious problem.

It's possible to work around it manually.

@jodydonetti
Contributor

The current cache is hard to use; the cache stampede is a very serious problem.

If you'd like to avoid the cache stampede problem you can take a look at some alternatives (in alphabetical order):

Hope this helps.

@ohroy

ohroy commented Jul 23, 2021

The current cache is hard to use; the cache stampede is a very serious problem.

It's possible to work around it manually.

Sure, it can be worked around with Lazy and locks etc., but not everyone realizes they may have a cache stampede problem, which can lead to serious consequences.
Either way, it is not easy to use.
I very much hope for a built-in solution. Thank you for this proposal.

@molinch

molinch commented Jan 20, 2022

The current cache is hard to use; the cache stampede is a very serious problem.

It's possible to work around it manually.

I know it's a fairly broad question where the implementation may depend on use cases, but what solution would you suggest when the cache key is dynamic? (For example, when you want to cache fetched user permissions, the cache key could be user-17, user-18, ...) Having a SemaphoreSlim per key seems complicated and in the long run implies a memory leak. Relying on Lazy<T> semantics may work too, depending on how the current MemoryCache is implemented.

@jodydonetti
Contributor

The current cache is hard to use; the cache stampede is a very serious problem.

It's possible to work around it manually.

I know it's a fairly broad question where the implementation may depend on use cases, but what solution would you suggest when the cache key is dynamic? (For example, when you want to cache fetched user permissions, the cache key could be user-17, user-18, ...) Having a SemaphoreSlim per key seems complicated and in the long run implies a memory leak. Relying on Lazy<T> semantics may work too, depending on how the current MemoryCache is implemented.

One of these: #48567 (comment)?

@molinch

molinch commented Jan 21, 2022

Thanks @jodydonetti. Since our need is simply an IMemoryCache without the cache stampede issue, nothing more, nothing less, we went with @StephenCleary's solution in the end: https://gist.github.com/StephenCleary/39a2cd0aa3c705a984a4dbbea8275fe9

I like this solution; it's a slim wrapper on top of IMemoryCache and you can easily follow the code.
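
The core of that wrapper is the well-known Lazy trick: cache a Lazy<Task<T>> so concurrent callers for the same key share one factory invocation. A minimal sketch over IMemoryCache in the same spirit (not the gist verbatim):

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Memory;

public static class LazyCachingExtensions
{
    public static Task<T> GetOrCreateLazyAsync<T>(
        this IMemoryCache cache, object key, TimeSpan expiration, Func<Task<T>> factory)
    {
        // Callers that observe the same Lazy share a single factory invocation.
        // A narrow race on a cold key can still create more than one Lazy;
        // a lock around the miss path would close that gap.
        var lazy = cache.GetOrCreate(key, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = expiration;
            return new Lazy<Task<T>>(factory);
        });
        return lazy!.Value;
    }
}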

@robertbaumann

Consider something extensible enough to support the Azure caching guidance:

  • ICache<TKey, TValue>: This is the base interface that would be the dependency for code
    • Has simple Add(TKey, TValue), Remove(TKey), TValue Get(TKey) methods as well as async flavors of those methods to await retrieval from a remote source
    • Concrete implementations for ICache<TKey, TValue>, e.g. MemoryCache, RedisCache
  • MemoryCache<TKey, TValue>: Wraps a ConcurrentDictionary with expiration
    • Default expiration policy for cache entries passed in during initialization
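
A sketch of that shape (hypothetical, following the comment's naming):

using System.Threading;
using System.Threading.Tasks;

public interface ICache<TKey, TValue>
{
    void Add(TKey key, TValue value);
    bool Remove(TKey key);
    TValue Get(TKey key);

    // Async flavors, awaiting retrieval from a remote source (e.g. RedisCache):
    Task AddAsync(TKey key, TValue value, CancellationToken cancellationToken = default);
    Task<bool> RemoveAsync(TKey key, CancellationToken cancellationToken = default);
    Task<TValue> GetAsync(TKey key, CancellationToken cancellationToken = default);
}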

@davidfowl
Member Author

There's a proposal for a new cache implementation here: dotnet/extensions#4766. Can those interested review it and leave comments?

@jodydonetti
Contributor

Thanks David, will do!

@austinmfb

Interesting that there is no mention of managing cache dependencies in this thread. Not that it would have to be baked into a new cache class, but it still seems like a relevant design consideration. Is nobody really using a consistent pattern for this that they want their cache class to handle? Is everyone just opting for inline code in each application component that requires this, managing its specific set of dependent caches?

@JoelDavidLang

ICacheEntry uses an unintuitive pattern for adding new entries

I was bitten by this just recently. I did not see any documentation on CreateEntry() or ICacheEntry stating that the object needs to be disposed to commit it to the cache. As a result, the code I initially wrote didn't have working caching!

I would like to see this documentation clarified for the current version of the cache at least.
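
For anyone else who hits this: with the current API, the entry is only committed when the ICacheEntry is disposed, so the working pattern is (cache and expensiveResult are placeholders):

using Microsoft.Extensions.Caching.Memory;

// CreateEntry returns an ICacheEntry that is only added to the cache when it
// is disposed; without the using block, nothing is cached and every lookup misses.
using (ICacheEntry entry = cache.CreateEntry("some-key"))
{
    entry.Value = expensiveResult;
    entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5);
}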
