Commit

Docs

jodydonetti committed Jun 8, 2024
1 parent 42c7ce6 commit 7c05461
Showing 8 changed files with 110 additions and 2 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -73,6 +73,7 @@ These are the **key features** of FusionCache:
- [**🔃 Dependency Injection + Builder**](docs/DependencyInjection.md): native support for Dependency Injection, with a nice fluent interface including a Builder support
- [**📛 Named Caches**](docs/NamedCaches.md): easily work with multiple named caches, even if differently configured
- [**🔭 OpenTelemetry**](docs/OpenTelemetry.md): native observability support via OpenTelemetry
- [**🚀 Background Distributed Operations**](docs/BackgroundDistributedOperations.md): distributed operations can easily be executed in the background, safely, for better performance
- [**📜 Logging**](docs/Logging.md): comprehensive, structured and customizable, via the standard `ILogger` interface
- [**💫 Fully sync/async**](docs/CoreMethods.md): native support for both the synchronous and asynchronous programming model
- [**📞 Events**](docs/Events.md): a comprehensive set of events, both at a high level and at lower levels (memory/distributed)
96 changes: 96 additions & 0 deletions docs/BackgroundDistributedOperations.md
@@ -0,0 +1,96 @@
<div align="center">

![FusionCache logo](logo-128x128.png)

</div>

# 🚀 Background Distributed Operations

| ⚡ TL;DR (quick version) |
| -------- |
| FusionCache can execute most distributed operations in the background, to avoid having to wait for them to finish, and thanks to Auto-Recovery, any transient error will be automatically managed for us. |

When we scale horizontally we go multi-node, and when we go multi-node we have to introduce distributed components; and when we talk about distributed components we are talking about the **distributed cache** and the **backplane**.

These components can help us scale our infrastructure horizontally, by distributing the load on multiple nodes instead of scaling vertically by buying more powerful servers.

## 🐌 Do More, Wait More

This is all good but, as we know from the [Fallacies Of Distributed Computing](https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing), there are also potential drawbacks: bandwidth is not infinite, the network is not always reliable and so on. Even when everything works as expected, latency is not zero and in general every operation has a cost, not just in memory allocated but in time spent: this, in turn, means that our method calls end up being **slower**.

Now, of course the extra time spent on the distributed cache and backplane operations is well spent, since it allows multiple nodes to work together in harmony and keep the cache as a whole synchronized.

Wouldn't it be nice if, as they say, we could _"have our cake and eat it too"_?

It turns out that yes, yes we can!

## ⚡ Do More, Wait Less!

It's possible to enable background execution of _most_ distributed operations, meaning distributed cache operations and backplane operations, with 2 simple options:
- `AllowBackgroundDistributedCacheOperations`
- `AllowBackgroundBackplaneOperations`

As the names imply, these two options allow _most_ distributed cache and backplane operations to be executed in the background.

This means that our `GetOrSet()` calls can return sooner, making our apps and services even faster.

Since they are _entry options_, we can set them granularly per-call or, if we want, set them in the `DefaultEntryOptions` so that they will be applied by default to any call (and, as always, we can override them in some method calls if we need to).
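
For instance, here's a minimal sketch of what this could look like (the `Product` type, the cache key and the `GetProductFromDbAsync` factory are just hypothetical placeholders, and we're assuming the builder's `WithDefaultEntryOptions` and the options-lambda overload of `GetOrSetAsync`):

```csharp
// Set the defaults once, at startup: every call will use them unless overridden
services.AddFusionCache()
	.WithDefaultEntryOptions(options =>
	{
		options.AllowBackgroundDistributedCacheOperations = true;
		options.AllowBackgroundBackplaneOperations = true;
	});

// Later on, override them granularly for a single call where we prefer to wait
var product = await cache.GetOrSetAsync<Product>(
	"product:42",
	async ct => await GetProductFromDbAsync(42, ct),
	options =>
	{
		options.AllowBackgroundDistributedCacheOperations = false;
		options.AllowBackgroundBackplaneOperations = false;
	}
);
```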

Pretty easy, huh 😏 ?

Finally, we should keep in mind that the distributed operations are basically a 2-step process: FusionCache updates the distributed cache and, only when that has succeeded, it notifies the other nodes via the backplane. This is because, if it notified the other nodes of an updated value before that value had been saved into the distributed cache, the other nodes would read the old one, making the entire workflow useless.

> [!NOTE]
> Since backplane notifications make sense only **after** the distributed cache has been updated, it does not make any sense to set `AllowBackgroundDistributedCacheOperations` to `true` and `AllowBackgroundBackplaneOperations` to `false`: we can't skip waiting for the first step and then wait for the step that comes after it, right?

## 👩‍💻 Examples

Now, to get an idea of what this means, let's look at the following 3 examples:

![Various Distributed Operations Execution Modes](images/background-distributed-operations.png)

For all 3 examples let's suppose `100 ms` to execute the factory, `1 ms` to save the result in the memory cache, `20 ms` to save the result in the distributed cache and another `20 ms` to notify the other nodes via the backplane (all numbers are, of course, simple and nicely rounded examples).

### Example 1 (left)

This is what happens with both options set to `false`: FusionCache will wait for both the distributed cache operation and the backplane operation to finish before returning.

Execution time: `141 ms`.

### Example 2 (center)

This is what happens when `AllowBackgroundBackplaneOperations` is set to `true` but `AllowBackgroundDistributedCacheOperations` is set to `false`: FusionCache will wait for the distributed cache operation but, after that, it will not wait for the backplane operation.

Execution time: `121 ms`.

### Example 3 (right)

This is what happens when both options are set to `true`: FusionCache will save the result to the memory cache but, after that, it will immediately return and let the distributed operations run in the background.

Execution time: `101 ms`.
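
To get a feel for the difference ourselves, a quick way is to simply time the same call under the different option combinations, for example with a `Stopwatch` (from `System.Diagnostics`); the key, type and factory below are, again, just hypothetical placeholders:

```csharp
var sw = Stopwatch.StartNew();

// With both options set to true this returns right after the memory cache is
// updated (Example 3); with both set to false it waits for everything (Example 1)
var product = await cache.GetOrSetAsync<Product>(
	"product:42",
	async ct => await GetProductFromDbAsync(42, ct)
);

sw.Stop();
Console.WriteLine($"GetOrSetAsync took {sw.ElapsedMilliseconds} ms");
```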

## 🤔 Transient Problems

But wait, hold on: what happens if there are transient issues while updating the distributed cache or while sending notifications on the backplane? Shouldn't they be handled in some way?

Yes, in fact they should, but not by us, and not manually!

With FusionCache we can count on [Auto-Recovery](AutoRecovery.md) to take care of all these issues, so that we can get the best of both worlds: faster execution + automatic handling of transient errors, without having to do anything more than calling our trusted `GetOrSet()`, `Set()` or `Remove()` methods (any method is supported, of course).

That's all 🥳

## 👍 Reasonable Defaults

Currently the defaults are:
- `AllowBackgroundDistributedCacheOperations`: `false`
- `AllowBackgroundBackplaneOperations`: `true`

Why is that?

The idea is that, in a multi-node scenario, multiple sequential calls from the same user frequently go through different nodes, and not all nodes have the same value cached locally (in the memory cache, L1): this means that, by waiting at least for the distributed cache operation after an update, even if the next call goes through a different node where the value is not cached, the new value will automatically be picked up from the distributed cache. Backplane operations are also usually way faster than a distributed cache save, so it's very unlikely that the system will observe an unsynchronized state.

In general, these defaults strike a good balance overall, all things considered.

Having said that, if we want an even stricter guarantee of full synchronization of the global cached state, we can set them both to `false`. If, on the other hand, we can accept the opposite, we can set them both to `true`.
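
As a sketch, and reusing the hypothetical builder setup from above, the two ends of the spectrum would look something like this (pick one, not both):

```csharp
// Stricter: wait for both the distributed cache and the backplane on every call
services.AddFusionCache()
	.WithDefaultEntryOptions(options =>
	{
		options.AllowBackgroundDistributedCacheOperations = false;
		options.AllowBackgroundBackplaneOperations = false;
	});

// Faster: run both in the background, relying on Auto-Recovery for transient errors
services.AddFusionCache()
	.WithDefaultEntryOptions(options =>
	{
		options.AllowBackgroundDistributedCacheOperations = true;
		options.AllowBackgroundBackplaneOperations = true;
	});
```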

Personally, I set them both to `true` quite often, but that wholly depends on the specifics of the scenario we are working on and its constraints.
1 change: 1 addition & 0 deletions docs/Backplane.md
@@ -41,6 +41,7 @@ As an example, let's look at the flow of a `GetOrSet` operation with 3 nodes (`N

As we can see we didn't have to do anything more than usual: everything else is done automatically for us.

Finally we can even execute the backplane operations in the background, to make things even faster: we can read more on the related [docs page](BackgroundDistributedOperations.md).

## 📩 Notifications: then what?

5 changes: 3 additions & 2 deletions docs/CacheLevels.md
@@ -57,8 +57,9 @@ In the end it basically boils down to 2 possible ways:

Of course in both cases you will also have at your disposal the added ability to enable extra features, like [fail-safe](FailSafe.md), advanced [timeouts](Timeouts.md) and so on.

Finally, if needed, we can also use a different `Duration` specific for the distributed cache via the `DistributedCacheDuration` option: in this way updates to the distributed cache can be picked up more frequently, in case we don't want to use a [backplane](Backplane.md) for some reason.
Also, if needed, we can use a different `Duration` specific for the distributed cache via the `DistributedCacheDuration` option: in this way updates to the distributed cache can be picked up more frequently, in case we don't want to use a [backplane](Backplane.md) for some reason.

Finally we can even execute the distributed cache operations in the background, to make things even faster: we can read more on the related [docs page](BackgroundDistributedOperations.md).

## 📢 Backplane

@@ -105,7 +106,7 @@ Yes, totally, and there's a [dedicated page](DiskCache.md) to learn more.

Since the distributed cache is a distributed component (just like the backplane), most of the transient errors that may occur on it are also covered by the Auto-Recovery feature.

We can readm more on the related [docs page](AutoRecovery.md).
We can read more on the related [docs page](AutoRecovery.md).

## 📦 Packages

4 changes: 4 additions & 0 deletions src/ZiggyCreatures.FusionCache/FusionCacheEntryOptions.cs
@@ -216,6 +216,8 @@ public float? EagerRefreshThreshold
/// <strong>TL/DR:</strong> set this flag to <see langword="true"/> for a perf boost, but watch out for rare side effects.
/// <br/><br/>
/// <strong>DOCS:</strong> <see href="https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/CacheLevels.md"/>
/// <br/><br/>
/// <strong>DOCS:</strong> <see href="https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/BackgroundDistributedOperations.md"/>
/// </summary>
public bool AllowBackgroundDistributedCacheOperations { get; set; }

@@ -267,6 +269,8 @@ public bool EnableBackplaneNotifications
/// <strong>TL/DR:</strong> if you want to wait for backplane operations to complete, set this flag to <see langword="false"/>.
/// <br/><br/>
/// <strong>DOCS:</strong> <see href="https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/Backplane.md"/>
/// <br/><br/>
/// <strong>DOCS:</strong> <see href="https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/BackgroundDistributedOperations.md"/>
/// </summary>
public bool AllowBackgroundBackplaneOperations { get; set; }

4 changes: 4 additions & 0 deletions src/ZiggyCreatures.FusionCache/ZiggyCreatures.FusionCache.xml

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions src/ZiggyCreatures.FusionCache/docs/README.md
@@ -47,6 +47,7 @@ These are the **key features** of FusionCache:
- [**🔃 Dependency Injection + Builder**](https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/DependencyInjection.md): native support for Dependency Injection, with a nice fluent interface including a Builder support
- [**📛 Named Caches**](https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/NamedCaches.md): easily work with multiple named caches, even if differently configured
- [**🔭 OpenTelemetry**](https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/OpenTelemetry.md): native observability support via OpenTelemetry
- [**🚀 Background Distributed Operations**](https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/BackgroundDistributedOperations.md): distributed operations can easily be executed in the background, safely, for better performance
- [**📜 Logging**](https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/Logging.md): comprehensive, structured and customizable, via the standard `ILogger` interface
- [**💫 Fully sync/async**](https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/CoreMethods.md): native support for both the synchronous and asynchronous programming model
- [**📞 Events**](https://github.com/ZiggyCreatures/FusionCache/blob/main/docs/Events.md): a comprehensive set of events, both at a high level and at lower levels (memory/distributed)
