Commit 813b856: refine

ekexium committed Aug 28, 2024
Signed-off-by: ekexium <[email protected]>
1 parent 4fa64e3
Showing 1 changed file with 80 additions and 43 deletions: text/0114-resolved-ts-for-large-transactions.md

Tracking issue: N/A

## Background

This RFC is a variation and extension of @zhangjinpeng87's [Large Transactions Don't Block Watermark](https://github.com/pingcap/tiflow/blob/master/docs/design/2024-01-22-ticdc-large-txn-not-block-wm.md). They aim to solve the same problem.

Resolved-ts is a mechanism that provides a temporal guarantee. It is defined as follows: once a resolved timestamp T is observed by any component of the system, no new commit record with a commit timestamp less than or equal to T will subsequently be produced or observed by any component of the system.

In current TiKV (v8.3), large transactions can block resolved-ts from advancing, because resolved-ts is calculated as `min(pd-tso, min(lock.ts))`, which is a more stringent constraint than the definition requires. A lock from a pipelined transaction can live for several hours, which makes services that depend on resolved-ts unavailable.

## Goals

- Current phase objectives:
  - Prevent pipelined transactions from impeding resolved-ts advancement.
- Long-term goals:
  - Minimize the overhead of resolved-ts maintenance.
  - Maximize resolved-ts freshness.
  - Achieve uninterrupted resolved-ts progression, addressing all potential blocking factors beyond long transactions and their associated locks. (Further details are discussed in the final section of this proposal.)

## Design

Key concept: use `lock.min_commit_ts` instead of `lock.start_ts` for the resolved-ts calculation.

Rationale:
1. The resolved-ts definition is independent of the LOCK CF.
2. The current use of `lock.start_ts` relies on the invariant `lock.start_ts` < `lock.commit_ts`.
3. A valid resolved-ts doesn't require the absence of locks with smaller start_ts, provided their future commit_ts values are guaranteed to be greater than the resolved-ts.

Advantages of `lock.min_commit_ts`:
1. It satisfies a similar invariant: `lock.min_commit_ts` <= `lock.commit_ts`.
2. Unlike the static `lock.start_ts`, `lock.min_commit_ts` can be dynamically increased.
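
As a minimal illustration of the second advantage (the types and helpers below are hypothetical, not TiKV's actual ones), the bound a lock imposes on resolved-ts is frozen when derived from `start_ts`, but keeps advancing when derived from `min_commit_ts` as the coordinator pushes it forward:

```go
package resolvedts

// lockInfo is a hypothetical lock record used only for illustration.
type lockInfo struct {
	startTS     uint64 // fixed at transaction start
	minCommitTS uint64 // invariant: minCommitTS <= eventual commit_ts; may be pushed forward
}

// boundByStartTS is the bound a lock imposes on resolved-ts in the current scheme:
// it stays frozen for the whole lifetime of the lock.
func boundByStartTS(l lockInfo) uint64 { return l.startTS }

// boundByMinCommitTS is the bound under the proposed scheme: since the eventual
// commit_ts is at least minCommitTS, resolved-ts may advance up to minCommitTS-1.
func boundByMinCommitTS(l lockInfo) uint64 { return l.minCommitTS - 1 }

// pushMinCommitTS is what the coordinator does periodically with a fresh TSO,
// so boundByMinCommitTS keeps moving forward while the lock still exists.
func pushMinCommitTS(l *lockInfo, freshTSO uint64) {
	if freshTSO > l.minCommitTS {
		l.minCommitTS = freshTSO
	}
}
```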

### Maintenance of resolved-ts

Key objective: maximize all TiKV nodes' awareness of large pipelined transactions throughout their lifetime, i.e., from their first writes until all locks are committed. The following information is necessary:

1. `start_ts`
2. A fresh enough `min_commit_ts`
3. Status

#### Coordinator

For a pipelined transaction, its TTL manager is responsible for fetching the latest TSO as a candidate min_commit_ts and updating both the committer's inner state and the PK in TiKV. After that, it broadcasts the `start_ts` and the new `min_commit_ts` to all TiKV stores. The PK update can be piggybacked on the `heartbeat` request.

Optionally, to avoid excessive RPC overhead, the broadcast messages from different transactions can be batched.

Atomic variables or locks may be needed for synchronization between the TTL manager and the committer.
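
A minimal sketch of the coordinator-side loop described above, assuming illustrative names (`ttlManager`, `GetTS`, etc.) rather than the real client-go API:

```go
package keepalive

import (
	"context"
	"time"
)

// All types below are illustrative stand-ins, not the actual client-go types.
type tsoClient interface {
	GetTS(ctx context.Context) (uint64, error)
}

type ttlManager struct {
	startTS   uint64
	pd        tsoClient
	interval  time.Duration
	stores    func() []string                                                   // backed by the region cache
	heartbeat func(ctx context.Context, minCommitTS uint64) error               // piggybacks the PK update
	broadcast func(ctx context.Context, store string, startTS, minCommitTS uint64) error
}

// keepAliveLargeTxn periodically fetches a fresh TSO as the new min_commit_ts
// candidate, updates the PK via the heartbeat request, and broadcasts
// (start_ts, min_commit_ts) to every TiKV store.
func (m *ttlManager) keepAliveLargeTxn(ctx context.Context) {
	ticker := time.NewTicker(m.interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			tso, err := m.pd.GetTS(ctx)
			if err != nil {
				continue // transient PD error: retry on the next tick
			}
			if err := m.heartbeat(ctx, tso); err != nil {
				continue
			}
			// Broadcasts from different transactions could be batched to cut RPC overhead.
			for _, s := range m.stores() {
				_ = m.broadcast(ctx, s, m.startTS, tso)
			}
		}
	}
}
```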

#### Scaling out TiKVs

Challenge:
During cluster expansion, when a new TiKV instance joins mid-transaction, the TTL manager must promptly add it to its broadcast list. The TTL manager relies on the region cache for store information; if the region cache is unaware of a newly added TiKV, the TTL manager may omit it from broadcasts.

One solution is to have a background goroutine in the region cache periodically refresh the complete store list.
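
A sketch of such a routine, with hypothetical names standing in for the real region cache and PD client:

```go
package regioncache

import (
	"context"
	"sync"
	"time"
)

// storeLister is an illustrative stand-in for the PD client call that lists all stores.
type storeLister interface {
	GetAllStores(ctx context.Context) ([]string, error)
}

type regionCache struct {
	pd     storeLister
	mu     sync.RWMutex
	stores []string
}

// startStoreRefresher periodically refreshes the complete store list so that
// newly added TiKV instances show up in the TTL manager's broadcast targets.
func (c *regionCache) startStoreRefresher(ctx context.Context, interval time.Duration) {
	go func() {
		ticker := time.NewTicker(interval)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				stores, err := c.pd.GetAllStores(ctx)
				if err != nil {
					continue // transient error: retry on the next tick
				}
				c.mu.Lock()
				c.stores = stores
				c.mu.Unlock()
			}
		}
	}()
}
```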

#### TiKV scheduler - heartbeat

Besides updating the TTL, it also supports updating the min_commit_ts of the PK.

*TBD: should it also update max_ts?*

#### TiKV txn_status_cache

A standalone component is created specifically for large transactions. The cache serves two purposes:

1. Provides up-to-date `min_commit_ts` information for large transactions to the resolved-ts resolver.
2. Offers an optimized path for read requests, reducing the need to query the PK for transaction status.

Cache Management Strategy:

1. Retention policy:
   - Maximize retention of useful information.
   - No eviction based on space constraints, leveraging the compact entry structure.
   - Assumption: Limited number of concurrent large transactions.

2. TTL management:
   - Implement a substantial default TTL for cache entries.
   - Rationale: Minimize redundant operations when readers encounter locks from these transactions.

3. Post-commit procedure:
   - Upon successful commit of all secondary locks in a large transaction:
     a. The coordinator broadcasts a TTL update to all TiKV nodes.
     b. The TTL is extended by several seconds.
   - Purpose: Allow follower peers time to catch up with the leader.
   - Caution: Immediate eviction may lead to stale reads encountering locks and missing the cache.
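
A minimal sketch of what such a cache could look like. It is written in Go for readability, although the actual component lives inside TiKV (Rust); all names are illustrative:

```go
package txnstatuscache

import (
	"sync"
	"time"
)

// txnStatus is a hypothetical cache entry: a fresh min_commit_ts, the
// transaction status, and a TTL deadline refreshed by coordinator broadcasts.
type txnStatus struct {
	minCommitTS uint64
	committed   bool // or a richer status enum
	deadline    time.Time
}

// txnStatusCache is keyed by start_ts. Per the retention policy there is no
// space-based eviction; expired entries are simply treated as misses.
type txnStatusCache struct {
	mu      sync.RWMutex
	entries map[uint64]txnStatus
}

func newTxnStatusCache() *txnStatusCache {
	return &txnStatusCache{entries: make(map[uint64]txnStatus)}
}

// upsert keeps the freshest min_commit_ts and the longest TTL seen so far.
func (c *txnStatusCache) upsert(startTS uint64, s txnStatus) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if old, ok := c.entries[startTS]; ok {
		if old.minCommitTS > s.minCommitTS {
			s.minCommitTS = old.minCommitTS
		}
		if old.deadline.After(s.deadline) {
			s.deadline = old.deadline
		}
	}
	c.entries[startTS] = s
}

// get returns the cached status, treating expired entries as misses.
func (c *txnStatusCache) get(startTS uint64, now time.Time) (txnStatus, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	s, ok := c.entries[startTS]
	if !ok || now.After(s.deadline) {
		return txnStatus{}, false
	}
	return s, true
}
```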

#### TiKV resolved-ts Resolver

Operational Mechanism:

1. Standard lock handling:
   - Tracks normal locks using conventional methods.

2. Large pipelined transaction locks:
   - Identified by the "generation" field.
   - Tracks only the start_ts of such locks.

Resolved-ts calculation:

- Primary method:
  - Attempts to map start_ts to min_commit_ts via the txn_status_cache.
  - Uses min_commit_ts - 1 as that lock's contribution to the resolved-ts candidate, then takes max(candidate, current_resolved_ts) so that resolved-ts never regresses.
- Fallback method:
  - Uses start_ts for the calculation if the cache lookup fails.

We preserve the resolved-ts semantics by ensuring that min_commit_ts is always greater than or equal to resolved-ts + 1: since the eventual commit_ts is at least min_commit_ts, it is strictly greater than the resolved-ts.

When the resolver observes a LOCK DELETION event, it immediately ceases tracking the corresponding start_ts for large pipelined transactions. This is justified because lock deletion only occurs once the transaction's final state has been determined; stopping tracking at that point keeps the resolver's state bounded and its view of active transactions up to date.
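
Putting the rules above together, a sketch of the per-region calculation (all names are hypothetical; the real resolver lives inside TiKV):

```go
package resolver

// calcResolvedTS folds tracked locks into a resolved-ts candidate. Locks of
// large pipelined transactions are mapped to a fresher min_commit_ts through
// the txn_status_cache when possible; a cache miss falls back to start_ts.
func calcResolvedTS(
	pdTSO uint64, // upper bound from PD
	currentResolvedTS uint64, // last published resolved-ts
	normalLockStartTS []uint64, // locks tracked the conventional way
	largeTxnStartTS []uint64, // locks carrying the "generation" field
	lookupMinCommitTS func(startTS uint64) (uint64, bool), // txn_status_cache lookup
) uint64 {
	minTS := func(a, b uint64) uint64 {
		if a < b {
			return a
		}
		return b
	}
	candidate := pdTSO
	for _, ts := range normalLockStartTS {
		candidate = minTS(candidate, ts)
	}
	for _, ts := range largeTxnStartTS {
		if mct, ok := lookupMinCommitTS(ts); ok {
			// commit_ts >= min_commit_ts, so resolved-ts must stay strictly below it.
			candidate = minTS(candidate, mct-1)
		} else {
			// Cache miss: fall back to the conservative start_ts bound.
			candidate = minTS(candidate, ts)
		}
	}
	// resolved-ts never moves backwards.
	if candidate < currentResolvedTS {
		return currentResolvedTS
	}
	return candidate
}
```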

### Upgrading TiKV

This design is a non-intrusive modification, so the upgrade process requires no special handling. On a cache miss, the system automatically falls back to the original approach, ensuring backward compatibility.

### Benefits in resolving locks

Across all lock resolution scenarios (normal reads, stale reads, flashbacks, and potentially write conflicts), a preliminary txn_status_cache lookup can significantly reduce the unnecessary overhead introduced by large transactions.

### Compatibility

The key difference is that services can now observe more locks.

Note that the current implementation already permits encountering locks with timestamps smaller than the resolved-ts. This proposal preserves that behavior, so we do not anticipate any correctness issues from this change; the principal challenges are performance and availability.

#### Stale read

When it meets a lock, first query the txn_status_cache. When not found in the cache, …

#### EBS snapshot backups

Its only dependency on resolved-ts is its use of Flashback.

#### CDC

Already well documented in [Large Transactions Don't Block Watermark](https://github.com/pingcap/tiflow/blob/master/docs/design/2024-01-22-ticdc-large-txn-not-block-wm.md).

Memory: each cache entry takes at least 8 (start_ts) + 8 (min_commit_ts) + 1 (status) + 8 (TTL) = 25 bytes of payload, plus hash-map overhead. Any TiKV instance can easily hold millions of such entries; for example, one million entries occupy on the order of tens of MB.

Latency: the additional operations required for resolved-ts maintenance can be executed asynchronously, so they do not affect request latency.

RPCs: each large transaction sends N more RPCs per second, where N is the number of TiKV stores. Batching can greatly reduce this overhead.

CPU: the mechanism may consume more CPU, but the increase should be negligible.



## Possible future improvements

### Tolerate lagging non-pipelined transactions

To get closer to our ultimate goal of minimizing the blocking of resolved-ts, we further consider the case where resolved-ts is blocked by normal transaction locks. Typical causes include:

- Situation-1: Memory locks from async commit and 1PC. Normal locks are region-partitioned and will not block the resolved-ts of other regions, but the concurrency manager is a node-level instance, so memory locks can block every (leader) region on the same TiKV.
- Situation-2: Slow transactions that take too much time committing their locks.
- Situation-3: Long-running transactions that may not be large.
- Situation-4: Node failures, network jitter, etc.

#### Approach-1: resolver pushing min_commit_ts

Resolved-ts must continuously progress. However, it can't advance autonomously while ignoring locks. Such advancement would require the commit PK operation to either complete before the resolved-ts reaches a certain point or fail. This guarantee is not feasible.

Locks belonging to the same transaction can be consolidated.

To mitigate uncontrollable overhead and metastability risks, we limit our check to K transactions per region with the lowest min_commit_ts values. This approach is necessary given the potentially substantial total number of transactions.
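
A sketch of that per-region candidate selection, using hypothetical helpers: a bounded max-heap keeps only the K transactions with the lowest min_commit_ts values.

```go
package resolver

import "container/heap"

// maxHeap is a max-heap over min_commit_ts values, capped at size K, so the
// root is always the largest of the K smallest values seen so far.
type maxHeap []uint64

func (h maxHeap) Len() int            { return len(h) }
func (h maxHeap) Less(i, j int) bool  { return h[i] > h[j] }
func (h maxHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *maxHeap) Push(x interface{}) { *h = append(*h, x.(uint64)) }
func (h *maxHeap) Pop() interface{} {
	old := *h
	n := len(old)
	x := old[n-1]
	*h = old[:n-1]
	return x
}

// lowestK returns the k smallest min_commit_ts values among the candidates in
// O(n log k) time, bounding the cost of the periodic check per region.
func lowestK(minCommitTS []uint64, k int) []uint64 {
	if k <= 0 {
		return nil
	}
	h := &maxHeap{}
	for _, ts := range minCommitTS {
		switch {
		case h.Len() < k:
			heap.Push(h, ts)
		case ts < (*h)[0]:
			heap.Pop(h)
			heap.Push(h, ts)
		}
	}
	return []uint64(*h)
}
```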

#### Approach-2: long-running transactions setting min_commit_ts

If a transaction has already been running for a long time and is not using async commit or 1PC, it must fetch the latest TSO as its min_commit_ts before it starts prewriting. This prevents these short-lived locks from blocking resolved-ts, whether they are memory locks or not.
