Merge pull request #320 from Morgan279/update/v6.1
Update/v6.1
sunxiaoguang authored Jul 7, 2022
2 parents 3795d10 + 3ea9dac commit 5ae34ff
Showing 75 changed files with 9,329 additions and 15 deletions.
3 changes: 2 additions & 1 deletion config.yaml
@@ -34,9 +34,10 @@ params:
favicon: "favicon.png"
googleAnalyticsId: "UA-130734531-1"
versions:
latest: "5.1"
latest: "6.1"
all:
- "dev"
- "6.1"
- "5.1"
- "4.0"
- "3.0"
@@ -98,7 +98,7 @@ To understand the replication in TiKV, it is important to review several concepts
1. Open the Grafana at [http://localhost:3000](http://localhost:3000) (printed from the `tiup-playground` command), and then log in to Grafana using username `admin` and password `admin`.
2. On the **playground-overview** dashboard, check the matrices on the **Region** panel in the **TiKV** tab. You can see that the numbers of Regions on all three nodes are the same, which indicates the following:
2. On the **playground-overview** dashboard, check the metrics on the **Region** panel in the **TiKV** tab. You can see that the numbers of Regions on all three nodes are the same, which indicates the following:
* There is only one Region. It contains the data imported by `go-ycsb`.
* Each Region has 3 replicas (according to the default configuration).
@@ -173,4 +173,4 @@ If you do not need the local TiKV cluster anymore, you can stop and delete it.

```sh
tiup clean --all
```
```
4 changes: 2 additions & 2 deletions content/docs/5.1/deploy/configure/topology.md
@@ -1,5 +1,5 @@
---
title: Topology Lable Config
title: Topology Label Config
description: Learn how to configure topology labels.
menu:
  "5.1":
@@ -94,4 +94,4 @@ $ pd-ctl
Now, PD schedules replicas of the same `Region` to different data zones.

- Even if one data zone goes down, the TiKV cluster is still highly available.
- If the data zone cannot recover within a period of time, PD removes the replica from this data zone.
- If the data zone cannot recover within a period of time, PD removes the replica from this data zone.
2 changes: 1 addition & 1 deletion content/docs/5.1/deploy/monitor/grafana.md
@@ -1,5 +1,5 @@
---
title: Export Grafana Shapshots
title: Export Grafana Snapshots
description: Learn how to export snapshots of Grafana Dashboard, and how to visualize these files.
menu:
  "5.1":
103 changes: 103 additions & 0 deletions content/docs/6.1/concepts/explore-tikv-features/cas.md
@@ -0,0 +1,103 @@
---
title: CAS on RawKV
description: Compare And Swap
menu:
  "6.1":
    parent: Features-v6.1
    weight: 4
    identifier: CAS on RawKV-v6.1
---

This page walks you through a simple demonstration of performing compare-and-swap (CAS) in TiKV.

In RawKV, compare-and-swap (CAS) is an atomic instruction to achieve synchronization between multiple threads.

Performing CAS is an atomic equivalent of executing the following code:

```
prevValue = get(key);
if (prevValue == request.prevValue) {
    put(key, request.value);
}
return prevValue;
```

The atomicity guarantees that the new value is calculated based on up-to-date information. If the value has been updated by another thread in the meantime, the write fails.

## Prerequisites

Make sure that you have installed TiUP and jshell, downloaded the tikv-client JAR files, and started a TiKV cluster according to [TiKV in 5 Minutes](../../tikv-in-5-minutes).

## Verify CAS

To verify whether CAS works, you can take the following steps.

### Step 1: Write the code to test CAS

Save the following script to the `test_raw_cas.java` file.

```java
import java.util.Optional;
import org.tikv.common.TiConfiguration;
import org.tikv.common.TiSession;
import org.tikv.raw.RawKVClient;
import org.tikv.shade.com.google.protobuf.ByteString;

TiConfiguration conf = TiConfiguration.createRawDefault("127.0.0.1:2379");
// enable AtomicForCAS when using RawKVClient.compareAndSet or RawKVClient.putIfAbsent
conf.setEnableAtomicForCAS(true);
TiSession session = TiSession.create(conf);
RawKVClient client = session.createRawClient();

ByteString key = ByteString.copyFromUtf8("Hello");
ByteString value = ByteString.copyFromUtf8("CAS");
ByteString newValue = ByteString.copyFromUtf8("NewValue");

// put
client.put(key, value);
System.out.println("put key=" + key.toStringUtf8() + " value=" + value.toStringUtf8());

// get
Optional<ByteString> result = client.get(key);
assert(result.isPresent());
assert("CAS".equals(result.get().toStringUtf8()));
System.out.println("get key=" + key.toStringUtf8() + " result=" + result.get().toStringUtf8());

// cas
client.compareAndSet(key, Optional.of(value), newValue);
System.out.println("cas key=" + key.toStringUtf8() + " value=" + value.toStringUtf8() + " newValue=" + newValue.toStringUtf8());

// get
result = client.get(key);
assert(result.isPresent());
assert("NewValue".equals(result.get().toStringUtf8()));
System.out.println("get key=" + key.toStringUtf8() + " result=" + result.get().toStringUtf8());

// close
client.close();
session.close();
```

### Step 2: Run the code

```bash
jshell --class-path tikv-client-java.jar:slf4j-api.jar --startup test_raw_cas.java
```

The example output is as follows:

```bash
put key=Hello value=CAS
get key=Hello result=CAS
cas key=Hello value=CAS newValue=NewValue
get key=Hello result=NewValue
```

As shown in the example output, after calling `compareAndSet`, the value `CAS` is replaced by `NewValue`.
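
This check-then-write behavior is what makes CAS useful for lock-free read-modify-write updates: read the current value, compute a new one from it, and write it back only if no other writer has changed the key in the meantime; if the comparison fails, re-read and try again. The following jshell-style sketch outlines that retry pattern for a simple counter, reusing the client calls shown above. It assumes (verify this against the client version you use) that `compareAndSet` reports a failed comparison by throwing an exception, so treat it as an illustration rather than a drop-in script.

```java
import java.util.Optional;
import org.tikv.common.TiConfiguration;
import org.tikv.common.TiSession;
import org.tikv.raw.RawKVClient;
import org.tikv.shade.com.google.protobuf.ByteString;

TiConfiguration conf = TiConfiguration.createRawDefault("127.0.0.1:2379");
// AtomicForCAS must be enabled, as in the example above.
conf.setEnableAtomicForCAS(true);
TiSession session = TiSession.create(conf);
RawKVClient client = session.createRawClient();

ByteString counterKey = ByteString.copyFromUtf8("counter");
client.put(counterKey, ByteString.copyFromUtf8("0"));

// Atomically increment the counter. If another writer updates the key
// between our get and our compareAndSet, the comparison fails and we retry
// with the freshly read value, so no increment is lost.
boolean updated = false;
while (!updated) {
    ByteString prev = client.get(counterKey).get();
    long next = Long.parseLong(prev.toStringUtf8()) + 1;
    try {
        client.compareAndSet(counterKey, Optional.of(prev), ByteString.copyFromUtf8(Long.toString(next)));
        updated = true;
    } catch (Exception casConflict) {
        // Assumption: a failed comparison surfaces as an exception.
        // Another writer won the race; loop and recompute from the new value.
    }
}
System.out.println("counter=" + client.get(counterKey).get().toStringUtf8());

client.close();
session.close();
```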

{{< warning >}}

- To ensure the linearizability of `CAS` when it is used together with `put`, `delete`, `batch_put`, or `batch_delete`, you must set `conf.setEnableAtomicForCAS(true)`.

- To guarantee the atomicity of CAS, write operations such as `put` or `delete` in atomic mode take more resources.
{{< /warning >}}
@@ -0,0 +1,173 @@
---
title: Distributed Transaction
description: How transaction works on TxnKV
menu:
  "6.1":
    parent: Features-v6.1
    weight: 5
    identifier: Distributed Transaction-v6.1
---

This chapter walks you through a simple demonstration of how TiKV's distributed transaction works.

## Prerequisites

Before you start, ensure that you have set up a TiKV cluster and installed the `tikv-client` Python package according to [TiKV in 5 Minutes](../../tikv-in-5-minutes).

{{< warning >}}
TiKV Java client's Transaction API has not been released yet, so the Python client is used in this example.
{{< /warning >}}

## Test snapshot isolation

Transaction isolation is one of the foundations of database transaction processing. Isolation is one of the four key properties of a transaction (commonly referred to as ACID).

TiKV implements [Snapshot Isolation (SI)](https://en.wikipedia.org/wiki/Snapshot_isolation) consistency, which means that:

- all reads made in a transaction will see a consistent snapshot of the database (in practice, the TiKV client reads the last committed values that existed at the time the transaction started);
- the transaction will successfully commit only if the updates it has made do not conflict with concurrent updates made by other transactions since that snapshot.

The following example shows how to test TiKV's snapshot isolation.

Save the following script to the file `test_snapshot_isolation.py`.

```python
from tikv_client import TransactionClient

client = TransactionClient.connect("127.0.0.1:2379")

# clean
txn1 = client.begin()
txn1.delete(b"k1")
txn1.delete(b"k2")
txn1.commit()

# put k1 & k2 without commit
txn2 = client.begin()
txn2.put(b"k1", b"Snapshot")
txn2.put(b"k2", b"Isolation")

# get k1 & k2 returns nothing
# cannot read the data before transaction commit
snapshot1 = client.snapshot(client.current_timestamp())
print(snapshot1.batch_get([b"k1", b"k2"]))

# commit txn2
txn2.commit()

# get k1 & k2 returns nothing
# still cannot read the data after transaction commit
# because snapshot1's timestamp < txn2's commit timestamp
# snapshot1 can see a consistent snapshot of the database
print(snapshot1.batch_get([b"k1", b"k2"]))

# can read the data finally
# because snapshot2's timestamp > txn2's commit timestamp
snapshot2 = client.snapshot(client.current_timestamp())
print(snapshot2.batch_get([b"k1", b"k2"]))
```

Run the test script:

```bash
python3 test_snapshot_isolation.py
```

The example output is as follows:

```bash
[]
[]
[(b'k1', b'Snapshot'), (b'k2', b'Isolation')]
```

From the above example, you can see that `snapshot1` cannot read the data either before or after `txn2` is committed. This indicates that `snapshot1` sees a consistent snapshot of the database.

## Try optimistic transaction model

TiKV supports distributed transactions using either pessimistic or optimistic transaction models.

TiKV uses the optimistic transaction model by default. With optimistic transactions, conflicting changes are detected as part of a transaction commit. This helps improve the performance when concurrent transactions infrequently modify the same rows, because the process of acquiring row locks can be skipped.

The following example shows how to test TiKV with the optimistic transaction model.

Save the following script to the file `test_optimistic.py`.

```python
from tikv_client import TransactionClient

client = TransactionClient.connect("127.0.0.1:2379")

# clean
txn1 = client.begin(pessimistic=False)
txn1.delete(b"k1")
txn1.delete(b"k2")
txn1.commit()

# create txn2 and put k1 & k2
txn2 = client.begin(pessimistic=False)
txn2.put(b"k1", b"Optimistic")
txn2.put(b"k2", b"Mode")

# create txn3 and put k1
txn3 = client.begin(pessimistic=False)
txn3.put(b"k1", b"Optimistic")

# txn2 commit successfully
txn2.commit()

# txn3 commit failed because of conflict
# with optimistic transactions conflicting changes are detected when the transaction commits
txn3.commit()
```

Run the test script:

```bash
python3 test_optimistic.py
```

The example output is as follows:

```bash
Exception: KeyError WriteConflict
```

From the above example, you can see that with optimistic transactions, conflicting changes are detected only when the transaction commits. In practice, an application handles such a conflict by retrying the whole transaction.

## Try pessimistic transaction model

In the optimistic transaction model, transactions might fail to commit because of write-write conflicts under heavy contention. When concurrent transactions frequently modify the same rows (that is, when conflicts are frequent), pessimistic transactions might perform better than optimistic transactions.

The following example shows how to test TiKV with the pessimistic transaction model.

Save the following script to the file `test_pessimistic.py`.

```python
from tikv_client import TransactionClient

client = TransactionClient.connect("127.0.0.1:2379")

# clean
txn1 = client.begin(pessimistic=True)
txn1.delete(b"k1")
txn1.delete(b"k2")
txn1.commit()

# create txn2
txn2 = client.begin(pessimistic=True)

# put k1 & k2
txn2.put(b"k1", b"Pessimistic")
txn2.put(b"k2", b"Mode")

# create txn3
txn3 = client.begin(pessimistic=True)

# put k1
# txn3 put data failed because of conflict
# with pessimistic transactions conflicting changes are detected when writing data
txn3.put(b"k1", b"Pessimistic")
```

Run the test script:

```bash
python3 test_pessimistic.py
```

The example output is as follows:

```bash
Exception: KeyError
```

From the above example, you can see that with pessimistic transactions, conflicting changes are detected as soon as the data is written (when `put` is called), rather than at commit time.