Reorganize Get Started (pingcap#362)
* First commit of cleaned-up Get Started section

* Fixed formatting

* Fixes to Get Started and GKE tutorial

* Fixes to GKE tutorial

* Fixes to GKE tutorial

* Fixes to Get Started

* Added Grafana information and fixed some other Get Started items

* Fix TOC

* Update en/deploy-tidb-from-kubernetes-gke.md

Co-authored-by: DanielZhangQD <[email protected]>

* Revert "Update en/deploy-tidb-from-kubernetes-gke.md"

I accidentally applied this commit using the web interface.

This reverts commit 5bc0729.

* Update en/get-started.md

Co-authored-by: DanielZhangQD <[email protected]>

* Update en/get-started.md

Co-authored-by: DanielZhangQD <[email protected]>

* Change order of ops for tidb-operator install. Change wording and org of GKE tutorial.

* Fixed broken links

* Fixed markdown lint complaints

* Added an Upgrade section

* Added note about MySQL 8.0 client default-auth plugin.

* Fix md lint

* Fix md formatting

* Added note to kill kubectl port-forwarding

Co-authored-by: DanielZhangQD <[email protected]>
Kolbe Kegel and DanielZhangQD authored Jun 10, 2020
1 parent f508693 commit 203bea4
Showing 8 changed files with 897 additions and 598 deletions.
5 changes: 1 addition & 4 deletions en/TOC.md
@@ -8,10 +8,7 @@
+ Introduction
- [Overview](tidb-operator-overview.md)
- [TiDB Operator v1.1 Notice](notes-tidb-operator-v1.1.md)
+ Get Started
- [Get Started](get-started.md)
+ Deploy
- Deploy TiDB Cluster
- [On AWS EKS](deploy-on-aws-eks.md)
4 changes: 1 addition & 3 deletions en/_index.md
@@ -19,9 +19,7 @@ TiDB Operator provides several ways to deploy TiDB clusters in Kubernetes:

+ For test environment:

- [Get Started](get-started.md) using kind, Minikube, or the Google Cloud Shell

+ For production environment:

118 changes: 20 additions & 98 deletions en/deploy-tidb-from-kubernetes-gke.md
@@ -6,9 +6,9 @@ category: how-to

# Deploy TiDB on Google Cloud

This tutorial is designed to be directly [run in Google Cloud Shell](https://console.cloud.google.com/cloudshell/open?cloudshell_git_repo=https://github.com/pingcap/docs-tidb-operator&cloudshell_tutorial=en/deploy-tidb-from-kubernetes-gke.md).

<a href="https://console.cloud.google.com/cloudshell/open?cloudshell_git_repo=https://github.com/pingcap/docs-tidb-operator&cloudshell_tutorial=en/deploy-tidb-from-kubernetes-gke.md"><img src="https://gstatic.com/cloudssh/images/open-btn.png"/></a>

It takes you through the following steps:

@@ -44,14 +44,10 @@ This tutorial requires use of the Compute and Container APIs. Please enable them

This step defaults gcloud to your preferred project and [zone](https://cloud.google.com/compute/docs/regions-zones/), which simplifies the commands used for the rest of this tutorial:

```shell
gcloud config set project {{project-id}}
```

```shell
gcloud config set compute/zone us-west1-a
```
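
As an optional check, you can confirm that both settings are now active:

```shell
gcloud config list
```
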
@@ -62,105 +58,60 @@ It's now time to launch a 3-node kubernetes cluster! The following command launc

It takes a few minutes to complete:

```shell
gcloud container clusters create tidb
```
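
The command above relies on GKE defaults for node count and machine type. If you prefer to set them explicitly, flags like the following can be used (the values here are illustrative, not from the original tutorial):

```shell
# Explicit form of the same command; pick a machine type that fits your quota.
gcloud container clusters create tidb --num-nodes=3 --machine-type=n1-standard-4
```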

Once the cluster has launched, set it to be the default:

```shell
gcloud config set container/cluster tidb
```

The last step is to verify that `kubectl` can connect to the cluster, and all three machines are running:

```shell
kubectl get nodes
```

If you see `Ready` for all nodes, congratulations! You've set up your first Kubernetes cluster.
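
The output should look roughly like the following (names, ages, and versions will differ):

```
NAME                                  STATUS   ROLES    AGE   VERSION
gke-tidb-default-pool-xxxxxxxx-xxxx   Ready    <none>   2m    v1.16.x
gke-tidb-default-pool-xxxxxxxx-yyyy   Ready    <none>   2m    v1.16.x
gke-tidb-default-pool-xxxxxxxx-zzzz   Ready    <none>   2m    v1.16.x
```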

## Install Helm

[Helm](https://helm.sh/) is a package management tool for Kubernetes.

1. Install the Helm client:

    ```shell
    curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash
    ```

2. Add the PingCAP repository:

    ```shell
    helm repo add pingcap https://charts.pingcap.org/
    ```

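To confirm the repository was added and see which chart versions it serves, you can use the Helm 3 search syntax (this replaces the Helm 2 `helm search pingcap -l` form):

```shell
helm repo update
helm search repo pingcap/tidb-operator --versions
```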

## Deploy TiDB Operator

TiDB Operator uses [CRD (Custom Resource Definition)](https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` CRD.

```shell
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml && \
kubectl get crd tidbclusters.pingcap.com
```
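
As an optional check, you can list the resource kinds that the CRDs register under the `pingcap.com` API group:

```shell
kubectl api-resources --api-group=pingcap.com
```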

After the `TidbCluster` CRD is created, install TiDB Operator in your Kubernetes cluster.

1. Install TiDB Operator:

    ```shell
    kubectl create namespace tidb-admin
    helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.0
    kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator
    ```
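
    When TiDB Operator is up, the `kubectl get po` command above should show its pods as `Running`, along these lines (pod names and ready counts are illustrative):

    ```
    NAME                                       READY   STATUS    RESTARTS   AGE
    tidb-controller-manager-xxxxxxxxxx-xxxxx   1/1     Running   0          1m
    tidb-scheduler-xxxxxxxxxx-xxxxx            2/2     Running   0          1m
    ```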

2. Create the `pd-ssd` StorageClass:

    ``` shell
    kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/gke/persistent-disk.yaml
    ```

@@ -172,32 +123,24 @@ To deploy the TiDB cluster, perform the following steps:

1. Create `Namespace`:

    ```shell
    kubectl create namespace demo
    ```

2. Deploy the TiDB cluster:

    ``` shell
    kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-cluster.yaml -n demo
    ```

3. Deploy the TiDB cluster monitor:

    ``` shell
    kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/examples/basic/tidb-monitor.yaml -n demo
    ```

4. View the Pod status (a sample listing follows):

    ``` shell
    kubectl get po -n demo
    ```
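
    After a few minutes, all pods of the basic cluster and its monitor should reach `Running`. The names look roughly like this; replica counts come from the example manifests and may differ:

    ```
    NAME                               READY   STATUS    RESTARTS   AGE
    basic-discovery-xxxxxxxxxx-xxxxx   1/1     Running   0          5m
    basic-monitor-xxxxxxxxxx-xxxxx     3/3     Running   0          4m
    basic-pd-0                         1/1     Running   0          5m
    basic-tidb-0                       2/2     Running   0          3m
    basic-tikv-0                       1/1     Running   0          4m
    ```
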
@@ -206,8 +149,6 @@

There can be a small delay between the pod being up and running, and the service being available. You can view the service status using the following command:

```shell
kubectl get svc -n demo --watch
```
@@ -216,33 +157,25 @@ When you see `basic-tidb` appear, the service is ready to access. You can use <kbd>Ctrl</kbd>+<kbd>C</kbd> to stop watching.

To connect to TiDB within the Kubernetes cluster, you can establish a tunnel between the TiDB service and your Cloud Shell. This is recommended only for debugging purposes, because the tunnel will not automatically be transferred if your Cloud Shell restarts. To establish a tunnel:

```shell
kubectl -n demo port-forward svc/basic-tidb 4000:4000 &>/tmp/pf4000.log &
```

From your Cloud Shell:

```shell
sudo apt-get install -y mysql-client && \
mysql -h 127.0.0.1 -u root -P 4000
```
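
If your client is MySQL 8.0 and the connection fails with an authentication-plugin error, one common workaround is to request the native plugin explicitly (an illustrative sketch, per the commit note about the MySQL 8.0 client's default-auth plugin):

```shell
mysql -h 127.0.0.1 -u root -P 4000 --default-auth=mysql_native_password
```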

Try out a MySQL command inside your MySQL terminal:

```sql
select tidb_version();
```

If you did not specify a password during installation, set one now:

```sql
SET PASSWORD FOR 'root'@'%' = '<change-to-your-password>';
```
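
The tunnel from earlier keeps running as a background job in your shell. When you are done, stop it; one way to do that (assuming it is the only port-forward for this service):

```shell
pkill -f 'port-forward svc/basic-tidb'
```
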
@@ -257,64 +190,53 @@ Congratulations, you are now up and running with a distributed TiDB database compatible with MySQL!

To scale out the TiDB cluster, modify `spec.pd.replicas`, `spec.tidb.replicas`, and `spec.tikv.replicas` in the `TidbCluster` object of the cluster to your desired value using kubectl.

``` shell
kubectl -n demo edit tc basic
```
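
If you prefer a non-interactive change, the same edit can be expressed with `kubectl patch` (the TiKV replica count below is purely illustrative):

``` shell
kubectl -n demo patch tc basic --type merge -p '{"spec":{"tikv":{"replicas":5}}}'
```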

## Access the Grafana dashboard

To access the Grafana dashboards, you can forward a port from the Cloud Shell to the Grafana service in Kubernetes. (Cloud Shell already uses port 3000, so we use port 3003 in this example instead.) To do so, use the following command:

```shell
kubectl -n demo port-forward svc/basic-grafana 3003:3000 &>/tmp/pf3003.log &
```

Open this URL to view the Grafana dashboard: <https://ssh.cloud.google.com/devshell/proxy?port=3003>. (Alternatively, in Cloud Shell, click on the Web Preview button and enter 3003 for the port.) If not using Cloud Shell, point a browser to `localhost:3003`.

The default username and password are both "admin".

## Destroy the TiDB cluster

To destroy a TiDB cluster in Kubernetes, run the following command:

```shell
kubectl delete tc basic -n demo
```

To destroy the monitoring component, run the following command:

```shell
kubectl delete tidbmonitor basic -n demo
```

The above commands only delete the running pods; the data is persistent. If you do not need the data anymore, you should run the following commands to clean the data and the dynamically created persistent disks:

```shell
kubectl delete pvc -n demo -l app.kubernetes.io/instance=basic,app.kubernetes.io/managed-by=tidb-operator && \
kubectl get pv -l app.kubernetes.io/namespace=demo,app.kubernetes.io/managed-by=tidb-operator,app.kubernetes.io/instance=basic -o name | xargs -I {} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'
```
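
To confirm the cleanup worked, you can list what remains; both commands should eventually return nothing for this cluster:

```shell
kubectl get pvc -n demo
kubectl get pv -l app.kubernetes.io/namespace=demo,app.kubernetes.io/managed-by=tidb-operator,app.kubernetes.io/instance=basic
```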

## Shut down the Kubernetes cluster

Once you have finished experimenting, you can delete the Kubernetes cluster:

```shell
gcloud container clusters delete tidb
```

## More Information

A simple [deployment based on Terraform] is also provided.

To learn more about creating a deployment on GKE suitable for production use, please consult <https://pingcap.com/docs/tidb-in-kubernetes/stable/deploy-on-gcp-gke/>.