diff --git a/archived/appengine-effective-polymodel.md b/archived/appengine-effective-polymodel.md index ff60b8589d..97d8f4d1c9 100644 --- a/archived/appengine-effective-polymodel.md +++ b/archived/appengine-effective-polymodel.md @@ -311,7 +311,7 @@ maintain their specific structure. It might help to understand a little bit about how this polymorphism is implemented. All sub-classes of a given class hierarchy root share the same Google Cloud Datastore kind. To differentiate between classes within the -hiearchy, the PolyModel has an extra hidden string list property, class, in the +hierarchy, the PolyModel has an extra hidden string list property, class, in the Cloud Datastore. This list, known as the class key, describes that particular object's location in the class hierarchy. Each element of this list is the name of a class, starting with the root of the hierarchy at index 0. Because queries @@ -348,7 +348,7 @@ It might be tempting to make every single class in an application a PolyModel class, even for classes that do not immediately require a subclass. However it should not normally be required to create a PolyModel class earlier so that it might be subclassed in the future. If the application sticks to using the class -method version of gql and all it is future compatible to change the inheritence +method version of gql and all it is future compatible to change the inheritance from Model to PolyModel later. This is because calls to gql and all on the class hierarchy root class do not attempt to query against class property. diff --git a/archived/appengine-memcache-best-practices/index.md b/archived/appengine-memcache-best-practices/index.md index 4b959e4776..c947365bba 100644 --- a/archived/appengine-memcache-best-practices/index.md +++ b/archived/appengine-memcache-best-practices/index.md @@ -233,7 +233,7 @@ programming languages. You can share the data in your memcache between any of your app's modules and versions. Because the memcache API serializes its parameters, and the API may be implemented differently in different languages, you need to code memcache keys and values carefully if you intend to share them -between langauges. +between languages. ### Key Compatibility diff --git a/archived/appengine-pusher/index.md b/archived/appengine-pusher/index.md index 6e5a9d2f72..990eff112d 100644 --- a/archived/appengine-pusher/index.md +++ b/archived/appengine-pusher/index.md @@ -90,7 +90,7 @@ Custom event handlers can be attached to a given event type. This allows for efficient event routing in the clients. **Note**: A subscriber will receive all messages published over a channel. -Events may be trigged by the user or Pusher. +Events may be triggered by the user or Pusher. In case of Pusher-triggered events on a channel, the event name is prefixed with `pusher:`, such as `pusher:subscription-succeeded`. diff --git a/archived/cloud-iot-fledge/index.md b/archived/cloud-iot-fledge/index.md index 0054947bf4..edb59d75ce 100644 --- a/archived/cloud-iot-fledge/index.md +++ b/archived/cloud-iot-fledge/index.md @@ -261,7 +261,7 @@ used to authenticate the device. ## Verify communication -1. Retun to the Fledge GUI dashboard. +1. Return to the Fledge GUI dashboard. The count of readings sent and received readings should be increasing. 
diff --git a/archived/cloud-iot-mender-ota/index.md b/archived/cloud-iot-mender-ota/index.md index fafc01acf1..efa868199e 100644 --- a/archived/cloud-iot-mender-ota/index.md +++ b/archived/cloud-iot-mender-ota/index.md @@ -234,14 +234,14 @@ Using the Cloud Shell environment, you will configure IoT Core audit logs to rou 1. Create a log export for IoT Core device creation events to Pub/Sub: - gcloud beta logging sinks create device-lifecyle \ + gcloud beta logging sinks create device-lifecycle \ pubsub.googleapis.com/projects/$PROJECT/topics/registration-events \ --log-filter='resource.type="cloudiot_device" protoPayload.methodName="google.cloud.iot.v1.DeviceManager.CreateDevice"' 1. Give the log exporter system-account permission to publish to your topic: gcloud beta pubsub topics add-iam-policy-binding registration-events \ - --member $(gcloud beta logging sinks describe device-lifecyle --format='value(writerIdentity)') \ + --member $(gcloud beta logging sinks describe device-lifecycle --format='value(writerIdentity)') \ --role roles/pubsub.publisher ### Deploy Firebase Functions to call Mender Preauthorization API diff --git a/archived/cloud-run-eventing-kafka/kafka-cr-eventing/apps/currency/requirements.txt b/archived/cloud-run-eventing-kafka/kafka-cr-eventing/apps/currency/requirements.txt index 5f8b0bba38..46952d2892 100644 --- a/archived/cloud-run-eventing-kafka/kafka-cr-eventing/apps/currency/requirements.txt +++ b/archived/cloud-run-eventing-kafka/kafka-cr-eventing/apps/currency/requirements.txt @@ -1,6 +1,6 @@ pylint==2.4.0 google-cloud==0.34.0 -Flask==1.1.1 +Flask==2.3.2 kafka-python==1.4.6 pykafka==2.8.0 confluent-kafka==1.1.0 diff --git a/tutorials/cloudbuild-test-runner.md b/archived/cloudbuild-test-runner.md similarity index 100% rename from tutorials/cloudbuild-test-runner.md rename to archived/cloudbuild-test-runner.md diff --git a/archived/compute-managed-instance-groups-dashboard/index.md b/archived/compute-managed-instance-groups-dashboard/index.md index ba1d847316..1dd61a4528 100644 --- a/archived/compute-managed-instance-groups-dashboard/index.md +++ b/archived/compute-managed-instance-groups-dashboard/index.md @@ -101,7 +101,7 @@ the one you want to monitor. ### Initialization and authentication using gapi -Once the user opens the page, Angular's `ng-init` embeded in the +Once the user opens the page, Angular's `ng-init` embedded in the [`body` element of *index.html*][index] runs our `initialize()` function from [*main-controller.js*][main-controller]. diff --git a/archived/data-science-exploration/index.md b/archived/data-science-exploration/index.md index b6ca391274..ea8be9fc41 100644 --- a/archived/data-science-exploration/index.md +++ b/archived/data-science-exploration/index.md @@ -257,7 +257,7 @@ Quantiles are useful for getting a quick feel for the distribution of your data Finally, one might be concerned, when researching a new place to live or establish a business, the rate you might expect meteors to land in your area. Fortunately, BigQuery provides some functions to help compute distances between -latitude and logitude coordinates. Adapted from the +latitude and longitude coordinates. 
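As background, the quantity being computed is the great-circle distance between two points on a sphere. A standard closed form (the spherical law of cosines, with latitude $\varphi$ and longitude $\lambda$ in radians and Earth's radius $R \approx 3959$ miles) is:

$$d = R \cdot \arccos\bigl(\sin\varphi_1\sin\varphi_2 + \cos\varphi_1\cos\varphi_2\cos(\lambda_2 - \lambda_1)\bigr)$$

The legacy-SQL example that follows assembles an expression of this form from BigQuery's trigonometric functions.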
Adapted from the [advanced examples](/bigquery/docs/reference/legacy-sql#math-adv-examples) in the docs, we can find the number of meteors within an approximately 50-mile radius of Google's Kirkland campus (at 47.669861, -122.197355): diff --git a/archived/data-science-extraction/index.md b/archived/data-science-extraction/index.md index 38644195e1..62645fa75b 100644 --- a/archived/data-science-extraction/index.md +++ b/archived/data-science-extraction/index.md @@ -190,7 +190,7 @@ Because the audio we're transcribing is longer than a minute in length, we must first upload the raw audio files to [Cloud Storage][storage], so the Speech API can access it asynchronously. We could use the [gsutil][gsutil] tool to do this manually, or we could -do it programatically from our code. Because we'd like to eventually +do it programmatically from our code. Because we'd like to eventually [automate this process in a pipeline](/community/tutorials/data-science-preprocessing/), we'll do this in code: diff --git a/archived/deploy-xenforo-to-compute-engine/index.md b/archived/deploy-xenforo-to-compute-engine/index.md index af223e3054..b2978a7ad0 100644 --- a/archived/deploy-xenforo-to-compute-engine/index.md +++ b/archived/deploy-xenforo-to-compute-engine/index.md @@ -258,7 +258,7 @@ tutorial: | Before | After | | --------------------------|---------------------------| - | default_transport = error | #default_transpot = error | + | default_transport = error | #default_transport = error | | relay_transport = error | #relay_transport = error | Edit the following lines: diff --git a/archived/google-kubernetes-engine-hyperparameter-search/index.md b/archived/google-kubernetes-engine-hyperparameter-search/index.md index 34c66f1868..4cbac067f9 100644 --- a/archived/google-kubernetes-engine-hyperparameter-search/index.md +++ b/archived/google-kubernetes-engine-hyperparameter-search/index.md @@ -44,7 +44,7 @@ your computer but sends the actual workload to a cluster on Google Kubernetes En the following: - You can continue to use your laptop/workstation for other work while waiting for the results. -- You can use more powerful machines to speed up the search, for instance mulitple nodes with 64 virtual CPU cores. +- You can use more powerful machines to speed up the search, for instance multiple nodes with 64 virtual CPU cores. To accomplish this, we will create a `SearchCV` object in the notebook, upload a pickled copy of this object to Cloud Storage. A job running on a cluster which we will create then retrieves that pickled object and calls its `fit` method and diff --git a/archived/install-bower-dependencies-on-google-app-engine.md b/archived/install-bower-dependencies-on-google-app-engine.md index bb0565f75b..825dcb8c78 100644 --- a/archived/install-bower-dependencies-on-google-app-engine.md +++ b/archived/install-bower-dependencies-on-google-app-engine.md @@ -35,7 +35,7 @@ This tutorial discusses three different methods. ## Easiest: Do nothing -When you deploy to App Engine flexible enviroment a Docker image is created for +When you deploy to App Engine flexible environment a Docker image is created for you and your code is copied into the image. This first method relies on the Docker image build step to make Bower dependencies available to your app. This method is the easiest. 
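As a minimal sketch of this approach (assuming a standard `bower.json` at your application root), install the dependencies locally before deploying; the image build then copies the resulting directory along with the rest of your code:

    # Writes bower_components/ next to bower.json in your app directory.
    bower install

    # The flexible environment's Docker build copies your app directory,
    # including bower_components/, into the image.
    gcloud app deploy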
diff --git a/archived/java-dataflow-quickstart/index.md b/archived/java-dataflow-quickstart/index.md index 600e74e7af..025b149940 100644 --- a/archived/java-dataflow-quickstart/index.md +++ b/archived/java-dataflow-quickstart/index.md @@ -23,7 +23,7 @@ In this walkthrough you’ll do the following: * Set up Dataflow. * Enable the necessary Google Cloud APIs. -* Create a pipleine. +* Create a pipeline. * Publish the pipeline to Dataflow. [![Open walkthrough in the Cloud Console](https://storage.googleapis.com/gcp-community/tutorials/java-dataflow-quickstart/tutorial.png)](https://console.cloud.google.com/?walkthrough_id=dataflow__quickstart-beam__quickstart-beam-java) diff --git a/archived/kotlin-springboot-container-engine.md b/archived/kotlin-springboot-container-engine.md index 079eaba0b9..a707ccf01b 100644 --- a/archived/kotlin-springboot-container-engine.md +++ b/archived/kotlin-springboot-container-engine.md @@ -315,7 +315,7 @@ building a new image and pointing your deployment to it. kubectl set image deployment/demo demo=gcr.io/${PROJECT_ID}/demo:v1 **Note:** If a deployment gets stuck because an error in the image prevents -it from starting successfuly, you can recover by undoing the rollout. See the +it from starting successfully, you can recover by undoing the rollout. See the [Kubernetes deployment documentation](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) for more info. diff --git a/archived/run-symfony-on-appengine-standard/index.md b/archived/run-symfony-on-appengine-standard/index.md index 64843156a6..43a1b996b7 100644 --- a/archived/run-symfony-on-appengine-standard/index.md +++ b/archived/run-symfony-on-appengine-standard/index.md @@ -267,7 +267,7 @@ benefit from all of the features of a real email broadcasting system. composer require symfony/mailer -1. Specify which mail sytem to use: +1. 
Specify which mail system to use: composer require symfony/mailgun-mailer diff --git a/archived/schedule-dataflow-jobs-with-cloud-scheduler/scheduler-dataflow-demo/dataflow/pom.xml b/archived/schedule-dataflow-jobs-with-cloud-scheduler/scheduler-dataflow-demo/dataflow/pom.xml deleted file mode 100644 index 0fe8355acb..0000000000 --- a/archived/schedule-dataflow-jobs-with-cloud-scheduler/scheduler-dataflow-demo/dataflow/pom.xml +++ /dev/null @@ -1,127 +0,0 @@ - - - 4.0.0 - - dataflow - demo - 1.0-SNAPSHOT - - - - 2.15.0 - 3.7.0 - 3.0.2 - 1.7.25 - 2.8.8 - - - - - org.apache.maven.plugins - maven-compiler-plugin - ${maven-compiler-plugin.version} - - 1.8 - 1.8 - - - - org.apache.maven.plugins - maven-jar-plugin - ${maven-jar-plugin.version} - - - - true - lib/ - org.apache.streaming.WordCount - - - - - - - - - - - direct-runner - - true - - - - - org.apache.beam - beam-runners-direct-java - ${beam.version} - runtime - - - - - dataflow-runner - - - - org.apache.beam - beam-runners-google-cloud-dataflow-java - ${beam.version} - runtime - - - - - - - - - - org.apache.beam - beam-sdks-java-core - ${beam.version} - - - org.apache.beam - beam-runners-google-cloud-dataflow-java - ${beam.version} - - - com.google.guava - guava - 28.1-jre - - - - - org.apache.beam - beam-runners-direct-java - ${beam.version} - test - - - - - org.apache.beam - beam-sdks-java-io-google-cloud-platform - ${beam.version} - - - - - org.slf4j - slf4j-api - ${slf4j.version} - - - - org.slf4j - slf4j-jdk14 - ${slf4j.version} - - runtime - - - diff --git a/archived/singularity-containers-with-cloud-build/index.md b/archived/singularity-containers-with-cloud-build/index.md index 9b215cadc2..3c1f7b1274 100644 --- a/archived/singularity-containers-with-cloud-build/index.md +++ b/archived/singularity-containers-with-cloud-build/index.md @@ -13,7 +13,7 @@ Vanessa Sochat | Stanford

Contributed by Google employees.

This tutorial shows you how to use [Cloud Build](https://cloud.google.com/cloud-build/) to build [Singularity](https://www.sylabs.io/singularity/) containers. -In constrast to [Docker](https://www.docker.com/), the Singularity container binary is designed specifically for high performance computing (HPC) workloads. +In contrast to [Docker](https://www.docker.com/), the Singularity container binary is designed specifically for high performance computing (HPC) workloads. ## Before you begin diff --git a/archived/terraform-asm-in-cluster.md b/archived/terraform-asm-in-cluster.md index ee445f8914..ad9b97ebc6 100644 --- a/archived/terraform-asm-in-cluster.md +++ b/archived/terraform-asm-in-cluster.md @@ -434,7 +434,7 @@ For more information about metrics, logs, and tracing with Anthos Service Mesh, ### Terraform destroy -Use the `terraform destory` command to destroy all Terraform resources: +Use the `terraform destroy` command to destroy all Terraform resources: ${TERRAFORM_CMD} destroy -auto-approve diff --git a/archived/terraform-asm-upgrade.md b/archived/terraform-asm-upgrade.md index 8db32a08c4..d36e084c6b 100644 --- a/archived/terraform-asm-upgrade.md +++ b/archived/terraform-asm-upgrade.md @@ -527,7 +527,7 @@ you can't roll back. ### Terraform destroy -Use the `terraform destory` command to destroy all Terraform resources: +Use the `terraform destroy` command to destroy all Terraform resources: cd ${WORKDIR} terraform destroy -auto-approve diff --git a/archived/understanding-oauth2-and-deploy-a-basic-auth-srv-to-cloud-functions/index.md b/archived/understanding-oauth2-and-deploy-a-basic-auth-srv-to-cloud-functions/index.md index 3e26a75571..324abb9e9a 100644 --- a/archived/understanding-oauth2-and-deploy-a-basic-auth-srv-to-cloud-functions/index.md +++ b/archived/understanding-oauth2-and-deploy-a-basic-auth-srv-to-cloud-functions/index.md @@ -1,6 +1,6 @@ --- title: Understanding OAuth2 and deploying a basic authorization service to Cloud Functions -description: Learn how to deploy a basic OAuth2 authorization serivce to Cloud Functions. +description: Learn how to deploy a basic OAuth2 authorization service to Cloud Functions. author: michaelawyu tags: OAuth 2.0, Node.js, Cloud Functions, Cloud Datastore date_published: 2018-06-15 diff --git a/tutorials/bigquery-from-excel/index.md b/tutorials/bigquery-from-excel/index.md index e2cd32d08a..b285064de1 100644 --- a/tutorials/bigquery-from-excel/index.md +++ b/tutorials/bigquery-from-excel/index.md @@ -51,7 +51,7 @@ for details about on-demand and flat-rate pricing. BigQuery also offers 1. Check whether your version of Excel is [32-bit or 64-bit](https://www.digitalcitizen.life/3-ways-learn-whether-windows-program-64-bit-or-32-bit). -1. Download the latest version of thevODBC driver from the +1. Download the latest version of the ODBC driver from the [Simba Drivers for BigQuery page](https://cloud.google.com/bigquery/partners/simba-drivers/) that matches your version of Excel. 1. Run the ODBC driver installer. diff --git a/tutorials/cicd-cloud-run-github-actions/index.md b/tutorials/cicd-cloud-run-github-actions/index.md index 874a9ce11f..7c95119226 100644 --- a/tutorials/cicd-cloud-run-github-actions/index.md +++ b/tutorials/cicd-cloud-run-github-actions/index.md @@ -23,7 +23,7 @@ of the sample code and its Dockerfile. * Write a unit test for your code. * Create a Dockerfile. * Create a GitHub Action workflow file to deploy your code on Cloud Run. -* Make the code acessible for anyone. +* Make the code accessible for anyone. 
## Costs diff --git a/tutorials/cloud-functions-avro-import-bq/index.js b/tutorials/cloud-functions-avro-import-bq/index.js index 5de1036ea8..6ad3a5b59f 100644 --- a/tutorials/cloud-functions-avro-import-bq/index.js +++ b/tutorials/cloud-functions-avro-import-bq/index.js @@ -16,7 +16,7 @@ exports.ToBigQuery_Stage = (event, callback) => { // Do not use the ftp_files Bucket to ensure that the bucket does not get crowded. // Change bucket to gas_ddr_files_staging // Set the table name (TableId) to the full file name including date, - // this will give each table a new distinct name and we can keep a record of all of the files recieved. + // this will give each table a new distinct name and we can keep a record of all of the files received. // This may not be the best way to do this... at some point we will need to archive and delete prior records. const dashOffset = filename.indexOf('-'); const tableId = filename.substring(0, dashOffset) + '_STAGE'; diff --git a/tutorials/cloud-functions-rate-limiting/index.md b/tutorials/cloud-functions-rate-limiting/index.md index 6cdad981d7..25cf542bdb 100644 --- a/tutorials/cloud-functions-rate-limiting/index.md +++ b/tutorials/cloud-functions-rate-limiting/index.md @@ -142,7 +142,7 @@ The `gcloud` command does the following (with each line below corresponding to a - triggered by HTTP requests, - from the Typescript transpiled JavaScript source code; - sets a runtime environment variable to the Redis service IP address, -- connected to the VPC netowrk, +- connected to the VPC network, - in the target region. This function uses a Redis-backed [rate-limiting library](https://www.npmjs.com/package/redis-rate-limiter) for Node.js. diff --git a/tutorials/cloud-run-golang-gcs-proxy/index.md b/tutorials/cloud-run-golang-gcs-proxy/index.md index c761377afb..8ea2ea0d7e 100644 --- a/tutorials/cloud-run-golang-gcs-proxy/index.md +++ b/tutorials/cloud-run-golang-gcs-proxy/index.md @@ -383,7 +383,7 @@ Here is are some options for approaches that you could take to do this: you automatically get the improvements. This could be slow and expensive if you make a translation each time, but you can add some caching or CDN, so that the translation is only made on cache fills. - This dyanmic server-side approach is the one that is described in this section. + This dynamic server-side approach is the one that is described in this section. Change the `config.go` contents to the following: @@ -394,7 +394,7 @@ func GET(ctx context.Context, output http.ResponseWriter, input *http.Request) { } ``` -`DynamicTranslationFromEnToEs` is a pipeline included in the sample confguration: +`DynamicTranslationFromEnToEs` is a pipeline included in the sample configuration: ```go // EXAMPLE: Translate HTML files from English to Spanish dynamically. diff --git a/tutorials/coral-talk-on-cloud-run/images/architecture.png b/tutorials/coral-talk-on-cloud-run/images/architecture.png new file mode 100644 index 0000000000..89257396a8 Binary files /dev/null and b/tutorials/coral-talk-on-cloud-run/images/architecture.png differ diff --git a/tutorials/coral-talk-on-cloud-run/index.md b/tutorials/coral-talk-on-cloud-run/index.md new file mode 100644 index 0000000000..5c0fcde3bd --- /dev/null +++ b/tutorials/coral-talk-on-cloud-run/index.md @@ -0,0 +1,171 @@ +--- +title: Coral Talk on Google Cloud Run +description: How to deploy Coral Talk on Google Cloud Platform using managed services - Cloud Run and Memorystore. 
+author: vyolla +tags: cloud-run, memorystore +date_published: 2022-04-04 +--- + +Bruno Patrocinio | Customer Engineer | Google + + +

Contributed by Google employees.

This tutorial describes how to deploy an open-source commenting platform, [Coral Talk](https://docs.coralproject.net/), on Google Cloud Platform using managed services.

The diagram below shows the general flow:
![architecture](images/architecture.png)

The instructions are provided for a Linux development environment, such as [Cloud Shell](https://cloud.google.com/shell/).
However, you can also run the application on Google Compute Engine, Kubernetes, a serverless environment, or outside of
Google Cloud.

This tutorial assumes that you know the basics of the following products and services:

 - [Cloud Run](https://cloud.google.com/run/docs)
 - [Artifact Registry](https://cloud.google.com/artifact-registry/docs)
 - [Memorystore](https://cloud.google.com/memorystore/docs)
 - [Compute Engine](https://cloud.google.com/compute/docs)
 - [`gcloud`](https://cloud.google.com/sdk/docs)
 - [Docker](https://docs.docker.com/engine/reference/commandline/run)

## Objectives

* Learn how to create and deploy services using `gcloud` commands.
* Deploy an application with Cloud Run and Memorystore.

## Costs

This tutorial uses billable components of Google Cloud, including the following:

* [Cloud Run](https://cloud.google.com/run)
* [Compute Engine](https://cloud.google.com/compute)
* [Memorystore](https://cloud.google.com/memorystore)

Use the [Pricing Calculator](https://cloud.google.com/products/calculator) to generate a cost estimate based on your
projected usage.

This tutorial only generates a small number of Cloud Run requests, which may fall within the free allotment.

## Before you begin

For this tutorial, you need a Google Cloud [project](https://cloud.google.com/resource-manager/docs/cloud-platform-resource-hierarchy#projects).
You can create a new one, or you can select a project that you have already created:

1. Select or create a Google Cloud project.

   [GO TO THE MANAGE RESOURCES PAGE](https://console.cloud.google.com/cloud-resource-manager)

2. Enable billing for your project.

   [ENABLE BILLING](https://support.google.com/cloud/answer/6293499#enable-billing)

3. Enable the Cloud Run and Artifact Registry APIs. For details, see [ENABLING APIs](https://cloud.google.com/apis/docs/getting-started#enabling_apis).

4. Grant the *Artifact Registry Service Agent* role to the service account `[project-id]-compute@developer.gserviceaccount.com`.

## Detailed steps

### Download images and upload to Google Artifact Registry

#### 1. Artifact Registry
```
gcloud artifacts repositories create coral-talk --repository-format=docker --location=us-central1
gcloud auth configure-docker us-central1-docker.pkg.dev
```

#### 2. Mongo
```
docker pull mongo:4.2
docker tag mongo:4.2 us-central1-docker.pkg.dev/{my-project}/coral-talk/mongo
docker push us-central1-docker.pkg.dev/{my-project}/coral-talk/mongo
```

#### 3. Coral Talk
```
docker pull coralproject/talk:6
docker tag coralproject/talk:6 us-central1-docker.pkg.dev/{my-project}/coral-talk/talk
docker push us-central1-docker.pkg.dev/{my-project}/coral-talk/talk
```

### Create VPC Network
```
gcloud compute networks create coral --project={my-project} \
--subnet-mode=custom --mtu=1460 --bgp-routing-mode=regional
```

```
gcloud compute networks subnets create talk --project={my-project} \
  --range=10.0.0.0/9 --network=coral --region=us-central1 \
  --secondary-range=serverless=10.130.0.0/28
```

### Create Serverless VPC access
```
gcloud compute networks vpc-access connectors create talk \
--region=us-central1 \
--network=coral \
--range=10.130.0.0/28 \
--min-instances=2 \
--max-instances=3 \
--machine-type=f1-micro
```

### Create Memorystore Redis instance
```
gcloud redis instances create myinstance --size=2 --region=us-central1 \
  --redis-version=redis_3_2
```

### Create Mongo VM
```
gcloud compute instances create-with-container instance-1 \
--project={my-project} --zone=us-central1-a --machine-type=f1-micro \
--network-interface=subnet=talk,no-address \
--service-account={my-project}-compute@developer.gserviceaccount.com \
--boot-disk-size=10GB --container-image=us-central1-docker.pkg.dev/{my-project}/coral-talk/mongo \
--container-restart-policy=always
```

### Create Coral Talk Service in Cloud Run
```
gcloud run deploy coralproject \
--image=us-central1-docker.pkg.dev/{my-project}/coral-talk/talk \
--concurrency=80 \
--platform=managed \
--region=us-central1 \
--project={my-project}
```

- Add the environment variables `MONGODB_URI`, `REDIS_URI`, and `SIGNING_SECRET` (see the sketch below).
- Add the VPC connector.
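A sketch of one way to apply those settings with `gcloud run services update` (the connection strings are placeholders: substitute your Mongo VM's internal IP, the Redis host from `gcloud redis instances describe myinstance --region=us-central1 --format='value(host)'`, and a signing secret of your own):

```
gcloud run services update coralproject \
--set-env-vars=MONGODB_URI=mongodb://MONGO_INTERNAL_IP:27017/coral,REDIS_URI=redis://REDIS_HOST_IP:6379,SIGNING_SECRET=CHANGE_ME \
--vpc-connector=talk \
--region=us-central1 \
--project={my-project}
```

With the connector attached, the Cloud Run service reaches the Mongo VM and the Redis instance over their internal IPs.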
### Access the service URL to configure Coral Talk

You've successfully pushed the Mongo and Coral Talk Docker images to Artifact Registry, configured your serverless instances to connect directly to your Virtual Private Cloud network, created a Memorystore Redis instance, set up a VM running the Mongo container, and deployed the Coral Talk service to Cloud Run.

## Cleaning up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, you can delete the project.

Deleting a project has the following consequences:

- If you used an existing project, you'll also delete any other work that you've done in the project.
- You can't reuse the project ID of a deleted project. If you created a custom project ID that you plan to use in the
  future, delete the resources inside the project instead. This ensures that URLs that use the project ID, such as
  an `appspot.com` URL, remain available.

To delete a project, do the following:

1. In the Cloud Console, go to the [Projects page](https://console.cloud.google.com/iam-admin/projects).
2. In the project list, select the project you want to delete and click **Delete**.
3. In the dialog, type the project ID, and then click **Shut down** to delete the project.

## What's next

- Learn more about [Cloud developer tools](https://cloud.google.com/products/tools).
- Try out other Google Cloud features for yourself. Have a look at our [tutorials](https://cloud.google.com/docs/tutorials).
diff --git a/tutorials/create-cloud-build-image-factory-using-packer/cloudbuild.yaml b/tutorials/create-cloud-build-image-factory-using-packer/cloudbuild.yaml deleted file mode 100644 index 4792829d48..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/cloudbuild.yaml +++ /dev/null @@ -1,35 +0,0 @@ -steps: -- name: ubuntu - id: 'create_image_spec' - entrypoint: "bash" - args: - - '-c' - - | - cat <packer.json - { - "builders": [ - { - "image_name": "$(echo helloworld-$TAG_NAME | sed 's/\.//')", - "type": "googlecompute", - "project_id": "$PROJECT_ID", - "source_image_family": "${_IMAGE_FAMILY}", - "image_family": "helloworld", - "ssh_username": "packer", - "zone": "${_IMAGE_ZONE}", - "startup_script_file": "install-website.sh", - "scopes": [ - "https://www.googleapis.com/auth/userinfo.email", - "https://www.googleapis.com/auth/compute", - "https://www.googleapis.com/auth/devstorage.full_control" - ] - } - ] - } - END - -- name: 'gcr.io/$PROJECT_ID/packer' - args: - - build - - -var - - project_id=$PROJECT_ID - - packer.json diff --git a/tutorials/create-cloud-build-image-factory-using-packer/index.md b/tutorials/create-cloud-build-image-factory-using-packer/index.md deleted file mode 100644 index 883d857f64..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/index.md +++ /dev/null @@ -1,475 +0,0 @@ ---- -title: Create a Cloud Build image factory using Packer -description: Learn how to create an image factory using Cloud Build and Packer. -author: johnlabarge,ikwak -tags: Cloud Build, Packer, Compute Engine, Image, Windows, Linux -date_published: 2020-12-15 ---- - -Injae Kwak | Customer Engineer Specialist | Google - -

Contributed by Google employees.

- -This tutorial shows you how to create an image factory using Cloud Build and -[Packer by HashiCorp](https://packer.io). The image factory automatically -creates new images from Cloud Source Repositories each time a new tag is pushed -to the repository, as shown in the following diagram. - -![packer win workflow diagram](https://storage.googleapis.com/gcp-community/tutorials/create-cloud-build-image-factory-using-packer/packer-win-tutorial.png) - -This tutorial includes instructions for creating Packer images for Linux and Windows. - -- For building a Linux image, this tutorial uses Packer to create a new image from a CentOS 7 VM with Nginx. -- For building a Windows image, this tutorial uses Packer to create a new image from a Windows Server 2019 VM with Python 3, Git, and 7-Zip, - using Chocolatey as a package manager. - -Secret Manager is only used for the Windows option. - -## Prerequisites - -- A Google Cloud account -- One of the following: - - Project editor access to an existing project - - Organization permissions to create a new project in an existing organization - -You can run commands in this tutorial using [Cloud Shell](https://cloud.google.com/shell) in the Cloud Console, or you can use `gcloud` on your local computer if -you have installed the Cloud SDK. - -## (Optional) Create a project with a billing account attached - -This section helps you to set up a new Google Cloud project in which to run your Packer -build factory. If you use an existing project for this tutorial, you can skip this section and go to the "Set the project variable" section. - -### Linux - - PROJECT=[NEW PROJECT NAME] - ORG=[YOUR ORGANIZATION NAME] - BILLING_ACCOUNT=[YOUR_BILLING_ACCOUNT_NAME] - ZONE=[COMPUTE ZONE YOU WANT TO USE] - ACCOUNT=[GOOGLE ACCOUNT YOU WANT TO USE] or $(gcloud config get-value account) - - gcloud projects create "$PROJECT" --organization=$(gcloud organizations list --format="value(name)" --filter="(displayName='$ORG')") - gcloud beta billing projects link $PROJECT --billing-account=$(gcloud alpha billing accounts list --format='value(name)' --filter="(displayName='$BILLING_ACCOUNT')") - gcloud config configurations create --activate $PROJECT - gcloud config set project $PROJECT - gcloud config set compute/zone $ZONE - gcloud config set account $ACCOUNT - -### Windows - - $env:PROJECT="NEW PROJECT ID" - $env:ORG="YOUR ORGANIZATION NAME" - $env:BILLING_ACCOUNT="YOUR_BILLING_ACCOUNT_NAME" - $env:ZONE="COMPUTE ZONE YOU WANT TO USE" - $env:ACCOUNT="GOOGLE ACCOUNT YOU WANT TO USE" or $(gcloud config get-value account) - - gcloud projects create "$env:PROJECT" --organization=$(gcloud organizations list --format="value(name)" --filter="(displayName='$env:ORG')") - gcloud beta billing projects link $env:PROJECT --billing-account=$(gcloud alpha billing accounts list --format='value(name)' --filter="(displayName='$env:BILLING_ACCOUNT')") - gcloud config configurations create --activate $env:PROJECT - gcloud config set project $env:PROJECT - gcloud config set compute/zone $env:ZONE - gcloud config set account $env:ACCOUNT - -## (Optional) Set the project variable - -Skip this section if you created a new project. - -If you are using an existing project, set the project variable to indicate which project to use for `gcloud` commands. - -For more information on configurations see [configurations](https://cloud.google.com/sdk/gcloud/reference/config/configurations/). -Replace `[CONFIGURATION NAME]` with the name of the configuration you want to use. 
### Linux

    gcloud config configurations activate [CONFIGURATION NAME] # The configuration for the project you want to use
    PROJECT=$(gcloud config get-value project)

### Windows

    gcloud config configurations activate [CONFIGURATION NAME] # The configuration for the project you want to use
    $env:PROJECT=$(gcloud config get-value project)

## Copy the files for this tutorial to a new working directory and Git repository

In this section, you download the files to your local environment and initialize Git in the working directory.

### Linux

1. Create and go to a new working directory:

        mkdir helloworld-image-factory
        cd helloworld-image-factory

1. Download the tutorial scripts:

        curl -L https://github.com/GoogleCloudPlatform/community/raw/master/tutorials/create-cloud-build-image-factory-using-packer/cloudbuild.yaml >cloudbuild.yaml

        curl -L https://github.com/GoogleCloudPlatform/community/raw/master/tutorials/create-cloud-build-image-factory-using-packer/install-website.sh >install-website.sh

1. Initialize a Git repository in the working directory:

        git init

### Windows

1. Create new working directories using PowerShell:

        New-Item -Name windows-image-factory -ItemType Directory

        Set-Location -Path ./windows-image-factory

        New-Item -Name scripts -ItemType Directory

1. Download the tutorial scripts to your local environment:

        $baseURL = "https://github.com/GoogleCloudPlatform/community/raw/master/tutorials/create-cloud-build-image-factory-using-packer/windows/"

        $cloudbuildFiles = ("cloudbuild.yaml", "packer.json")
        $packerFiles = ("bootstrap-packer.ps1", "cleanup-packer.ps1", "disable-uac.ps1", "install-chocolatey.ps1", "run-chocolatey.ps1")

        # Download the remote files. PowerShell concatenates by interpolation
        # inside double quotes; a literal "+" would corrupt the URL.
        foreach ($file in $cloudbuildFiles){
            Invoke-WebRequest -Uri "$baseURL$file" -OutFile $file
        }

        # The bootstrap scripts belong in the scripts/ directory, where
        # packer.json expects to find them.
        foreach ($file in $packerFiles){
            Invoke-WebRequest -Uri "${baseURL}scripts/$file" -OutFile "scripts/$file"
        }

1. Initialize a Git repository in the working directory:

        git init

## Enable the required services

In this section, you enable the Google Cloud APIs necessary for the tutorial. The required services are the same for Windows and Linux images.

    gcloud services enable sourcerepo.googleapis.com \
      cloudapis.googleapis.com compute.googleapis.com \
      servicemanagement.googleapis.com storage-api.googleapis.com \
      cloudbuild.googleapis.com secretmanager.googleapis.com

## (Windows image only) Managing secrets for parameters using Secret Manager

In this section, you use [Secret Manager](https://cloud.google.com/secret-manager) to store your input values for Packer in a secure and modular way. Although
it's easier to simply hard-code parameters into the Packer template file, using a central source of truth like a secret manager increases manageability and
reusability among teams.
Create your secrets using the following commands:

    echo -n "windows-2019" | gcloud secrets create image_factory-image_family --replication-policy="automatic" --data-file=-

    echo -n "golden-windows" | gcloud secrets create image_factory-image_name --replication-policy="automatic" --data-file=-

    echo -n "n1-standard-1" | gcloud secrets create image_factory-machine_type --replication-policy="automatic" --data-file=-

    echo -n "us-central1" | gcloud secrets create image_factory-region --replication-policy="automatic" --data-file=-

    echo -n "us-central1-b" | gcloud secrets create image_factory-zone --replication-policy="automatic" --data-file=-

    echo -n "default" | gcloud secrets create image_factory-network --replication-policy="automatic" --data-file=-

    echo -n "allow-winrm-ingress-to-packer" | gcloud secrets create image_factory-tags --replication-policy="automatic" --data-file=-

Optionally, you can customize the values using the [documentation](https://cloud.google.com/secret-manager/docs/creating-and-accessing-secrets).

## (Windows image only) Create a new VPC firewall to allow WinRM for Packer

Before you can provision using the WinRM (Windows Remote Management) communicator, you need to allow traffic through Google's firewall on the WinRM port
(`tcp:5986`). The following command creates a new firewall rule named `allow-winrm-ingress-to-packer`; that name is stored in Secret Manager and used by
Cloud Build in the `cloudbuild.yaml` configuration file.

    gcloud compute firewall-rules create allow-winrm-ingress-to-packer \
      --allow tcp:5986 --target-tags allow-winrm-ingress-to-packer

## Give the Cloud Build service account permissions through an IAM role

Find the Cloud Build service account and add the editor role to it (in practice, use least privilege roles). For the Windows image, you also grant the
`secretmanager.secretAccessor` role for [Secret Manager](https://cloud.google.com/secret-manager/docs/access-control).

### Linux

    CLOUD_BUILD_ACCOUNT=$(gcloud projects get-iam-policy $PROJECT --filter="(bindings.role:roles/cloudbuild.builds.builder)" --flatten="bindings[].members" --format="value(bindings.members[])")

    gcloud projects add-iam-policy-binding $PROJECT \
      --member $CLOUD_BUILD_ACCOUNT \
      --role roles/editor

### Windows

    $env:CLOUD_BUILD_ACCOUNT=$(gcloud projects get-iam-policy $env:PROJECT --filter="(bindings.role:roles/cloudbuild.builds.builder)" --flatten="bindings[].members" --format="value(bindings.members[])")

    gcloud projects add-iam-policy-binding $env:PROJECT \
      --member $env:CLOUD_BUILD_ACCOUNT \
      --role roles/editor

    gcloud projects add-iam-policy-binding $env:PROJECT \
      --member $env:CLOUD_BUILD_ACCOUNT \
      --role roles/secretmanager.secretAccessor

## Create the repository in Cloud Source Repositories for your image creator

In this section, you commit your Cloud Build configuration file, Packer template, and bootstrap scripts to a repository in Google Cloud to start the Packer
build.

### Linux

    gcloud source repos create helloworld-image-factory

### Windows

    gcloud source repos create windows-image-factory

## Create the build trigger for the image creator source repository

By configuring a build trigger on the source repository you created in the previous step, you define a webhook that tells Cloud Build to pull down your
committed files and start the build process automatically.
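If you prefer to script this step rather than use the console, a roughly equivalent trigger can be created from the command line (a sketch; exact flag availability depends on your `gcloud` version):

    gcloud builds triggers create cloud-source-repositories \
      --name=hello-world-image-factory \
      --repo=helloworld-image-factory \
      --tag-pattern=".*" \
      --build-config=cloudbuild.yaml \
      --substitutions=_IMAGE_FAMILY=centos-7,_IMAGE_ZONE=$ZONE

The console steps below create the same triggers interactively.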
- -### Linux - -Create a trigger on the [build triggers page](https://console.cloud.google.com/cloud-build/triggers) in Cloud Console: - -1. Click **Create Trigger**. -1. In the **Name** field, enter `Hello world image factory`. -1. Under **Event**, select **Push to a tag**. -1. Under **Source**, select `helloworld-image-factory` as your - **Repository** and the tag to match as your tag. -1. Under **Build Configuration**, select **Cloud Build configuration file (yaml or json)**. -1. In the **Cloud Build configuration file location**, enter `cloudbuild.yaml`. -1. Under **Substitution variables**, click **+ Add variable**. -1. In the **Variable** field enter `_IMAGE_FAMILY` and in **Value** enter `centos-7`. -1. In the **Variable** field enter `_IMAGE_ZONE` and in **Value** enter `$ZONE`. -1. Click **Create** to save your build trigger. - -To see a list of image families: - - gcloud compute images list | awk '{print $3}' | awk '!a[$0]++' - -### Windows - -Create a trigger on the [build triggers page](https://console.cloud.google.com/cloud-build/triggers) in Cloud Console: - -1. Click **Create Trigger**. -1. In the **Name** field, enter `Windows image factory`. -1. Under **Event**, select **Push new tag**. -1. Under **Source**, select `windows-image-factory` as your - **Repository** and the tag to match or `.*` (any tag) as your tag. -1. Under **Build Configuration**, select **Cloud Build configuration file (yaml or json)**. -1. In the **Cloud Build configuration file location**, enter `cloudbuild.yaml`. -1. Click **Create** to save your build trigger. - -## Add the Packer Cloud Build image to your project - -Get the builder from the community repository and submit it to your project. This allows Cloud Build to use a Docker container that contains the Packer binaries. - -### Linux - - project_dir=$(pwd) - cd /tmp - git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git - cd cloud-builders-community/packer - gcloud builds submit --config cloudbuild.yaml - rm -rf /tmp/cloud-builders-community - cd $project_dir - -### Windows - - $env:PROJECT_DIR=$(Get-Location) - New-Item -Path "C:\" -Name "temp" -ItemType Directory - Set-Location -Path "C:\temp" - - git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git - Set-Location -Path "./cloud-builders-community/packer" - gcloud builds submit --config cloudbuild.yaml - - Remove-Item -Path "C:\temp\cloud-builders-community" -Recurse -Force - Set-Location -Path $env:PROJECT_DIR - -## Add your repository as a remote repository and push - -In this section, you configure the local Git instance to use the repository that you created. - -### Linux - -1. (If running locally, not in Cloud Shell) Set up your Google credentials for Git: - - gcloud init && git config --global credential.https://source.developers.google.com.helper gcloud.sh - -1. Add the `google` repository as a remote: - - git remote add google https://source.developers.google.com/p/$PROJECT/r/helloworld-image-factory - -1. Add your files, tag them with a version number, and push them to your repository: - - git add . - git commit -m "first image" - git tag v0.1 - git push google master --tags - -### Windows - -1. (If running locally, not in Cloud Shell) Set up your Google credentials for Git in PowerShell: - - git config --global "credential.https://source.developers.google.com.helper" gcloud.cmd - -1. Add the `google` repository as a remote: - - git remote add google "https://source.developers.google.com/p/$env:PROJECT/r/windows-image-factory" - -1. 
Add your files, tag them with a version number, and push them to your repository: - - git add . - git commit -m "first image" - git tag v0.1 - git push google master --tags - -## View build progress - -You can view the standard output from both the staging VM and Packer to check on the build progress. After the Packer build completes successfully, it outputs -the newly created image: - - Step #1: Build 'googlecompute' finished. - Step #1: - Step #1: ==> Builds finished. The artifacts of successful builds are: - Step #1: --> googlecompute: A disk image was created: golden-windows-2020-05-05-554-54 - -Open the [**Cloud Build** page](https://console.cloud.google.com/cloud-build), find the build that is in progress, and click the link to view its progress. - -## Create a Compute Engine instance for the image in your Google Cloud project - -In this section, you test the Compute Engine image that Packer created by creating a new instance. - -### Linux - -1. Create a firewall rule to allow port 80 to test your new instance: - - gcloud compute firewall-rules create http --allow=tcp:80 \ - --target-tags=http-server --source-ranges=0.0.0.0/0 - -1. Create an instance using the new Linux image: - - gcloud compute instances create helloworld-from-factory \ - --image https://www.googleapis.com/compute/v1/projects/$PROJECT/global/images/helloworld-v01 \ - --tags=http-server --zone=$ZONE - -### Windows - -1. Open the [**Compute Engine** page](https://console.cloud.google.com/compute) in Cloud Console and navigate to **Images** to see the new image. - -1. Select the image and click **Create instance**. - -1. Complete the wizard to start the instance, ensuring that **Boot disk** is set to use the new custom image. - - -## Verifying the results - -In this section, you verify that your deployment has worked correctly. - -### Linux - -1. Wait a few minutes and open the browser to the IP address of the instance to see the special message. - -1. Retrieve the instance IP address: - - gcloud compute instances list --filter="name:helloworld*" --format="value(networkInterfaces[0].accessConfigs[0].natIP)" - -1. Go to the IP address in the browser and make sure that you see the `"Hello from the image factory!"` message. - - -### Windows - -1. Wait a few minutes until the Windows VM has completed the boot up process. - -1. [Connect to your instance using RDP.](https://cloud.google.com/compute/docs/instances/connecting-to-instance) - -1. If you need to generate a Windows password, follow - [these instructions](https://cloud.google.com/compute/docs/instances/windows/creating-passwords-for-windows-instances#generating_a_password). - -1. Verify that Git, Python, and 7-Zip have been installed successfully, matching the versions defined in the `packages.config` XML manifest. - - ![verifying packer windows build in cmd](https://storage.googleapis.com/gcp-community/tutorials/create-cloud-build-image-factory-using-packer/task12-windows-verify.png) - -## Cleaning up - -If you don't want to keep the resources after this tutorial, you can delete them. - -### Linux - -1. Delete the firewall rule, the instance, and the image: - - gcloud compute firewall-rules delete --quiet http - gcloud compute instances delete --quiet helloworld-from-factory - gcloud compute images delete --quiet helloworld-v01 - -1. Delete the Packer Cloud Build image: - - gcloud container images delete --quiet gcr.io/$PROJECT/packer --force-delete-tags - -1. 
Delete the repository: - - gcloud source repos delete --quiet helloworld-image-factory - - Only do this if you don't want to perform the tutorial in this project again. The repository name won't be usable - again for up to 7 days. - -### Windows - -1. Delete the firewall rule, the instance, and the image: - - gcloud compute firewall-rules delete --quiet http - gcloud compute instances delete --quiet helloworld-from-factory - gcloud compute images delete --quiet helloworld-v01 - -1. Delete the Packer Cloud Build image: - - gcloud container images delete --quiet gcr.io/$PROJECT/packer --force-delete-tags - -1. Delete the repository: - - gcloud source repos delete --quiet windows-image-factory - - Only do this if you don't want to perform the tutorial in this project again. The repository name won't be usable - again for up to 7 days. - -## Reference: Windows Packer scripts - -[**`cloudbuild.yaml`**](https://github.com/GoogleCloudPlatform/community/tree/master/tutorials/create-cloud-build-image-factory-using-packer/windows/cloudbuild.yaml) -contains the [build configuration](https://cloud.google.com/cloud-build/docs/build-config) for the Cloud Build service, which uses Packer to build a -new image using instructions within the `packer.json` file. - -[**`windows/packer.json`**](https://github.com/GoogleCloudPlatform/community/tree/master/tutorials/create-cloud-build-image-factory-using-packer/windows/packer.json) -contains the [googlecompute builder template](https://www.packer.io/docs/builders/googlecompute/) for creating a new image for use with Compute Engine. - -Because of the way Packer uses WinRM as the communicator to connect and configure Windows, this template achieves the following: - -- `"variables"` contains placeholder values such as `_PROJECT_ID` that are dynamically changed by Cloud Build sourced from both built-in variables (project) - and custom user variables (Secret Manager). By using `"source_image_family"`, Packer automatically retrieves the latest version available for - the machine image. -- Configures WinRM to use HTTPS for connecting Packer and the staging Windows VM (creates a temporary, local self-signed certificate). -- Using [Compute Engine metadata](https://cloud.google.com/compute/docs/startupscript#providing_a_startup_script_for_windows_instances) - `"windows-startup-script-cmd"`, temporarily creates a new local account `packer_user` on the Windows VM and adds it to local administrator group to provide - permissions for WinRM and installs the desired packages. -- Within the `"provisioners"` section, create a local copy of `packages.config` and `cleanup-packer.ps1` files in the staging Windows VM, to be used by - [Chocolatey](https://chocolatey.org/) and the `"windows-shutdown-script-ps1"` Compute Engine metadata to clean up when finished. -- Still within the `"provisioners"` section, run the PowerShell scripts for bootstrapping your Windows environment using Chocolatey. -- (Optional) You can replace the Chocolatey PowerShell scripts with your own custom bootstrap script, or pull/push configuration management tools such as - Ansible, Puppet, Chef, or PowerShell DSC. -- `GCESysprep -NoShutdown` is called as a way to seal the image using the optional `-NoShutDown` parameter to prevent the Windows environment from shutting - down and create a false positive, unhealthy signal back to Packer. Lifecycle needs to be managed by Packer to complete the image workflow. 
- -[**`windows/scripts/bootstrap-packer.ps1`**](https://github.com/GoogleCloudPlatform/community/tree/master/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/bootstrap-packer.ps1) -configures Packer to use an HTTPS connection for WinRM to secure communication between the staging VM and Packer host. The configuration made during -this script such as a local certificate, listener, and firewall are deleted by `cleanup-packer.ps1`. - -[**`windows/scripts/cleanup-packer.ps1`**](https://github.com/GoogleCloudPlatform/community/tree/master/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/cleanup-packer.ps1) is invoked as a shutdown script to remove the Chocolatey PowerShell binaries and the local user account for Packer, -undo WinRM configurations, and then remove the shutdown script itself. - -[**`windows/scripts/disable-uac.ps1`**](https://github.com/GoogleCloudPlatform/community/tree/master/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/disable-uac.ps1) installs the latest version of Chocolatey, a package management binary for PowerShell. - -[**`windows/scripts/packages.config`**](https://github.com/GoogleCloudPlatform/community/tree/master/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/packages.config) contains a list of packages in an XML manifest for Chocolatey to install. This is where you can define -[any supported packages](https://chocolatey.org/packages) to install, as well as versioning, options, and switches. For details, see the -[Chocolatey documentation](https://chocolatey.org/docs/commandsinstall#packagesconfig). - -[**`windows/scripts/run-chocolatey.ps1`**](https://github.com/GoogleCloudPlatform/community/tree/master/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/run-chocolatey.ps1) invokes Chocolatey to install the packages defined in the XML manifest, including error handling. Because some Windows -software requires a restart to complete the installation, this script allows it (exit code `3010`) as Packer will shut down and sysprep the image as the final -step. diff --git a/tutorials/create-cloud-build-image-factory-using-packer/install-website.sh b/tutorials/create-cloud-build-image-factory-using-packer/install-website.sh deleted file mode 100644 index 3074f349a6..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/install-website.sh +++ /dev/null @@ -1,12 +0,0 @@ -#!/bin/sh -sudo yum install -y epel-release -sudo yum install -y nginx -sudo chkconfig nginx on -LOCATION_OF_INDEX=/usr/share/nginx/html/index.html -sudo bash -c "cat <$LOCATION_OF_INDEX - -

Hello, from the image factory!

- - -A_VERY_SPECIAL_MESSAGE -" diff --git a/tutorials/create-cloud-build-image-factory-using-packer/packer-tutorial.png b/tutorials/create-cloud-build-image-factory-using-packer/packer-tutorial.png deleted file mode 100644 index cf57b0b894..0000000000 Binary files a/tutorials/create-cloud-build-image-factory-using-packer/packer-tutorial.png and /dev/null differ diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/cloudbuild.yaml b/tutorials/create-cloud-build-image-factory-using-packer/windows/cloudbuild.yaml deleted file mode 100644 index 1568c5ad40..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/windows/cloudbuild.yaml +++ /dev/null @@ -1,41 +0,0 @@ -# Perform a Packer build based on the `packer.json` configuration. This Packer -# build creates a GCE image. -# -# See README.md for invocation instructions. -steps: - # Retrieving secrets and placing into temporary file to reuse in later steps - - name: 'gcr.io/cloud-builders/gcloud' - entrypoint: 'bash' - args: - - '-c' - - | - echo "Retrieving secrets and placing into temporary file to reuse in later steps.." \ - && gcloud secrets versions access latest --secret=image_factory-image_family > image_factory-image_family.txt \ - && gcloud secrets versions access latest --secret=image_factory-image_name > image_factory-image_name.txt \ - && gcloud secrets versions access latest --secret=image_factory-machine_type > image_factory-machine_type.txt \ - && gcloud secrets versions access latest --secret=image_factory-region > image_factory-region.txt \ - && gcloud secrets versions access latest --secret=image_factory-zone > image_factory-zone.txt \ - && gcloud secrets versions access latest --secret=image_factory-network > image_factory-network.txt \ - && gcloud secrets versions access latest --secret=image_factory-tags > image_factory-tags.txt - - # Inject secret into packer template file and invoke packer build - - name: 'gcr.io/$PROJECT_ID/packer' - entrypoint: 'bash' - args: - - '-c' - - | - echo "Injecting secrets into packer template file.." \ - && sed -i "s/_PROJECT_ID/$PROJECT_ID/g" packer.json \ - && sed -i "s/_IMAGE_FAMILY/$(cat image_factory-image_family.txt)/g" packer.json \ - && sed -i "s/_IMAGE_NAME/$(cat image_factory-image_name.txt)/g" packer.json \ - && sed -i "s/_MACHINE_TYPE/$(cat image_factory-machine_type.txt)/g" packer.json \ - && sed -i "s/_REGION/$(cat image_factory-region.txt)/g" packer.json \ - && sed -i "s/_ZONE/$(cat image_factory-zone.txt)/g" packer.json \ - && sed -i "s/_NETWORK/$(cat image_factory-network.txt)/g" packer.json \ - && sed -i "s/_TAGS/$(cat image_factory-tags.txt)/g" packer.json \ - && echo "Invoking packer build.." 
\ - && packer build -debug -var project_id=$PROJECT_ID packer.json - -tags: ['windows-golden-image'] -timeout: '3600s' - diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/packer-win-tutorial.png b/tutorials/create-cloud-build-image-factory-using-packer/windows/packer-win-tutorial.png deleted file mode 100644 index f00716835a..0000000000 Binary files a/tutorials/create-cloud-build-image-factory-using-packer/windows/packer-win-tutorial.png and /dev/null differ diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/packer.json b/tutorials/create-cloud-build-image-factory-using-packer/windows/packer.json deleted file mode 100644 index 8c3033e509..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/windows/packer.json +++ /dev/null @@ -1,60 +0,0 @@ -{ - "variables": { - "project_id": "_PROJECT_ID", - "source_image_family": "_IMAGE_FAMILY", - "image_name": "_IMAGE_NAME", - "machine_type": "_MACHINE_TYPE", - "region": "_REGION", - "zone": "_ZONE", - "network_id": "_NETWORK", - "network-tags": "_TAGS" - }, - "builders": [{ - "type": "googlecompute", - "project_id": "{{ user `project_id` }}", - "machine_type": "{{ user `machine_type` }}", - "source_image_family": "{{ user `source_image_family` }}", - "region": "{{ user `region` }}", - "zone": "{{ user `zone` }}", - "network": "{{ user `network_id` }}", - "image_description": "{{ user `source_image_family` }}-{{ isotime \"2006-01-02-14-04\" }}", - "image_name": "{{ user `image_name` }}-{{ isotime \"2006-01-02-14-04\" }}", - "disk_size": 100, - "disk_type": "pd-ssd", - "on_host_maintenance": "TERMINATE", - "tags": "{{ user `network-tags` }}", - "communicator": "winrm", - "winrm_insecure": true, - "winrm_use_ssl": true, - "winrm_username": "packer_user", - "metadata": { - "windows-startup-script-cmd": "winrm quickconfig -quiet & net user /add packer_user & net localgroup administrators packer_user /add & winrm set winrm/config/service/auth @{Basic=\"true\"}", - "windows-shutdown-script-ps1": "C:/cleanup-packer.ps1" - } - }], - "provisioners": [{ - "type": "file", - "source": "./scripts/packages.config", - "destination": "C:/packages.config" - }, - { - "type": "file", - "source": "./scripts/cleanup-packer.ps1", - "destination": "C:/cleanup-packer.ps1" - }, - { - "type": "powershell", - "scripts": [ - "./scripts/disable-uac.ps1", - "./scripts/install-chocolatey.ps1", - "./scripts/run-chocolatey.ps1" - ] - }, - { - "type": "powershell", - "inline": [ - "GCESysprep -NoShutdown" - ] - } - ] -} \ No newline at end of file diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/bootstrap-packer.ps1 b/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/bootstrap-packer.ps1 deleted file mode 100644 index 656c449d0e..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/bootstrap-packer.ps1 +++ /dev/null @@ -1,35 +0,0 @@ -# https://docs.microsoft.com/en-us/windows/win32/winrm/winrm-powershell-commandlets - -Write-Output "+++ Running Bootstrap Script for setting up packer +++" - -Set-ExecutionPolicy Unrestricted -Scope LocalMachine -Force -ErrorAction Ignore - -# Don't set this before Set-ExecutionPolicy as it throws an error -$ErrorActionPreference = "stop" - -# Remove HTTP listener and creating a new self-signed cert for packer winrm connection -Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse - -# Creating new selfsigned cert for HTTPS connection -$certName = "packer" -$Cert = 
New-SelfSignedCertificate -CertstoreLocation Cert:\LocalMachine\My -DnsName $certName -FriendlyName $certName -New-Item -Path WSMan:\LocalHost\Listener -Transport HTTPS -Address * -CertificateThumbPrint $Cert.Thumbprint -Force - -# WinRM stuff -Write-Output "+++ Setting up WinRM+++ " - -#cmd.exe /c winrm quickconfig -q -$firewallRuleName = "WinRM" -cmd.exe /c winrm set "winrm/config" '@{MaxTimeoutms="1800000"}' -cmd.exe /c winrm set "winrm/config/winrs" '@{MaxMemoryPerShellMB="1024"}' -cmd.exe /c winrm set "winrm/config/service" '@{AllowUnencrypted="true"}' -cmd.exe /c winrm set "winrm/config/client" '@{AllowUnencrypted="true"}' -cmd.exe /c winrm set "winrm/config/service/auth" '@{Basic="true"}' -cmd.exe /c winrm set "winrm/config/client/auth" '@{Basic="true"}' -cmd.exe /c winrm set "winrm/config/service/auth" '@{CredSSP="true"}' -cmd.exe /c winrm set "winrm/config/listener?Address=*+Transport=HTTPS" "@{Port=`"5986`";Hostname=`"packer`";CertificateThumbprint=`"$($Cert.Thumbprint)`"}" -cmd.exe /c netsh advfirewall firewall set rule group="remote administration" new enable=yes -cmd.exe /c netsh advfirewall firewall add rule name=$firewallRuleName dir=in protocol=tcp localport=5986 action=allow -Stop-Service -Name winrm -Set-Service -Name winrm -StartupType Auto -Start-Service -Name winrm \ No newline at end of file diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/cleanup-packer.ps1 b/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/cleanup-packer.ps1 deleted file mode 100644 index d78905a97e..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/cleanup-packer.ps1 +++ /dev/null @@ -1,95 +0,0 @@ -function Remove-Chocolatey{ - <# - .SYNOPSIS - This function removes chocolatey binaries and local configs such as env var. - Also removes local copy of packages.config file that was used to bootstrap machine - #> - Write-Output "+++ Deleting Chocolatey package config file +++" - Remove-Item -Path C:\packages.config - - if (!$env:ChocolateyInstall) { - Write-Warning "The ChocolateyInstall environment variable was not found. `n Chocolatey is not detected as installed. Nothing to do" - return - } - if (!(Test-Path "$env:ChocolateyInstall")) { - Write-Warning "Chocolatey installation not detected at '$env:ChocolateyInstall'. `n Nothing to do." - return - } - - $userPath = [Microsoft.Win32.Registry]::CurrentUser.OpenSubKey('Environment').GetValue('PATH', '', [Microsoft.Win32.RegistryValueOptions]::DoNotExpandEnvironmentNames).ToString() - $machinePath = [Microsoft.Win32.Registry]::LocalMachine.OpenSubKey('SYSTEM\CurrentControlSet\Control\Session Manager\Environment\').GetValue('PATH', '', [Microsoft.Win32.RegistryValueOptions]::DoNotExpandEnvironmentNames).ToString() - - Write-Output "User PATH: " + $userPath | Out-File "C:\PATH_backups_ChocolateyUninstall.txt" -Encoding UTF8 -Force - Write-Output "Machine PATH: " + $machinePath | Out-File "C:\PATH_backups_ChocolateyUninstall.txt" -Encoding UTF8 -Force - - if ($userPath -like "*$env:ChocolateyInstall*") { - Write-Output "Chocolatey Install location found in User Path. Removing..." - # WARNING: This could cause issues after reboot where nothing is - # found if something goes wrong. In that case, look at the backed up - # files for PATH. 
- [System.Text.RegularExpressions.Regex]::Replace($userPath, [System.Text.RegularExpressions.Regex]::Escape("$env:ChocolateyInstall\bin") + '(?>;)?', '', [System.Text.RegularExpressions.RegexOptions]::IgnoreCase) | %{[System.Environment]::SetEnvironmentVariable('PATH', $_.Replace(";;",";"), 'User')} - } - - if ($machinePath -like "*$env:ChocolateyInstall*") { - Write-Output "Chocolatey Install location found in Machine Path. Removing..." - # WARNING: This could cause issues after reboot where nothing is - # found if something goes wrong. In that case, look at the backed up - # files for PATH. - [System.Text.RegularExpressions.Regex]::Replace($machinePath, [System.Text.RegularExpressions.Regex]::Escape("$env:ChocolateyInstall\bin") + '(?>;)?', '', [System.Text.RegularExpressions.RegexOptions]::IgnoreCase) | %{[System.Environment]::SetEnvironmentVariable('PATH', $_.Replace(";;",";"), 'Machine')} - } - - # Adapt for any services running in subfolders of ChocolateyInstall - $agentService = Get-Service -Name chocolatey-agent -ErrorAction SilentlyContinue - if ($agentService -and $agentService.Status -eq 'Running') { $agentService.Stop() } - # TODO: add other services here - - # delete the contents (remove -WhatIf to actually remove) - Remove-Item -Recurse -Force "$env:ChocolateyInstall" -WhatIf - - [System.Environment]::SetEnvironmentVariable("ChocolateyInstall", $null, 'User') - [System.Environment]::SetEnvironmentVariable("ChocolateyInstall", $null, 'Machine') - [System.Environment]::SetEnvironmentVariable("ChocolateyLastPathUpdate", $null, 'User') - [System.Environment]::SetEnvironmentVariable("ChocolateyLastPathUpdate", $null, 'Machine') -} - -function Remove-PackerUser{ - <# - .SYNOPSIS - This removes the local packer_user account used for packer winRM connection - #> - param( - [String] $userAccount # default, packer_user - ) - Write-Output "+++ Removing local user account for packer +++" - Remove-LocalUser -Name $userAccount -} - -function Remove-WinRMConfig { - <# - .SYNOPSIS - This undos the winrm config set up for packer. 
Removes local cert, listener, firewall rules and disables windows service from starting - #> - - Write-Output "+++ Removing Packer WinRM and required configs +++" - # Remove HTTP listener and deleting the self-signed cert for packer winrm connection - Remove-Item -Path WSMan:\Localhost\listener\listener* -Recurse - # Deleting selfsigned cert used for HTTPS connection - $certName = "packer" - Get-ChildItem -Path Cert:\LocalMachine\My | Where-Object { $_.FriendlyName -like $certName } | Remove-Item - # Closing WinRM HTTPS firewall - $firewallRuleName = "WinRM" - Remove-NetFirewallRule -DisplayName $firewallRuleName - Write-Output "+++ Disabling WinRM +++" - Disable-PSRemoting - # Disabling local winrm service from auto starting - Stop-Service -Name winrm - Set-Service -Name winrm -StartupType Manual -} - -# Kick off clean up script - -Remove-Chocolatey -Remove-PackerUser -userAccount "packer_user" -Remove-WinRMConfig -# Finally, delete the cleanup script itself -Remove-Item -Path $MyInvocation.MyCommand.Source -Force diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/disable-uac.ps1 b/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/disable-uac.ps1 deleted file mode 100644 index 212839377a..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/disable-uac.ps1 +++ /dev/null @@ -1,4 +0,0 @@ -Write-Output "+++ Disabling UAC… +++" - -New-ItemProperty -Path HKLM:Software\Microsoft\Windows\CurrentVersion\Policies\System -Name EnableLUA -PropertyType DWord -Value 0 -Force -New-ItemProperty -Path HKLM:Software\Microsoft\Windows\CurrentVersion\Policies\System -Name ConsentPromptBehaviorAdmin -PropertyType DWord -Value 0 -Force \ No newline at end of file diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/install-chocolatey.ps1 b/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/install-chocolatey.ps1 deleted file mode 100644 index b42eb9127e..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/install-chocolatey.ps1 +++ /dev/null @@ -1,4 +0,0 @@ -# ./scripts/chocolatey.ps1 -# Install Chocolatey -Write-Output "+++ Installing Chocolatey… +++" -Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')) \ No newline at end of file diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/packages.config b/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/packages.config deleted file mode 100644 index a0163c72ab..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/packages.config +++ /dev/null @@ -1,6 +0,0 @@ - - - - - - \ No newline at end of file diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/run-chocolatey.ps1 b/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/run-chocolatey.ps1 deleted file mode 100644 index 7a6a5c27a7..0000000000 --- a/tutorials/create-cloud-build-image-factory-using-packer/windows/scripts/run-chocolatey.ps1 +++ /dev/null @@ -1,30 +0,0 @@ -Write-Output "+++ Running Chocolatey… +++" - -# clean exit or reboot pending: https://chocolatey.org/docs/commandsinstall#exit-codes -$validExitCodes = 0, 3010 - -# Globally Auto confirm every action 
-$commandAutoConfirm = 'choco feature enable -n allowGlobalConfirmation' -$commandInstall = 'choco install C:\packages.config -y --no-progress' - -try -{ - Invoke-Expression -Command $commandAutoConfirm - Invoke-Expression -Command $commandInstall - - if ($LASTEXITCODE -notin $validExitCodes) - { - throw "Error encountered during package installation with status: $($LASTEXITCODE)" - } - else - { - Write-Output "" - Write-Output "Chocolatey packages has been installed successfully!" - } -} -catch -{ - throw "Error encountered during chocolatey operation: $($Error[0].Exception)" -} - - diff --git a/tutorials/create-cloud-build-image-factory-using-packer/windows/task12-windows-verify.png b/tutorials/create-cloud-build-image-factory-using-packer/windows/task12-windows-verify.png deleted file mode 100644 index 274c12e81a..0000000000 Binary files a/tutorials/create-cloud-build-image-factory-using-packer/windows/task12-windows-verify.png and /dev/null differ diff --git a/tutorials/datacatalog-tag-history/index.md b/tutorials/datacatalog-tag-history/index.md index 08f40c7548..9c072424de 100644 --- a/tutorials/datacatalog-tag-history/index.md +++ b/tutorials/datacatalog-tag-history/index.md @@ -10,7 +10,7 @@ Anant Damle | Solutions Architect | Google

Contributed by Google employees.

-This solution is intended for technical practitioners—such as data engineers and analysts—who are responsibile for metadata management, data governance, and +This solution is intended for technical practitioners—such as data engineers and analysts—who are responsible for metadata management, data governance, and related analytics. Historical metadata about your data warehouse is a treasure trove for discovering insights about changing data patterns, data quality, and user behavior. The diff --git a/tutorials/dataflow-dlp-to-datacatalog-tags/pom.xml b/tutorials/dataflow-dlp-to-datacatalog-tags/pom.xml index a5855266fe..c8ea10e0e4 100644 --- a/tutorials/dataflow-dlp-to-datacatalog-tags/pom.xml +++ b/tutorials/dataflow-dlp-to-datacatalog-tags/pom.xml @@ -62,7 +62,7 @@ 3.16.3 3.0.1 28.0-jre - 20160810 + 20230227 5.5.2 2.8.9 2.14 diff --git a/tutorials/deploy-dependency-track/index.md b/tutorials/deploy-dependency-track/index.md index 1b12684381..4e3bb4d0a0 100644 --- a/tutorials/deploy-dependency-track/index.md +++ b/tutorials/deploy-dependency-track/index.md @@ -25,7 +25,7 @@ This kind of system is useful in a number of scenarios: - Teams building and deploying software can submit SBOMs when new versions are deployed. - You can manually list dependencies for legacy systems. -Using Dependency-Track helps you to monitor and respond to vulnerabilites in components in your systems. +Using Dependency-Track helps you to monitor and respond to vulnerabilities in components in your systems. [Using components with known vulnerabilities](https://owasp.org/www-project-top-ten/2017/A9_2017-Using_Components_with_Known_Vulnerabilities) is one of the [top 10 web application security risks](https://owasp.org/www-project-top-ten/) identified by the Open Web Application Security Project (OWASP). If you have an inventory of components in use across your environment, then you can use resources such as the @@ -806,7 +806,7 @@ including the following: - **Use security and operations services**: Consider tools such as [Cloud Armor](https://cloud.google.com/armor) and [Google Cloud's operations suite](https://cloud.google.com/products/operations) for the ongoing security and operation of your system. -Having a model to track dependecies is a great first step. Configuring the system to notify you when a vulnerability pops up is even better. Check out the +Having a model to track dependencies is a great first step. Configuring the system to notify you when a vulnerability pops up is even better. Check out the [Dependency-Track notifications](https://docs.dependencytrack.org/integrations/notifications/) document for options. The webhooks model is a useful approach to automating responses. Also consider your processes and how your organization will respond when a vulnerability is reported. diff --git a/tutorials/deploy-ha-vpn-with-terraform/index.md b/tutorials/deploy-ha-vpn-with-terraform/index.md index 4d100ce95e..afc6a540c0 100644 --- a/tutorials/deploy-ha-vpn-with-terraform/index.md +++ b/tutorials/deploy-ha-vpn-with-terraform/index.md @@ -21,10 +21,10 @@ configuration on Google Cloud. ## Before you begin * This guide assumes that you are familiar with [Terraform](https://cloud.google.com/docs/terraform). Instructions provided in this guide - are based on the Google Cloud envrionment depicted in the + are based on the Google Cloud environment depicted in the [HA VPN interop guides](https://cloud.google.com/vpn/docs/how-to/interop-guides) and are only for testing purposes. 
-* See [Getting started with Terraform on Google Cloud](https://cloud.google.com/community/tutorials/getting-started-on-gcp-with-terraform) to set up your Terraform envrionment for Google Cloud.
+* See [Getting started with Terraform on Google Cloud](https://cloud.google.com/community/tutorials/getting-started-on-gcp-with-terraform) to set up your Terraform environment for Google Cloud.
* Ensure the you have a [service account](https://cloud.google.com/iam/docs/creating-managing-service-accounts) with [sufficient permissions](https://cloud.google.com/vpn/docs/how-to/creating-ha-vpn2#permissions) to deploy the resources @@ -40,7 +40,7 @@ configuration on Google Cloud. cd community/tutorials/deploy-ha-vpn-with-terraform/terraform -1. (optional) Change variable values in `gcp_variables.tf` for your envrionment. +1. (optional) Change variable values in `gcp_variables.tf` for your environment. 1. Run the following Terraform commands: diff --git a/tutorials/dlp-hybrid-inspect/src/main/java/com/example/dlp/HybridInspectSql.java b/tutorials/dlp-hybrid-inspect/src/main/java/com/example/dlp/HybridInspectSql.java index 4ec119800c..e9d4fc061c 100644 --- a/tutorials/dlp-hybrid-inspect/src/main/java/com/example/dlp/HybridInspectSql.java +++ b/tutorials/dlp-hybrid-inspect/src/main/java/com/example/dlp/HybridInspectSql.java @@ -304,7 +304,7 @@ private static Integer inspectSQLDb( System.out.println(); System.out.print(String.format(">> [%s,%s:%s]: Starting Inspection", database.databaseInstanceDescription, database.databaseInstanceServer, database.databaseName)); - //retreive the password from Secret Manager + //retrieve the password from Secret Manager final String databasePassword = accessSecretVersion(ServiceOptions.getDefaultProjectId(), database.getSecretManagerResourceName(),"latest"); @@ -323,7 +323,7 @@ private static Integer inspectSQLDb( String dbVersion = String.format("%s[%s]", dbMetadata.getDatabaseProductName(), dbMetadata.getDatabaseProductVersion()); - // this will list out all tables in the curent schama + // this will list all tables in the current schema ResultSet ListTablesResults = dbMetadata .getTables(conn.getCatalog(), null, "%", new String[]{"TABLE"}); @@ -540,10 +540,10 @@ public static String accessSecretVersion(String projectId, String secretId, Stri } /** - * Because this script may be connecting to mulitple JDBC drivers in the same run, this method helps ensure that the drivers are registered + * Because this script may be connecting to multiple JDBC drivers in the same run, this method helps ensure that the drivers are registered */ private static java.sql.Driver getJdbcDriver (String databaseType){ - // Based on the SQL database type, reguster the driver. Note the pom.xml must have a + // Based on the SQL database type, register the driver. Note the pom.xml must have a // matching driver for these to work. This addresses driver not found issues when // trying to scan more than one JDBC type. 
try { diff --git a/tutorials/dlp-to-datacatalog-tags/src/main/java/com/example/dlp/DlpDataCatalogTagsTutorial.java b/tutorials/dlp-to-datacatalog-tags/src/main/java/com/example/dlp/DlpDataCatalogTagsTutorial.java index d34d197ae0..de189bd776 100644 --- a/tutorials/dlp-to-datacatalog-tags/src/main/java/com/example/dlp/DlpDataCatalogTagsTutorial.java +++ b/tutorials/dlp-to-datacatalog-tags/src/main/java/com/example/dlp/DlpDataCatalogTagsTutorial.java @@ -496,7 +496,7 @@ private static List getMaxRows(List rows, int startRow, int headerCount) throws return subRows; } - // this methods calcualtes the total bytes of a list of rows. + // this method calculates the total bytes of a list of rows. public static int getBytesFromList(List list) throws IOException { java.io.ByteArrayOutputStream baos = new java.io.ByteArrayOutputStream(); java.io.ObjectOutputStream out = new java.io.ObjectOutputStream(baos); diff --git a/tutorials/docker-gcplogs-driver/index.md b/tutorials/docker-gcplogs-driver/index.md index 20c22f7a82..9e3df1a17e 100644 --- a/tutorials/docker-gcplogs-driver/index.md +++ b/tutorials/docker-gcplogs-driver/index.md @@ -174,7 +174,7 @@ you can skip this section.* From [Cloud Shell](https://cloud.google.com/shell/docs/quickstart) or a development machine where you have [installed and initialized the Cloud SDK](https://cloud.google.com/sdk/docs/), -use the [gcloud compute intances add-metadata](https://cloud.google.com/sdk/gcloud/reference/compute/instances/add-metadata) +use the [gcloud compute instances add-metadata](https://cloud.google.com/sdk/gcloud/reference/compute/instances/add-metadata) command to add the `user-data` key to your instance. 1. Create a file `instance-config.txt` with the following contents: diff --git a/tutorials/elixir-phoenix-on-kubernetes-google-container-engine/index.md b/tutorials/elixir-phoenix-on-kubernetes-google-container-engine/index.md index 281e637bbd..38fa245199 100644 --- a/tutorials/elixir-phoenix-on-kubernetes-google-container-engine/index.md +++ b/tutorials/elixir-phoenix-on-kubernetes-google-container-engine/index.md @@ -702,7 +702,7 @@ building a new image and pointing your deployment to it. kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello:v1 **Note:** If a deployment gets stuck because an error in the image prevents -it from starting successfuly, you can recover by undoing the rollout. See the +it from starting successfully, you can recover by undoing the rollout. See the [Kubernetes deployment documentation](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) for more info. diff --git a/tutorials/enforce-an-identity-root-of-trust-in-your-gcp-environment/index.md b/tutorials/enforce-an-identity-root-of-trust-in-your-gcp-environment/index.md index f291162d47..e8df337421 100644 --- a/tutorials/enforce-an-identity-root-of-trust-in-your-gcp-environment/index.md +++ b/tutorials/enforce-an-identity-root-of-trust-in-your-gcp-environment/index.md @@ -68,7 +68,7 @@ authorization of Google Cloud resources is governed by Cloud IAM policies. You can provision users and groups in the [Admin console](https://admin.google.com), which lets you manage settings and identities for Google Cloud products like Google Cloud Platform and G Suite. If your users only need access to Google Cloud Platform, you can give them [Cloud Identity licences](https://support.google.com/cloudidentity/answer/7384684), which exist -in the Free and Premium tiers. There is a default limit of 50 free users for each Cloud Identity account. 
To rasie this +in the Free and Premium tiers. There is a default limit of 50 free users for each Cloud Identity account. To raise this limit, contact Google Cloud support. To manage your identities within the Admin console, you can designate diff --git a/tutorials/event-driven-serverless-scheduling-framework-dlp/index.md b/tutorials/event-driven-serverless-scheduling-framework-dlp/index.md index 204b9fb82e..6822683ffd 100644 --- a/tutorials/event-driven-serverless-scheduling-framework-dlp/index.md +++ b/tutorials/event-driven-serverless-scheduling-framework-dlp/index.md @@ -61,7 +61,7 @@ The following diagram shows the architecture of the solution: - The first topic is used by Cloud Scheduler to start a scheduled job. - The second topic is used by the Cloud DLP API to notify when a scanning job is complete. -1. Create two Cloud Functions with the trigger type **Cloud Pub/Sub** by following the instructons in the +1. Create two Cloud Functions with the trigger type **Cloud Pub/Sub** by following the instructions in the [Cloud Functions quickstart guide](https://cloud.google.com/functions/docs/quickstart-python). - Make the first Cloud Function subscribe to the first Pub/Sub topic so that the function is triggered when Cloud Scheduler starts a scheduled job. Add both diff --git a/tutorials/exporting-stackdriver-elasticcloud/index.md b/tutorials/exporting-stackdriver-elasticcloud/index.md index e3d46a0142..b916363441 100644 --- a/tutorials/exporting-stackdriver-elasticcloud/index.md +++ b/tutorials/exporting-stackdriver-elasticcloud/index.md @@ -34,7 +34,7 @@ The high-level steps in this section: Log in or sign up for [Google Cloud](https://cloud.google.com), then open the [Cloud Console](https://console.cloud.google.com). -The examples in this document use the `gcloud` command-line inteface. Google Cloud APIs must be enabled through the +The examples in this document use the `gcloud` command-line interface. Google Cloud APIs must be enabled through the [Services and APIs page](https://console.cloud.google.com/apis/dashboard) in the console before they can be used with `gcloud`. To perform the steps in this tutorial, enable the following APIs: diff --git a/tutorials/gcp-cos-basic-fim/scan.sh b/tutorials/gcp-cos-basic-fim/scan.sh index e1bc2f1c61..2dbc4eebc3 100755 --- a/tutorials/gcp-cos-basic-fim/scan.sh +++ b/tutorials/gcp-cos-basic-fim/scan.sh @@ -79,7 +79,7 @@ mkdir -p $DATDIR $TMPDIR $LOGDIR # Fail fast if already running if [ -f "$LOCKFILE" ];then - echo "A scan is already in progess." | tee -a $LOGFILE + echo "A scan is already in progress." | tee -a $LOGFILE exit fi touch $LOCKFILE diff --git a/tutorials/generate-logs-scale/index.md b/tutorials/generate-logs-scale/index.md index ac30151cad..556c53b5c4 100644 --- a/tutorials/generate-logs-scale/index.md +++ b/tutorials/generate-logs-scale/index.md @@ -85,7 +85,7 @@ It may take a few minutes for the APIs to be enabled. ## Set environment variables -Run the following comands in the Cloud Console to set environment variables. Replace the values for `PROJECT_ID`, +Run the following commands in the Cloud Console to set environment variables. Replace the values for `PROJECT_ID`, `TOPIC_NAME`, `SUBSCRIPTION_NAME`, and `CLUSTER_NAME`. 
REGION=us-central1 @@ -222,7 +222,7 @@ After the test starts, you can see statistics in Locust, as shown in the followi ## Monitor GKE cluster utilization -To check the utilization of you GKE cluster in the Kubenetes Monitoring dashboard, go to +To check the utilization of your GKE cluster in the Kubernetes Monitoring dashboard, go to **Menu > Monitoring > Dashboards > Kubernetes Engine**. [Go to the Kubernetes Engine dashboard.](https://console.cloud.google.com/monitoring/dashboards/resourceList/kubernetes) diff --git a/tutorials/generate-logs-scale/locust-docker-image/locust-tasks/requirements.txt b/tutorials/generate-logs-scale/locust-docker-image/locust-tasks/requirements.txt index 166f82b745..dd5d796c95 100644 --- a/tutorials/generate-logs-scale/locust-docker-image/locust-tasks/requirements.txt +++ b/tutorials/generate-logs-scale/locust-docker-image/locust-tasks/requirements.txt @@ -1,7 +1,7 @@ certifi==2022.12.7 chardet==3.0.4 Click==7.0 -Flask==1.0.2 +Flask==2.3.2 gevent==1.4.0 greenlet==0.4.15 idna==2.8 @@ -12,7 +12,7 @@ MarkupSafe==1.1.1 msgpack==0.6.1 msgpack-python==0.5.6 pyzmq==18.0.1 -requests==2.21.0 +requests==2.31.0 six==1.12.0 urllib3==1.26.5 Werkzeug==2.2.3 diff --git a/tutorials/github-auto-assign-reviewers-cloud-functions/index.md b/tutorials/github-auto-assign-reviewers-cloud-functions/index.md index d0330b6c69..d627c3d084 100644 --- a/tutorials/github-auto-assign-reviewers-cloud-functions/index.md +++ b/tutorials/github-auto-assign-reviewers-cloud-functions/index.md @@ -15,7 +15,7 @@ opened. The Cloud Function is implemented in [Node.js][node]. The sample Cloud Function is triggered by webhook request from GitHub when a pull request is opened, and then attempts to assign to the pull request the reviewer with the smallest review workload from a supplied list of eligible -reviewers. The review workload of the eligble reviewers is inferred from the +reviewers. The review workload of the eligible reviewers is inferred from the reviews that have already been assigned to them on other open pull requests in the repository. @@ -137,7 +137,7 @@ const url = require('url'); const settings = require('./settings.json'); /** - * Assigns a reviewer to a new pull request from a list of eligble reviewers. + * Assigns a reviewer to a new pull request from a list of eligible reviewers. * Reviewers with the least assigned reviews on open pull requests will be * prioritized for assignment. * @@ -222,7 +222,7 @@ function validateRequest (req) { ### Retrieving all open pull requests -In order to figure out how many pull requests the eligible recievers are already +In order to figure out how many pull requests the eligible reviewers are already reviewing, you need to retrieve all of the repository's open pull requests. Add a GitHub API helper function to your `index.js` file: @@ -323,8 +323,8 @@ function getReviewsForPullRequests (pullRequests) { ### Calculating the current workloads of all reviewers Now that you have the open pull requests and their reviews, you can calculate -the current review workload of eligble receivers. The following function figures -out how many reviews are already assigned to the eligble reviewers. It then +the current review workload of eligible reviewers. The following function figures +out how many reviews are already assigned to the eligible reviewers. It then sorts the reviewers by least-assigned reviews to most-assigned reviews. 
Add it to your `index.js` file: diff --git a/tutorials/gke-networking-fundamentals/index.md b/tutorials/gke-networking-fundamentals/index.md index a264fcd037..3badfbd4f8 100644 --- a/tutorials/gke-networking-fundamentals/index.md +++ b/tutorials/gke-networking-fundamentals/index.md @@ -698,7 +698,7 @@ to get an understanding of how we might answer the following types of questions: - Which bridge in the host's default namespace is the pod attached to? - Which port on the bridge is the pod's MAC address learned on? -- What is each namepace's next hop to the other? +- What is each namespace's next hop to the other? #### Exposing the container diff --git a/tutorials/gke-node-agent-metrics-cloud-monitoring/index.md b/tutorials/gke-node-agent-metrics-cloud-monitoring/index.md index 13e7e7e046..7bc68d3d90 100644 --- a/tutorials/gke-node-agent-metrics-cloud-monitoring/index.md +++ b/tutorials/gke-node-agent-metrics-cloud-monitoring/index.md @@ -50,7 +50,7 @@ needs. The files for this tutorial are in the [`/tutorials/gke-node-agent-metrics-cloud-monitoring`](https://github.com/GoogleCloudPlatform/community/blob/master/tutorials/gke-node-agent-metrics-cloud-monitoring) directory. -## Build the container iamge +## Build the container image 1. Update `cloudbuild.yaml` by replacing the following values: diff --git a/tutorials/https-load-balancing-nginx/index.md b/tutorials/https-load-balancing-nginx/index.md index d8ed87f32a..ec153e18d7 100644 --- a/tutorials/https-load-balancing-nginx/index.md +++ b/tutorials/https-load-balancing-nginx/index.md @@ -402,7 +402,7 @@ best practices. To harden your SSL/TLS configuration: 1. Set the `ssl_prefer_server_ciphers` directive to specify that server ciphers - should be prefered over client ciphers: + should be preferred over client ciphers: ssl_prefer_server_ciphers on; diff --git a/tutorials/jmeter-spanner-performance-test/index.md b/tutorials/jmeter-spanner-performance-test/index.md index fc1c9bc2e6..5f7698f47b 100644 --- a/tutorials/jmeter-spanner-performance-test/index.md +++ b/tutorials/jmeter-spanner-performance-test/index.md @@ -260,7 +260,7 @@ The following screnshot shows an example thread group configuration: ![drawing](https://storage.googleapis.com/gcp-community/tutorials/jmeter-spanner-performance-test/03_thread_groups.png) -If you want a thread group to run for a given duration, then you can change the beahvior as shown in the following screenshot: +If you want a thread group to run for a given duration, then you can change the behavior as shown in the following screenshot: ![drawing](https://storage.googleapis.com/gcp-community/tutorials/jmeter-spanner-performance-test/04_thread_groups_2.png) diff --git a/tutorials/julia-jupyter-notebook-server/index.md b/tutorials/julia-jupyter-notebook-server/index.md index b5603d88f1..5235381a6f 100644 --- a/tutorials/julia-jupyter-notebook-server/index.md +++ b/tutorials/julia-jupyter-notebook-server/index.md @@ -26,7 +26,7 @@ concerns in compute intensive problem domains. Jupyter notebooks are an increasingly common mechanism for collaboration around, and delivery of, scientific information processing solutions. While originally constructed around Python, Jupyter now supports the installation of additional "kernels", *e.g.* R, -Scala, and Julia. While this tutorial is specifc to Julia, it would be easy to +Scala, and Julia. While this tutorial is specific to Julia, it would be easy to modify to add a different kernel to the resulting notebook server. 
## Objectives diff --git a/tutorials/kotlin-springboot-compute-engine.md b/tutorials/kotlin-springboot-compute-engine.md index b3ec21f6ba..4a040c3eca 100644 --- a/tutorials/kotlin-springboot-compute-engine.md +++ b/tutorials/kotlin-springboot-compute-engine.md @@ -156,7 +156,7 @@ and copy the following content to it: #!/bin/sh - # Set the metadata server to the get projct id + # Use the metadata server to get the project id PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google") BUCKET=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/attributes/BUCKET" -H "Metadata-Flavor: Google") diff --git a/tutorials/kubernetes-engine-customize-fluentbit/index.md b/tutorials/kubernetes-engine-customize-fluentbit/index.md index e6a81a5c3a..3775ffe619 100644 --- a/tutorials/kubernetes-engine-customize-fluentbit/index.md +++ b/tutorials/kubernetes-engine-customize-fluentbit/index.md @@ -204,7 +204,7 @@ In this section, you change `kubernetes/fluentbit-daemonset.yaml` to mount the ` kubectl rollout status ds/fluent-bit --namespace=logging - When it completes, you should see the follwoing message: + When it completes, you should see the following message: daemon set "fluent-bit" successfully rolled out diff --git a/tutorials/ml-pipeline-with-workflows/babyweight_model/trainer/model.py b/tutorials/ml-pipeline-with-workflows/babyweight_model/trainer/model.py index e11f0c303d..0b304d3b7d 100644 --- a/tutorials/ml-pipeline-with-workflows/babyweight_model/trainer/model.py +++ b/tutorials/ml-pipeline-with-workflows/babyweight_model/trainer/model.py @@ -49,7 +49,7 @@ def read_dataset(data_dir, prefix, pattern, batch_size=512, eval=False): def get_wide_deep(): - # defin model inputs + # define model inputs inputs = {} inputs['is_male'] = layers.Input(shape=(), name='is_male', dtype='string') inputs['plurality'] = layers.Input(shape=(), name='plurality', dtype='string') diff --git a/tutorials/ml-pipeline-with-workflows/services/preprocess/main.py b/tutorials/ml-pipeline-with-workflows/services/preprocess/main.py index 843419ef0f..57d809a56a 100644 --- a/tutorials/ml-pipeline-with-workflows/services/preprocess/main.py +++ b/tutorials/ml-pipeline-with-workflows/services/preprocess/main.py @@ -33,7 +33,7 @@ @app.route('/') def index(): - return 'A service to Submit a traing job for the babyweight-keras example. ' + return 'A service to submit a training job for the babyweight-keras example. ' @app.route('/api/v1/job/', methods=['GET']) diff --git a/tutorials/ml-pipeline-with-workflows/services/train/main.py b/tutorials/ml-pipeline-with-workflows/services/train/main.py index 784cea3875..f4e1f26ba2 100644 --- a/tutorials/ml-pipeline-with-workflows/services/train/main.py +++ b/tutorials/ml-pipeline-with-workflows/services/train/main.py @@ -33,7 +33,7 @@ @app.route('/') def index(): - return 'A service to Submit a traing job for the babyweight-keras example. ' + return 'A service to submit a training job for the babyweight-keras example. 
' @app.route('/api/v1/job/', methods=['GET']) diff --git a/tutorials/nginx-ingress-gke/index.md b/tutorials/nginx-ingress-gke/index.md index 194aa309f2..185556df23 100644 --- a/tutorials/nginx-ingress-gke/index.md +++ b/tutorials/nginx-ingress-gke/index.md @@ -285,7 +285,7 @@ method can also be forced by setting the annotation's value to `gce`: Deploying multiple Ingress controllers of different types (for example, both `nginx` and `gce`) and not specifying a class annotation will result in all controllers fighting to satisfy the Ingress, and all of them racing to update the Ingress status field in confusing ways. For more information, see -[Multipe Ingress controllers](https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/). +[Multiple Ingress controllers](https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/). 1. Create a simple Ingress Resource YAML file that uses the NGINX Ingress Controller and has one path rule defined: diff --git a/tutorials/pci-tokenizer/examples/requirements.txt b/tutorials/pci-tokenizer/examples/requirements.txt index a743bbe341..2c24336eb3 100644 --- a/tutorials/pci-tokenizer/examples/requirements.txt +++ b/tutorials/pci-tokenizer/examples/requirements.txt @@ -1 +1 @@ -requests==2.27.1 +requests==2.31.0 diff --git a/tutorials/pci-tokenizer/index.js b/tutorials/pci-tokenizer/index.js index 92de9d8388..e87be544d5 100644 --- a/tutorials/pci-tokenizer/index.js +++ b/tutorials/pci-tokenizer/index.js @@ -1,2 +1,2 @@ -// Boostrap for Cloud Functions +// Bootstrap for Cloud Functions require('./src/server.js'); diff --git a/tutorials/pci-tokenizer/package.json b/tutorials/pci-tokenizer/package.json index 558fe0c76d..4b6ae602f0 100644 --- a/tutorials/pci-tokenizer/package.json +++ b/tutorials/pci-tokenizer/package.json @@ -11,6 +11,7 @@ "dependencies": { "@google-cloud/dlp": "^1.9", "@google-cloud/kms": "^1.6", + "body-parser": "^1.20.2", "config": "^3.2", "express": "^4.17", "google-auth-library": "^5.7", diff --git a/tutorials/pci-tokenizer/src/app.js b/tutorials/pci-tokenizer/src/app.js index 4e26429248..c18df41535 100644 --- a/tutorials/pci-tokenizer/src/app.js +++ b/tutorials/pci-tokenizer/src/app.js @@ -1,5 +1,5 @@ /** -Main applicaiton script for the card data tokenizer. Called by server.js. +Main application script for the card data tokenizer. Called by server.js. See ../index.md for usage info and Apache 2.0 license */ diff --git a/tutorials/prestashop-gke/php-nginx/7.3-fpm-alpine/config/php-fpm/php-fpm.d/www.conf b/tutorials/prestashop-gke/php-nginx/7.3-fpm-alpine/config/php-fpm/php-fpm.d/www.conf index 3da2fcfb18..4b01df70bb 100644 --- a/tutorials/prestashop-gke/php-nginx/7.3-fpm-alpine/config/php-fpm/php-fpm.d/www.conf +++ b/tutorials/prestashop-gke/php-nginx/7.3-fpm-alpine/config/php-fpm/php-fpm.d/www.conf @@ -70,7 +70,7 @@ listen = 127.0.0.1:9000 ; process.priority = -19 ; Set the process dumpable flag (PR_SET_DUMPABLE prctl) even if the process user -; or group is differrent than the master process user. It allows to create process +; or group is different than the master process user. It allows to create process ; core dump and ptrace the process for the pool user. 
; Default Value: no ; process.dumpable = yes @@ -269,13 +269,13 @@ pm.max_spare_servers = 3 ; %d: time taken to serve the request ; it can accept the following format: ; - %{seconds}d (default) -; - %{miliseconds}d +; - %{milliseconds}d ; - %{mili}d ; - %{microseconds}d ; - %{micro}d ; %e: an environment variable (same as $_ENV or $_SERVER) ; it must be associated with embraces to specify the name of the env -; variable. Some exemples: +; variable. Some examples: ; - server specifics like: %{REQUEST_METHOD}e or %{SERVER_PROTOCOL}e ; - HTTP headers like: %{HTTP_HOST}e or %{HTTP_USER_AGENT}e ; %f: script filename @@ -374,7 +374,7 @@ pm.max_spare_servers = 3 ; Redirect worker stdout and stderr into main error log. If not set, stdout and ; stderr will be redirected to /dev/null according to FastCGI specs. -; Note: on highloaded environement, this can cause some delay in the page +; Note: on highloaded environment, this can cause some delay in the page ; process time (several ms). ; Default Value: no ;catch_workers_output = yes diff --git a/tutorials/private-forseti-with-scc-integration/index.md b/tutorials/private-forseti-with-scc-integration/index.md index 0e1bcb2a2e..6864b609aa 100644 --- a/tutorials/private-forseti-with-scc-integration/index.md +++ b/tutorials/private-forseti-with-scc-integration/index.md @@ -296,7 +296,7 @@ You must be logged in with the super admin account for the steps in this section For details of domain-wide delegation, see [Enable domain-wide delegation in G Suite](https://forsetisecurity.org/docs/latest/configure/inventory/gsuite.html) in -the Forseti documentaton. +the Forseti documentation. 1. Navigate to [**IAM & admin > Service Account** page](https://console.cloud.google.com/iam-admin/serviceaccounts) on the `forseti` project. 1. Find the `Forseti Server` service account, click the more icon (three dots), and then click **Edit**. Note the service @@ -502,7 +502,7 @@ page: ## Conclusion -This gives you a production-ready base intallation of Forseti. However, it's important to note that you still need to create +This gives you a production-ready base installation of Forseti. However, it's important to note that you still need to create an organizaton-specific configuration. Typically, you need to refine the base rules to remove the noise and catch use-cases that are specific to your organization (for example, allow firewall rules opening SSH and RDP traffic only for your defined IP ranges). 
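The base rules encode exactly the kind of check mentioned above. As a rough illustration of what such a check amounts to (a standalone Python sketch against the Compute API, not Forseti's own rule engine, and with a placeholder project ID), you could flag firewall rules that open SSH or RDP to the internet like this:

    from googleapiclient import discovery

    RISKY_PORTS = {"22", "3389"}  # SSH and RDP

    def find_open_ssh_rdp(project_id):
        """Return names of firewall rules that allow TCP 22/3389 from 0.0.0.0/0."""
        compute = discovery.build("compute", "v1")
        findings = []
        rules = compute.firewalls().list(project=project_id).execute()
        for rule in rules.get("items", []):
            if "0.0.0.0/0" not in rule.get("sourceRanges", []):
                continue
            for allowed in rule.get("allowed", []):
                # A production check would also expand port ranges such as "20-25".
                if allowed.get("IPProtocol") == "tcp" and RISKY_PORTS & set(allowed.get("ports", [])):
                    findings.append(rule["name"])
        return findings

    print(find_open_ssh_rdp("my-forseti-project"))  # placeholder project ID

Rules that pass a check like this for your approved IP ranges can then be excluded from findings, which is the kind of organization-specific tuning described above.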
diff --git a/tutorials/secrets-manager-python/py-secrets-manager/currencyapp/requirements.txt b/tutorials/secrets-manager-python/py-secrets-manager/currencyapp/requirements.txt index e55e7b7431..14ada94e14 100644 --- a/tutorials/secrets-manager-python/py-secrets-manager/currencyapp/requirements.txt +++ b/tutorials/secrets-manager-python/py-secrets-manager/currencyapp/requirements.txt @@ -1,6 +1,6 @@ pylint==2.6.0 google-cloud==0.34.0 -Flask==1.1.2 +Flask==2.3.2 google-cloud-secret-manager==2.1.0 alpha-vantage==2.3.1 pandas==1.2.0 diff --git a/tutorials/securing-gcs-static-website/flask_login/requirements.txt b/tutorials/securing-gcs-static-website/flask_login/requirements.txt index 8345069d88..2ed4f55b08 100644 --- a/tutorials/securing-gcs-static-website/flask_login/requirements.txt +++ b/tutorials/securing-gcs-static-website/flask_login/requirements.txt @@ -1,5 +1,5 @@ click==7.1.2 -Flask==1.1.2 +Flask==2.3.2 itsdangerous==1.1.0 Jinja2==2.11.3 MarkupSafe==1.1.1 diff --git a/tutorials/securing-gcs-static-website/index.md b/tutorials/securing-gcs-static-website/index.md index 084df61ac0..2fa1c3ca40 100644 --- a/tutorials/securing-gcs-static-website/index.md +++ b/tutorials/securing-gcs-static-website/index.md @@ -155,6 +155,16 @@ Set environment variables that you use throughout the tutorial: This is a demonstration app using Vue.js. You can ignore any warnings from `npm`. + If you get the following error, see the suggestions in [this thread](https://github.com/GoogleCloudPlatform/community/issues/2364) + for possible solutions: + + ``` + library: 'digital envelope routines', + reason: 'unsupported', + code: 'ERR_OSSL_EVP_UNSUPPORTED' + ``` + + 1. Create a bucket: gsutil mb -b on gs://$BUCKET_NAME @@ -165,7 +175,7 @@ Set environment variables that you use throughout the tutorial: gsutil rsync -R dist/ gs://$BUCKET_NAME - For infomation on the `gsutil rsync` command, see [the documentation](https://cloud.google.com/storage/docs/gsutil/commands/rsync). + For information on the `gsutil rsync` command, see [the documentation](https://cloud.google.com/storage/docs/gsutil/commands/rsync). 1. 
Set the `MainPageSuffix` property with the `-m` flag and the `NotFoundPage` with the `-e` flag: diff --git a/tutorials/serverless-grafana-with-iap/code/main.tf b/tutorials/serverless-grafana-with-iap/code/main.tf index 2b58243d50..306fb98830 100644 --- a/tutorials/serverless-grafana-with-iap/code/main.tf +++ b/tutorials/serverless-grafana-with-iap/code/main.tf @@ -148,6 +148,7 @@ resource "google_cloud_run_service" "default" { } resource "google_cloud_run_service_iam_member" "allowAllUsers" { + project = data.google_project.project.project_id service = google_cloud_run_service.default.name location = google_cloud_run_service.default.location role = "roles/run.invoker" @@ -170,10 +171,6 @@ locals { GF_AUTH_JWT_JWK_SET_URL = "https://www.gstatic.com/iap/verify/public_key-jwk" GF_AUTH_JWT_EXPECTED_CLAIMS = "{\"iss\": \"https://cloud.google.com/iap\"}" GF_AUTH_JWT_AUTO_SIGN_UP = "true" - GF_AUTH_PROXY_ENABLED = "true" - GF_AUTH_PROXY_HEADER_NAME = "X-Goog-Authenticated-User-Email" - GF_AUTH_PROXY_HEADER_PROPERTY = "email" - GF_AUTH_PROXY_AUTO_SIGN_UP = "true" GF_USERS_AUTO_ASSIGN_ORG_ROLE = "Viewer" GF_USERS_VIEWERS_CAN_EDIT = "true" GF_USERS_EDITORS_CAN_ADMIN = "false" diff --git a/tutorials/serverless-grafana-with-iap/index.md b/tutorials/serverless-grafana-with-iap/index.md index fc0daede99..6ccfebf989 100644 --- a/tutorials/serverless-grafana-with-iap/index.md +++ b/tutorials/serverless-grafana-with-iap/index.md @@ -216,3 +216,12 @@ To delete the project, do the following: 1. In the Cloud Console, go to the [Projects page](https://console.cloud.google.com/iam-admin/projects). 1. In the project list, select the project you want to delete and click **Delete**. 1. In the dialog, type the project ID, and then click **Shut down** to delete the project. + +## Update regarding changes due to Grafana 9.2 + +There have been configuration behavior changes in Grafana 9.2 that affect this tutorial. For details, see +[the discussion in this GitHub issue](https://github.com/GoogleCloudPlatform/community/pull/2288#issuecomment-1469728639). +Because we recommend that you verify the proper signing of the token, we removed the proxy config option. +This could mean that if you are updating from the previous configuration to the updated configuration in +this tutorial, the provider of the users in your `user_auth` will change, and permissions and roles might +not be carried over. 
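If you verify the IAP-signed token yourself, the google-auth library can perform the signature check. The following is a minimal sketch, assuming the google-auth package is installed; the audience string in the trailing comment is illustrative and must be replaced with the value for your own backend service:

    from google.auth.transport import requests
    from google.oauth2 import id_token

    def user_from_iap_jwt(iap_jwt, expected_audience):
        """Verify the x-goog-iap-jwt-assertion header value and return the caller."""
        decoded = id_token.verify_token(
            iap_jwt,
            requests.Request(),
            audience=expected_audience,
            certs_url="https://www.gstatic.com/iap/verify/public_key",
        )
        return decoded["sub"], decoded["email"]

    # Illustrative audience format for a backend service:
    # "/projects/PROJECT_NUMBER/global/backendServices/SERVICE_ID"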
diff --git a/tutorials/serverless-static-ip/cloud-run/requirements.txt b/tutorials/serverless-static-ip/cloud-run/requirements.txt index 9b4f777532..97c4b121a9 100644 --- a/tutorials/serverless-static-ip/cloud-run/requirements.txt +++ b/tutorials/serverless-static-ip/cloud-run/requirements.txt @@ -1,3 +1,3 @@ -Flask==1.1.2 -requests==2.25.1 +Flask==2.3.2 +requests==2.31.0 gunicorn==20.0.4 \ No newline at end of file diff --git a/tutorials/speech2srt/speech2srt.py b/tutorials/speech2srt/speech2srt.py index 6aac69572a..e2e2ecfea9 100644 --- a/tutorials/speech2srt/speech2srt.py +++ b/tutorials/speech2srt/speech2srt.py @@ -89,7 +89,7 @@ def break_sentences(args, subs, alternative): def write_srt(args, subs): srt_file = args.out_file + ".srt" print("Writing {} subtitles to: {}".format(args.language_code, srt_file)) - f = open(srt_file, 'w') + f = open(srt_file, 'w', encoding="utf-8") f.writelines(srt.compose(subs)) f.close() return @@ -98,7 +98,7 @@ def write_srt(args, subs): def write_txt(args, subs): txt_file = args.out_file + ".txt" print("Writing text to: {}".format(txt_file)) - f = open(txt_file, 'w') + f = open(txt_file, 'w', encoding="utf-8") for s in subs: f.write(s.content.strip() + "\n") f.close() diff --git a/tutorials/spinnaker-binary-auth/index.md b/tutorials/spinnaker-binary-auth/index.md index 078007f186..171ac83d29 100644 --- a/tutorials/spinnaker-binary-auth/index.md +++ b/tutorials/spinnaker-binary-auth/index.md @@ -422,7 +422,7 @@ In this section, you create a trigger that starts the continuous delivery pipeli 1. Go to the [**Triggers** page](https://console.cloud.google.com/cloud-build/triggers) in the Cloud Console. 1. Click **Run trigger**. -After the build is complete and successful, go back to Spinnaker and check whether it shows that the pipeline has exectuted. There should be an execution, +After the build is complete and successful, go back to Spinnaker and check whether it shows that the pipeline has executed. There should be an execution, showing that the trigger is working, as in the following screenshot: ![First execution](https://storage.googleapis.com/gcp-community/tutorials/spinnaker-binary-auth/06-first-run.png) diff --git a/tutorials/sql-server-ao-single-subnet/index.md b/tutorials/sql-server-ao-single-subnet/index.md index e04cd9043f..cfece1aebf 100644 --- a/tutorials/sql-server-ao-single-subnet/index.md +++ b/tutorials/sql-server-ao-single-subnet/index.md @@ -1,6 +1,6 @@ --- title: Deploy a Microsoft SQL Server Always On availability group in a single subnet -description: Learn how to deploy a Microsoft SQL Server Always On availabilty group in a single subnet. +description: Learn how to deploy a Microsoft SQL Server Always On availability group in a single subnet. author: shashank-google tags: databases, MSSQL, AOAG, AG date_published: 2020-12-10 @@ -54,9 +54,9 @@ and [SQL Server multi-subnet Always On availability groups](https://cloud.google ## Create and configure a Windows domain controller -In this tutorial, you use an exisiting default VPC network. +In this tutorial, you use an existing default VPC network. -An Active Directory domain is used for domain name services and Windows Failover Clustering, which is used by Always On availabilty groups. +An Active Directory domain is used for domain name services and Windows Failover Clustering, which is used by Always On availability groups. Having the AD domain controller in the same VPC network is not a requirement, but is a simplification for the purpose of this tutorial. 
@@ -307,7 +307,7 @@ an existing database for the availability group. 1. Create the availability group listener: osql -S node-1 -E -Q "USE [master] ALTER AVAILABILITY GROUP [sql-ag] - ADD LISTENER N'sql-listner' (WITH IP ((N'10.128.0.20', N'255.255.252.0')) , PORT=1433);" + ADD LISTENER N'sql-listener' (WITH IP ((N'10.128.0.20', N'255.255.252.0')) , PORT=1433);" The listener must be created with an unused IP address before creating the internal load balancer. Later, the same IP address is allocated to the internal load balancer. If SQL Server detects that the IP address is already in use, then this command to create the listener fails. diff --git a/tutorials/telepresence-and-gke/index.md b/tutorials/telepresence-and-gke/index.md index 08bb4d9040..0eeebe2cf2 100644 --- a/tutorials/telepresence-and-gke/index.md +++ b/tutorials/telepresence-and-gke/index.md @@ -26,7 +26,7 @@ One of the more common cloud-native development workflows looks like this: This workflow seems to work for large, infrequent changes; but for small, fast changes it introduces a lot of wait time. You should be able to see the results of your changes immediately. -In this tutorial, you'll set up a local development environment for a Go microservice in Google Kubernete Engine (GKE). Instead of waiting through the +In this tutorial, you'll set up a local development environment for a Go microservice in Google Kubernetes Engine (GKE). Instead of waiting through the old-fashioned development workflow, you'll use [Telepresence](http://www.getambassador.io/products/telepresence/), an open source Cloud Native Computing Foundation project, to see the results of your change right away. diff --git a/tutorials/using-cloud-vpn-with-alibaba-redundancy/index.md b/tutorials/using-cloud-vpn-with-alibaba-redundancy/index.md index a1008c3fbd..334c8877f0 100644 --- a/tutorials/using-cloud-vpn-with-alibaba-redundancy/index.md +++ b/tutorials/using-cloud-vpn-with-alibaba-redundancy/index.md @@ -1,6 +1,6 @@ --- title: Using Cloud VPN with Alibaba Cloud VPN Gateway with redundancy -description: Describes how to build IPsec VPNs between Cloud VPN on Google Cloud and Alibaba Cloud VPN Gateway with redudancy. +description: Describes how to build IPsec VPNs between Cloud VPN on Google Cloud and Alibaba Cloud VPN Gateway with redundancy. author: epluscloudservices tags: VPN, interop, alibaba, alibaba cloud vpn gateway, redundancy date_published: 2018-08-31 @@ -201,7 +201,7 @@ takes about a minute for this network and its subnet to appear. #### Create the Google Cloud external IP addresses for Cloud VPN gateways -Two Cloud VPN gateways on the Google Cloud side are needed for redudancy. +Two Cloud VPN gateways on the Google Cloud side are needed for redundancy. The following procedure configures one external IP address for the first Cloud VPN gateway. 
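If you prefer to script the address reservation rather than click through the console, a minimal Python sketch using the Compute API could reserve both gateway addresses; the names and region below are illustrative:

    from googleapiclient import discovery

    def reserve_vpn_gateway_ips(project_id, region="us-central1"):
        """Reserve two regional static external IPs, one per Cloud VPN gateway."""
        compute = discovery.build("compute", "v1")
        for name in ("vpn-gw-ip-0", "vpn-gw-ip-1"):  # illustrative names
            op = compute.addresses().insert(
                project=project_id, region=region, body={"name": name}
            ).execute()
            print("Reserving", name, "->", op["status"])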
diff --git a/tutorials/using-cloud-vpn-with-checkpoint/index.md b/tutorials/using-cloud-vpn-with-checkpoint/index.md index d1cb6ad857..45249175f0 100644 --- a/tutorials/using-cloud-vpn-with-checkpoint/index.md +++ b/tutorials/using-cloud-vpn-with-checkpoint/index.md @@ -35,7 +35,7 @@ configuration using the referenced device: # Before you begin -## Prerequisities +## Prerequisites To use a Check Point security gateway with Cloud VPN make sure the following prerequisites have been met: diff --git a/tutorials/using-cloud-vpn-with-strongswan/index.md b/tutorials/using-cloud-vpn-with-strongswan/index.md index f7104a8028..801044f2c6 100644 --- a/tutorials/using-cloud-vpn-with-strongswan/index.md +++ b/tutorials/using-cloud-vpn-with-strongswan/index.md @@ -423,7 +423,7 @@ interface configuration, including MTU, etc. # Enable loosy source validation, if possible. Otherwise disable validation. sysctl -w net.ipv4.conf.${VTI_IF}.rp_filter=2 || sysctl -w net.ipv4.conf.${VTI_IF}.rp_filter=0 - # If you would like to use VTI for policy-based you shoud take care of routing by yourselv, e.x. + # If you would like to use VTI for policy-based routing, you should take care of routing yourself, e.g. #if [[ "${PLUTO_PEER_CLIENT}" != "0.0.0.0/0" ]]; then # ${IP} r add "${PLUTO_PEER_CLIENT}" dev "${VTI_IF}" #fi diff --git a/tutorials/using-flask-login-with-cloud-datastore/index.md b/tutorials/using-flask-login-with-cloud-datastore/index.md index fcdf7541e0..4fdaae7c99 100644 --- a/tutorials/using-flask-login-with-cloud-datastore/index.md +++ b/tutorials/using-flask-login-with-cloud-datastore/index.md @@ -72,15 +72,15 @@ Point the environment variable `GOOGLE_APPLICATION_CREDENTIALS` to the location * Linux or macOS: - export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-acount-key.json" + export GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json" * Windows, with Powershell: - $env:GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-acount-key.json" + $env:GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json" * Windows, with Command Prompt: - set GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-acount-key.json" + set GOOGLE_APPLICATION_CREDENTIALS="/path/to/your/service-account-key.json" You are now ready to connect to your Firestore in Datastore mode. diff --git a/tutorials/using-ha-vpn-with-cisco-asa/index.md b/tutorials/using-ha-vpn-with-cisco-asa/index.md index 1f593e005d..bed49dab6e 100644 --- a/tutorials/using-ha-vpn-with-cisco-asa/index.md +++ b/tutorials/using-ha-vpn-with-cisco-asa/index.md @@ -407,7 +407,7 @@ topology, configure a minimum of three interfaces, named `outside-0`, `outside-1 interfaces are connected to the internet; the inside interface is connected to the private network. Enter the configuration mode to create the base Layer 3 network configuration for the Cisco system, -replacing the IP addresses based on your envrionment: +replacing the IP addresses based on your environment: configure terminal interface GigabitEthernet1/1 diff --git a/tutorials/using-ha-vpn-with-fortigate/index.md b/tutorials/using-ha-vpn-with-fortigate/index.md index ebbfa95d19..add6641172 100644 --- a/tutorials/using-ha-vpn-with-fortigate/index.md +++ b/tutorials/using-ha-vpn-with-fortigate/index.md @@ -121,6 +121,7 @@ lists the parameters and gives examples of the values used in this guide. 
| First BGP peer interface | `[ROUTER_INTERFACE_NAME_0]` | `bgp-peer-tunnel-a-to-on-prem-if-0` | | Second BGP peer interface | `[ROUTER_INTERFACE_NAME_1]` | `bgp-peer-tunnel-a-to-on-prem-if-1` | | BGP interface netmask length | `[MASK_LENGTH]` | `/30` | +| Dead Peer Detection | `[phase1-interface]` | `disable / on-idle / on-demand` | ## Configure the Google Cloud side @@ -414,7 +415,7 @@ For the 1-peer-2-address topology, configure a minimum of three interfaces: two outside interfaces that are connected to the internet and one inside interface that is connected to the private network. -Make sure to replace the IP addresses based on your envrionment: +Make sure to replace the IP addresses based on your environment: config system interface edit port1 @@ -463,6 +464,9 @@ This configuration creates the Phase 1 proposal. Make sure to change the set remote-gw 35.242.121.143 set local-gw 209.119.81.228 set psksecret mysharedsecret + set dpd [disable | on-idle | on-demand] + set dpd-retryinterval 15 + set dpd-retrycount 3 next edit GCP-HA-VPN-INT1 set interface port2 @@ -517,7 +521,7 @@ through the VPN tunnel or tunnels using the BGP routing protocol. With the configuration below, BGP peering will be enabled and all "connected" routes will be advertised to the peer. Change redistribution of routes based on your -envrionment. +environment. config router bgp set as 65002 diff --git a/tutorials/web-instrumentation/index.md b/tutorials/web-instrumentation/index.md index fc30227714..2e189c15e5 100644 --- a/tutorials/web-instrumentation/index.md +++ b/tutorials/web-instrumentation/index.md @@ -673,7 +673,7 @@ Google with many Google Cloud services pre-configured for ease of use. Follow th 1. Execute the code in the **Client Latency** block. - The resut is a chart of median client latency: + The result is a chart of median client latency: ![Median client latency from Colab sheet](https://storage.googleapis.com/gcp-community/tutorials/web-instrumentation/client_latency_median.png) diff --git a/tutorials/writing-prometheus-metrics-bigquery/index.md b/tutorials/writing-prometheus-metrics-bigquery/index.md index 92f2ad91ce..0268dda0db 100644 --- a/tutorials/writing-prometheus-metrics-bigquery/index.md +++ b/tutorials/writing-prometheus-metrics-bigquery/index.md @@ -367,7 +367,5 @@ If you don't want to delete the project, you can delete the provisioned resource ## What's next -- Learn how to - [manage Cloud Monitoring dashboards with the Cloud Monitoring API](https://cloud.google.com/solutions/managing-monitoring-dashboards-automatically-using-the-api). - Learn more about how to [export metrics from multiple projects](https://cloud.google.com/solutions/stackdriver-monitoring-metric-export). - Try out other Google Cloud features for yourself. Have a look at those [tutorials](https://cloud.google.com/docs/tutorials).