
Commit

Merge branch 'GoogleCloudPlatform:master' into master
samkenxstream committed Jun 6, 2023
2 parents d73d330 + 7524785 commit 5e8a775
Showing 92 changed files with 294 additions and 1,028 deletions.
4 changes: 2 additions & 2 deletions archived/appengine-effective-polymodel.md
@@ -311,7 +311,7 @@ maintain their specific structure.
It might help to understand a little bit about how this polymorphism is
implemented. All sub-classes of a given class hierarchy root share the same
Google Cloud Datastore kind. To differentiate between classes within the
-hiearchy, the PolyModel has an extra hidden string list property, class, in the
+hierarchy, the PolyModel has an extra hidden string list property, class, in the
Cloud Datastore. This list, known as the class key, describes that particular
object's location in the class hierarchy. Each element of this list is the name
of a class, starting with the root of the hierarchy at index 0. Because queries
@@ -348,7 +348,7 @@ It might be tempting to make every single class in an application a PolyModel
class, even for classes that do not immediately require a subclass. However it
should not normally be required to create a PolyModel class earlier so that it
might be subclassed in the future. If the application sticks to using the class
-method version of gql and all it is future compatible to change the inheritence
+method version of gql and all it is future compatible to change the inheritance
from Model to PolyModel later. This is because calls to gql and all on the
class hierarchy root class do not attempt to query against class property.
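
To make the mechanism concrete, here is a minimal sketch assuming the legacy `google.appengine.ext.db` PolyModel API that this archived tutorial covers (the class names are hypothetical):

```python
from google.appengine.ext import db
from google.appengine.ext.db import polymodel

class Contact(polymodel.PolyModel):  # hierarchy root; shares one Datastore kind
    name = db.StringProperty()

class Person(Contact):  # stored class key: ['Contact', 'Person']
    pass

# Person.all() silently filters on the hidden class property, whereas
# Contact.all() and Contact.gql(...) on the root do not, which is why a plain
# Model root can be switched to PolyModel later without breaking those queries.
```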

2 changes: 1 addition & 1 deletion archived/appengine-memcache-best-practices/index.md
@@ -233,7 +233,7 @@ programming languages. You can share the data in your memcache between any of
your app's modules and versions. Because the memcache API serializes its
parameters, and the API may be implemented differently in different languages,
you need to code memcache keys and values carefully if you intend to share them
-between langauges.
+between languages.
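
One common tactic, sketched below in Python on the assumption that every module agrees on plain string keys and JSON values (the helper names are hypothetical), is to avoid language-specific serialization entirely:

```python
import json

from google.appengine.api import memcache

def put_shared(key, obj):
    # Store a JSON string rather than a pickled Python object so that
    # modules written in Java, Go, or PHP can decode the same entry.
    memcache.set(key, json.dumps(obj))

def get_shared(key):
    raw = memcache.get(key)
    return None if raw is None else json.loads(raw)
```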

### Key Compatibility

2 changes: 1 addition & 1 deletion archived/appengine-pusher/index.md
@@ -90,7 +90,7 @@ Custom event handlers can be attached to a given event type.
This allows for efficient event routing in the clients.
**Note**: A subscriber will receive all messages published over a channel.

-Events may be trigged by the user or Pusher.
+Events may be triggered by the user or Pusher.
In case of Pusher-triggered events on a channel, the event name is
prefixed with `pusher:`, such as `pusher:subscription-succeeded`.
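
As an illustration only, a user-triggered event published from a Python backend might look like the following, assuming the `pusher` client library and placeholder credentials:

```python
import pusher

client = pusher.Pusher(app_id="APP_ID", key="KEY", secret="SECRET", cluster="us2")

# Every subscriber on this channel receives the event. Pusher-originated
# events such as pusher:subscription-succeeded carry the pusher: prefix and
# are never published by application code.
client.trigger("notes-channel", "note-added", {"text": "hello"})
```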

2 changes: 1 addition & 1 deletion archived/cloud-iot-fledge/index.md
@@ -261,7 +261,7 @@ used to authenticate the device.

## Verify communication

-1. Retun to the Fledge GUI dashboard.
+1. Return to the Fledge GUI dashboard.

The count of readings sent and received should be increasing.

4 changes: 2 additions & 2 deletions archived/cloud-iot-mender-ota/index.md
@@ -234,14 +234,14 @@ Using the Cloud Shell environment, you will configure IoT Core audit logs to rou

1. Create a log export for IoT Core device creation events to Pub/Sub:

-gcloud beta logging sinks create device-lifecyle \
+gcloud beta logging sinks create device-lifecycle \
pubsub.googleapis.com/projects/$PROJECT/topics/registration-events \
--log-filter='resource.type="cloudiot_device" protoPayload.methodName="google.cloud.iot.v1.DeviceManager.CreateDevice"'

1. Give the log exporter system-account permission to publish to your topic:

gcloud beta pubsub topics add-iam-policy-binding registration-events \
---member $(gcloud beta logging sinks describe device-lifecyle --format='value(writerIdentity)') \
+--member $(gcloud beta logging sinks describe device-lifecycle --format='value(writerIdentity)') \
--role roles/pubsub.publisher

### Deploy Firebase Functions to call Mender Preauthorization API
@@ -1,6 +1,6 @@
pylint==2.4.0
google-cloud==0.34.0
-Flask==1.1.1
+Flask==2.3.2
kafka-python==1.4.6
pykafka==2.8.0
confluent-kafka==1.1.0
File renamed without changes.
@@ -101,7 +101,7 @@ the one you want to monitor.

### Initialization and authentication using gapi

-Once the user opens the page, Angular's `ng-init` embeded in the
+Once the user opens the page, Angular's `ng-init` embedded in the
[`body` element of *index.html*][index] runs our `initialize()` function from
[*main-controller.js*][main-controller].

2 changes: 1 addition & 1 deletion archived/data-science-exploration/index.md
@@ -257,7 +257,7 @@ Quantiles are useful for getting a quick feel for the distribution of your data
Finally, when researching a new place to live or establish a business, one
might be concerned about the rate at which meteors land in the area.
Fortunately, BigQuery provides some functions to help compute distances between
-latitude and logitude coordinates. Adapted from the
+latitude and longitude coordinates. Adapted from the
[advanced examples](/bigquery/docs/reference/legacy-sql#math-adv-examples) in
the docs, we can find the number of meteors within an approximately 50-mile
radius of Google's Kirkland campus (at 47.669861, -122.197355):
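
The legacy SQL itself falls outside this hunk. Purely for intuition, the filter it applies is the great-circle (haversine) distance check, sketched here in Python rather than taken from the tutorial:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance on a sphere with Earth's mean radius (~3959 miles).
    r = 3959.0
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dlon / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Keep a meteor if it landed within roughly 50 miles of the Kirkland campus:
print(haversine_miles(47.669861, -122.197355, 47.62, -122.35) <= 50.0)
```
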
2 changes: 1 addition & 1 deletion archived/data-science-extraction/index.md
@@ -190,7 +190,7 @@ Because the audio we're transcribing is longer than a minute in length, we must
first upload the raw audio files to [Cloud Storage][storage], so the Speech API
can access it asynchronously. We could use the
[gsutil][gsutil] tool to do this manually, or we could
-do it programatically from our code. Because we'd like to eventually
+do it programmatically from our code. Because we'd like to eventually
[automate this process in a pipeline](/community/tutorials/data-science-preprocessing/),
we'll do this in code:
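
The tutorial's actual upload code sits outside this hunk; a minimal sketch of the programmatic route, assuming the `google-cloud-storage` client library and hypothetical bucket and object names, looks like this:

```python
from google.cloud import storage

def upload_audio(local_path, bucket_name, blob_name):
    # Copy the raw audio into Cloud Storage so the Speech API can read it
    # asynchronously, and return the gs:// URI to hand to the API.
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(blob_name)
    blob.upload_from_filename(local_path)
    return "gs://{}/{}".format(bucket_name, blob_name)
```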

2 changes: 1 addition & 1 deletion archived/deploy-xenforo-to-compute-engine/index.md
@@ -258,7 +258,7 @@ tutorial:

| Before | After |
| --------------------------|---------------------------|
-| default_transport = error | #default_transpot = error |
+| default_transport = error | #default_transport = error |
| relay_transport = error | #relay_transport = error |

Edit the following lines:
@@ -44,7 +44,7 @@ your computer but sends the actual workload to a cluster on Google Kubernetes En
the following:

- You can continue to use your laptop/workstation for other work while waiting for the results.
-- You can use more powerful machines to speed up the search, for instance mulitple nodes with 64 virtual CPU cores.
+- You can use more powerful machines to speed up the search, for instance multiple nodes with 64 virtual CPU cores.

To accomplish this, we will create a `SearchCV` object in the notebook, upload a pickled copy of this object to Cloud
Storage. A job running on a cluster which we will create then retrieves that pickled object and calls its `fit` method and
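
Although the hunk cuts off mid-sentence, the round trip it describes can be sketched as follows; the bucket, parameters, and estimator are hypothetical, not the tutorial's own code:

```python
import pickle

from google.cloud import storage
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# In the notebook: build the unfitted search object and stage it in Cloud Storage.
search = GridSearchCV(RandomForestClassifier(), {"n_estimators": [50, 100, 200]})
bucket = storage.Client().bucket("my-staging-bucket")
bucket.blob("search.pkl").upload_from_string(pickle.dumps(search))

# On the cluster: fetch the pickle, fit it, and write the fitted object back,
# e.g. remote.fit(X, y) followed by another upload_from_string call.
remote = pickle.loads(bucket.blob("search.pkl").download_as_bytes())
```
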
@@ -35,7 +35,7 @@ This tutorial discusses three different methods.

## Easiest: Do nothing

-When you deploy to App Engine flexible enviroment a Docker image is created for
+When you deploy to App Engine flexible environment a Docker image is created for
you and your code is copied into the image. This first method relies on the
Docker image build step to make Bower dependencies available to your app. This
method is the easiest.
2 changes: 1 addition & 1 deletion archived/java-dataflow-quickstart/index.md
@@ -23,7 +23,7 @@ In this walkthrough you’ll do the following:

* Set up Dataflow.
* Enable the necessary Google Cloud APIs.
-* Create a pipleine.
+* Create a pipeline.
* Publish the pipeline to Dataflow.

[![Open walkthrough in the Cloud Console](https://storage.googleapis.com/gcp-community/tutorials/java-dataflow-quickstart/tutorial.png)](https://console.cloud.google.com/?walkthrough_id=dataflow__quickstart-beam__quickstart-beam-java)
2 changes: 1 addition & 1 deletion archived/kotlin-springboot-container-engine.md
@@ -315,7 +315,7 @@ building a new image and pointing your deployment to it.
kubectl set image deployment/demo demo=gcr.io/${PROJECT_ID}/demo:v1

**Note:** If a deployment gets stuck because an error in the image prevents
-it from starting successfuly, you can recover by undoing the rollout. See the
+it from starting successfully, you can recover by undoing the rollout. See the
[Kubernetes deployment documentation](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
for more info.

2 changes: 1 addition & 1 deletion archived/run-symfony-on-appengine-standard/index.md
@@ -267,7 +267,7 @@ benefit from all of the features of a real email broadcasting system.

composer require symfony/mailer

-1. Specify which mail sytem to use:
+1. Specify which mail system to use:

composer require symfony/mailgun-mailer

This file was deleted.

2 changes: 1 addition & 1 deletion archived/singularity-containers-with-cloud-build/index.md
@@ -13,7 +13,7 @@ Vanessa Sochat | Stanford
<p style="background-color:#CAFACA;"><i>Contributed by Google employees.</i></p>

This tutorial shows you how to use [Cloud Build](https://cloud.google.com/cloud-build/) to build [Singularity](https://www.sylabs.io/singularity/) containers.
-In constrast to [Docker](https://www.docker.com/), the Singularity container binary is designed specifically for high performance computing (HPC) workloads.
+In contrast to [Docker](https://www.docker.com/), the Singularity container binary is designed specifically for high performance computing (HPC) workloads.

## Before you begin

2 changes: 1 addition & 1 deletion archived/terraform-asm-in-cluster.md
@@ -434,7 +434,7 @@ For more information about metrics, logs, and tracing with Anthos Service Mesh,

### Terraform destroy

-Use the `terraform destory` command to destroy all Terraform resources:
+Use the `terraform destroy` command to destroy all Terraform resources:

${TERRAFORM_CMD} destroy -auto-approve

2 changes: 1 addition & 1 deletion archived/terraform-asm-upgrade.md
@@ -527,7 +527,7 @@ you can't roll back.

### Terraform destroy

-Use the `terraform destory` command to destroy all Terraform resources:
+Use the `terraform destroy` command to destroy all Terraform resources:

cd ${WORKDIR}
terraform destroy -auto-approve
@@ -1,6 +1,6 @@
---
title: Understanding OAuth2 and deploying a basic authorization service to Cloud Functions
-description: Learn how to deploy a basic OAuth2 authorization serivce to Cloud Functions.
+description: Learn how to deploy a basic OAuth2 authorization service to Cloud Functions.
author: michaelawyu
tags: OAuth 2.0, Node.js, Cloud Functions, Cloud Datastore
date_published: 2018-06-15
2 changes: 1 addition & 1 deletion tutorials/bigquery-from-excel/index.md
@@ -51,7 +51,7 @@ for details about on-demand and flat-rate pricing. BigQuery also offers

1. Check whether your version of Excel is
[32-bit or 64-bit](https://www.digitalcitizen.life/3-ways-learn-whether-windows-program-64-bit-or-32-bit).
-1. Download the latest version of thevODBC driver from the
+1. Download the latest version of the ODBC driver from the
[Simba Drivers for BigQuery page](https://cloud.google.com/bigquery/partners/simba-drivers/) that
matches your version of Excel.
1. Run the ODBC driver installer.
2 changes: 1 addition & 1 deletion tutorials/cicd-cloud-run-github-actions/index.md
@@ -23,7 +23,7 @@ of the sample code and its Dockerfile.
* Write a unit test for your code.
* Create a Dockerfile.
* Create a GitHub Action workflow file to deploy your code on Cloud Run.
-* Make the code acessible for anyone.
+* Make the code accessible for anyone.

## Costs

2 changes: 1 addition & 1 deletion tutorials/cloud-functions-avro-import-bq/index.js
@@ -16,7 +16,7 @@ exports.ToBigQuery_Stage = (event, callback) => {
// Do not use the ftp_files Bucket to ensure that the bucket does not get crowded.
// Change bucket to gas_ddr_files_staging
// Set the table name (TableId) to the full file name including date,
-// this will give each table a new distinct name and we can keep a record of all of the files recieved.
+// this will give each table a new distinct name and we can keep a record of all of the files received.
// This may not be the best way to do this... at some point we will need to archive and delete prior records.
const dashOffset = filename.indexOf('-');
const tableId = filename.substring(0, dashOffset) + '_STAGE';
2 changes: 1 addition & 1 deletion tutorials/cloud-functions-rate-limiting/index.md
@@ -142,7 +142,7 @@ The `gcloud` command does the following (with each line below corresponding to a
- triggered by HTTP requests,
- from the Typescript transpiled JavaScript source code;
- sets a runtime environment variable to the Redis service IP address,
-- connected to the VPC netowrk,
+- connected to the VPC network,
- in the target region.

This function uses a Redis-backed [rate-limiting library](https://www.npmjs.com/package/redis-rate-limiter) for Node.js.
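
The Node.js library itself is not shown here; as a language-neutral sketch of server-side rate limiting against Redis (a fixed-window counter with a hypothetical host and limits, not the package's exact algorithm):

```python
import redis

r = redis.Redis(host="10.0.0.3")  # hypothetical Redis service IP from the env variable

def allow(caller_id, limit=30, window_seconds=60):
    # Count requests per caller in a fixed window; the first request in a
    # window creates the key and starts its expiry timer.
    key = "rate:{}".format(caller_id)
    count = r.incr(key)
    if count == 1:
        r.expire(key, window_seconds)
    return count <= limit
```
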
4 changes: 2 additions & 2 deletions tutorials/cloud-run-golang-gcs-proxy/index.md
@@ -383,7 +383,7 @@ Here are some options for approaches that you could take to do this:
you automatically get the improvements. This could be slow and expensive if you make a translation each time, but you can
add some caching or CDN, so that the translation is only made on cache fills.

-This dyanmic server-side approach is the one that is described in this section.
+This dynamic server-side approach is the one that is described in this section.

Change the `config.go` contents to the following:

@@ -394,7 +394,7 @@ func GET(ctx context.Context, output http.ResponseWriter, input *http.Request) {
}
```

-`DynamicTranslationFromEnToEs` is a pipeline included in the sample confguration:
+`DynamicTranslationFromEnToEs` is a pipeline included in the sample configuration:

```go
// EXAMPLE: Translate HTML files from English to Spanish dynamically.
