Docs update to document COPY TO for any S3 compatible storage (#33752)
Conversation
CREATE SECRET secret_access_key AS '<SECRET_ACCESS_KEY>';
CREATE CONNECTION bucket_connection TO AWS (
    ACCESS KEY ID = '<ACCESS_KEY_ID>',
    SECRET ACCESS KEY = SECRET secret_access_key
);
This should inform the user they need to set the appropriate endpoint and region for the service. Using GCS as an example in the US region, they would also set the following in the connection:
ENDPOINT = 'https://storage.googleapis.com',
REGION = 'us'
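For concreteness, here's a minimal sketch of the full connection from the snippet above with the suggested GCS endpoint and region added (the secret and connection names come from the snippet; this assumes the AWS connection type accepts `ENDPOINT` and `REGION` options when pointed at an S3-compatible service):

```sql
-- Sketch only: placeholder credentials come from the snippet above, and the
-- ENDPOINT/REGION values are the GCS ones suggested in this comment.
CREATE SECRET secret_access_key AS '<SECRET_ACCESS_KEY>';

CREATE CONNECTION bucket_connection TO AWS (
    ACCESS KEY ID = '<ACCESS_KEY_ID>',
    SECRET ACCESS KEY = SECRET secret_access_key,
    ENDPOINT = 'https://storage.googleapis.com',
    REGION = 'us'
);
```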
### S3 compatible object storage

You can use an AWS connection to perform bulk exports to any S3 compatible object storage service,
such as Google Cloud Storage. While connecting to S3 compatible object storage, you need to provide
static access key credentials.
I would also mention the endpoint/region here.
main:
  parent: sink
  name: "S3 Compatible Object Storage"
  weight: 10
Just wondering ... when we sink to Snowflake via S3, can that be done via S3-compatible storage? If so, maybe add a note in the S3 setup section of https://preview.materialize.com/materialize/33752/serve-results/sink/snowflake/ stating that this can be done via S3-compatible storage, along with a link to this page.
Also, for the various sink pages, like the concepts page and Sink results, the content hasn't changed since it was originally written, other than some minor reorg. Should we incorporate that sinking is available via COPY TO in this PR, or handle it in a separate PR at a later date?
Actually, for my second comment, I'll handle that in a separate PR. I want to link to some pages related to subscription-based sinks, so I can also incorporate COPY TO-based sinks as well.
That change is #33792.
## Before you begin:
- Make sure that you have setup your bucket
- Obtain the S3 compatible URI for your bucket, as well as S3 access tokens (`ACCESS_KEY_ID` and `SECRET_ACCESS_KEY`). Instructions to obtain these vary by provider.
To make it more scannable and easier for people to Ctrl-F later on when we mention `S3_BUCKET_URI`, maybe something like:
- Obtain the following for your bucket:
  - The S3 compatible URI (`S3_BUCKET_URI`).
  - The S3 access tokens (`ACCESS_KEY_ID` and `SECRET_ACCESS_KEY`).

  Refer to your provider for instructions.
## Step 2. Run a bulk export

To export data to your target bucket, use the [`COPY TO`](/sql/copy-to/#copy-to-s3)
command, and the AWS connection you created in the previous step. Replace the '<S3_BUCKET_URI>'
'<S3_BUCKET_URI>' -> `'<S3_BUCKET_URI>'`
## Step 2. Run a bulk export

To export data to your target bucket, use the [`COPY TO`](/sql/copy-to/#copy-to-s3)
command, and the AWS connection you created in the previous step. Replace the '<S3_BUCKET_URI>'
no comma after "command"
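For reference, a minimal sketch of the exported command the paragraph above describes (`my_view` is a hypothetical placeholder, `'<S3_BUCKET_URI>'` is the placeholder from the docs, and the `WITH` options assume the `AWS CONNECTION`/`FORMAT` form documented on the linked copy-to page):

```sql
-- Sketch only: placeholder view name and bucket URI; assumes the COPY TO
-- syntax documented at /sql/copy-to/#copy-to-s3.
COPY my_view TO '<S3_BUCKET_URI>'
WITH (
    AWS CONNECTION = bucket_connection,
    FORMAT = 'parquet'
);
```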
Cloud Storage, or Cloudflare R2.

## Before you begin:
- Make sure that you have setup your bucket
a period.
@@ -0,0 +1,88 @@
---
title: "S3 Compatible Object Storage"
description: "How to export results from Materialize to S3 compatible object storage"
Oh ... we should update https://preview.materialize.com/materialize/33752/sql/copy-to/#copy-to-s3 to specify both S3 and S3-compatible storage, as well as remove the preview?
Simple docs change!