
maheshwarip (Contributor)

Simple docs change!

@maheshwarip maheshwarip marked this pull request as ready for review October 2, 2025 21:32
@maheshwarip maheshwarip requested a review from a team as a code owner October 2, 2025 21:32
@maheshwarip maheshwarip requested a review from martykulma October 2, 2025 21:40
Comment on lines +29 to +33
CREATE SECRET secret_access_key AS '<SECRET_ACCESS_KEY>';
CREATE CONNECTION bucket_connection TO AWS (
ACCESS KEY ID = '<ACCESS_KEY_ID>',
SECRET ACCESS KEY = SECRET secret_access_key
);
Contributor

This should inform the user that they need to set the appropriate endpoint and region for the service. Using GCS in the US region as an example, they would also set the following in the connection:

ENDPOINT = 'https://storage.googleapis.com',
REGION = 'us'
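
Putting the two together, a minimal sketch of what the full connection might look like for Google Cloud Storage, combining the quoted snippet with the suggested ENDPOINT and REGION (GCS endpoint and 'us' region per the example above; credentials remain placeholders):

CREATE SECRET secret_access_key AS '<SECRET_ACCESS_KEY>';
CREATE CONNECTION bucket_connection TO AWS (
    ACCESS KEY ID = '<ACCESS_KEY_ID>',
    SECRET ACCESS KEY = SECRET secret_access_key,
    ENDPOINT = 'https://storage.googleapis.com',
    REGION = 'us'
);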

### S3 compatible object storage
You can use an AWS connection to perform bulk exports to any S3 compatible object storage service,
such as Google Cloud Storage. While connecting to S3 compatible object storage, you need to provide
static access key credentials.
Contributor


I would also mention the endpoint/region here.

main:
parent: sink
name: "S3 Compatible Object Storage"
weight: 10
Contributor


Just wondering ... when we sink to Snowflake via S3, can that be done via S3-compatible object storage? If so, maybe add a note in the setup S3 section of https://preview.materialize.com/materialize/33752/serve-results/sink/snowflake/ stating that this can be done via S3-compatible storage, with a link to this page.

Contributor


Also, for the various sink pages, like the concepts and Sink results pages ... the content hasn't changed since it was originally written, other than some minor reorg. Should we incorporate that sinking is available via COPY TO in this PR, or handle it in a separate PR at a later date?

Contributor


Actually, for my second comment, I'll handle that in a separate PR. I want to add some links related to subscription-based sinks, so I can incorporate COPY TO-based sinks as well.

Contributor


That change is #33792


## Before you begin:
- Make sure that you have setup your bucket
- Obtain the S3 compatible URI for your bucket, as well as S3 access tokens (`ACCESS_KEY_ID` and `SECRET_ACCESS_KEY`). Instructions to obtain these vary by provider.
Contributor


To make it more scannable and easier for people to CTRL-F later on when we mention `S3_BUCKET_URI` -- maybe something like:

- Obtain the following for your bucket:
  - The S3 compatible URI  (`S3_BUCKET_URI`).
  - The S3 access tokens  (`ACCESS_KEY_ID` and `SECRET_ACCESS_KEY`).
  Refer to your provider for instructions.

## Step 2. Run a bulk export

To export data to your target bucket, use the [`COPY TO`](/sql/copy-to/#copy-to-s3)
command, and the AWS connection you created in the previous step. Replace the '<S3_BUCKET_URI>'
Contributor


'<S3_BUCKET_URI>' -> `'<S3_BUCKET_URI>'`

## Step 2. Run a bulk export

To export data to your target bucket, use the [`COPY TO`](/sql/copy-to/#copy-to-s3)
command, and the AWS connection you created in the previous step. Replace the '<S3_BUCKET_URI>'
Contributor


no comma after "command"
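
For context, a minimal sketch of the export step these quoted lines describe, assuming the `bucket_connection` created earlier and a hypothetical source view `my_view` (the `'<S3_BUCKET_URI>'` placeholder is the one the page defines; option names follow the `COPY TO` reference linked above):

COPY my_view TO '<S3_BUCKET_URI>'
WITH (
    AWS CONNECTION = bucket_connection,
    FORMAT = 'csv'
);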

Cloud Storage, or Cloudflare R2.

## Before you begin:
- Make sure that you have setup your bucket
Contributor


Missing a period at the end.

@@ -0,0 +1,88 @@
---
title: "S3 Compatible Object Storage"
description: "How to export results from Materialize to S3 compatible object storage"
Contributor


Oh ... we should update https://preview.materialize.com/materialize/33752/sql/copy-to/#copy-to-s3 to specify S3 and S3-compatible, as well as remove the preview?
