update docs
TristenHarr committed Aug 22, 2024
1 parent 9eeba8e commit 580d059
Showing 7 changed files with 45 additions and 116 deletions.
3 changes: 3 additions & 0 deletions CHANGELOG.md
@@ -1,6 +1,9 @@
# DuckDB Connector Changelog
This changelog documents changes between release tags.

## [0.1.0] - 2024-08-22
* Update Documentation for ndc-hub

## [0.0.22] - 2024-08-13
* Update workflow to publish to registry

120 changes: 29 additions & 91 deletions README.md
@@ -10,8 +10,8 @@ The Hasura DuckDB Connector allows for connecting to a DuckDB database or a Moth

This connector is built using the [Typescript Data Connector SDK](https://github.com/hasura/ndc-sdk-typescript) and implements the [Data Connector Spec](https://github.com/hasura/ndc-spec).

* [Connector information in the Hasura Hub](https://hasura.io/connectors/duckdb)
* [Hasura V3 Documentation](https://hasura.io/docs/3.0/index/)
- [See the listing in the Hasura Hub](https://hasura.io/connectors/duckdb)
- [Hasura V3 Documentation](https://hasura.io/docs/3.0/index/)

## Features

@@ -34,8 +34,6 @@ Below, you'll find a matrix of all supported features for the DuckDB connector:

## Before you get Started

[Prerequisites or recommended steps before using the connector.]

1. The [DDN CLI](https://hasura.io/docs/3.0/cli/installation) and [Docker](https://docs.docker.com/engine/install/) installed
2. A [supergraph](https://hasura.io/docs/3.0/getting-started/init-supergraph)
3. A [subgraph](https://hasura.io/docs/3.0/getting-started/init-subgraph)
@@ -52,118 +50,58 @@ connector — after it's been configured — [here](https://hasura.io/docs/3.0/g
ddn auth login
```

### Step 2: Initialize the connector

```bash
ddn connector init duckdb --subgraph my_subgraph --hub-connector hasura/duckdb
```

In the snippet above, we've used the subgraph `my_subgraph` as an example; however, you should change this
value to match the subgraph you've created in your project.

### Step 3: Modify the connector's port

When you initialized your connector, the CLI generated a set of configuration files, including a Docker Compose file for
the connector. Typically, connectors default to port `8080`. Each time you add a connector, we recommend incrementing the published port by one to avoid port collisions.

As an example, if your connector's configuration is in `my_subgraph/connector/duckdb/docker-compose.duckdb.yaml`, you can modify the published port to
reflect a value that isn't currently being used by any other connectors:

```yaml
ports:
- mode: ingress
target: 8080
published: "8082"
protocol: tcp
```
### Step 4: Add environment variables
Now that our connector has been scaffolded out for us, we need to provide a connection string so that the data source can be introspected and the
boilerplate configuration can be taken care of by the CLI.
The CLI has provided an `.env.local` file for our connector in the `my_subgraph/connector/duckdb` directory. We can add a key-value pair
of `DUCKDB_URL` along with the connection string itself to this file, and our connector will use this to connect to our database.


The file, after adding the `DUCKDB_URL`, should look like this example if connecting to a MotherDuck hosted DuckDB instance:

```env
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://local.hasura.dev:4317
OTEL_SERVICE_NAME=my_subgraph_duckdb
DUCKDB_URL=md:?motherduck_token=eyJhbGc...
```

To connect to a local DuckDB file, add the persistent DuckDB database file to the `my_subgraph/connector/duckdb` directory. Since all files in this directory are mounted into the container at `/etc/connector/`, you can point `DUCKDB_URL` at the local file. Assuming the DuckDB file is named `chinook.db`, the file should look like this example:

```env
OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=http://local.hasura.dev:4317
OTEL_SERVICE_NAME=my_subgraph_duckdb
DUCKDB_URL=/etc/connector/chinook.db
```

### Step 5: Introspect your data source
### Step 2: Configure the connector

With the connector configured, we can now use the CLI to introspect our database and create a source-specific configuration file for our connector.
Once you have an initialized supergraph and subgraph, run the initialization command in interactive mode while providing a name for the connector in the prompt:

```bash
ddn connector introspect --connector my_subgraph/connector/duckdb/connector.yaml
ddn connector init duckdb -i
```

### Step 6. Create the Hasura metadata
#### Step 2.1: Choose the `hasura/duckdb` option from the list

Hasura DDN uses a concept called "connector linking" to take [NDC-compliant](https://github.com/hasura/ndc-spec)
configuration JSON files for a data connector and transform them into an `hml` (Hasura Metadata Language) file as a
[`DataConnectorLink` metadata object](https://hasura.io/docs/3.0/supergraph-modeling/data-connectors#dataconnectorlink-dataconnectorlink).
#### Step 2.2: Choose a port for the connector

Basically, metadata objects in `hml` files define our API.
The CLI will ask for a specific port to run the connector on. Choose a port that is not already in use or use the default suggested port.

First we need to create this `hml` file with the `connector-link add` command and then convert our configuration files
into `hml` syntax and add it to this file with the `connector-link update` command.
#### Step 2.3: Provide the env var(s) for the connector

Let's name the `hml` file the same as our connector, `duckdb`:
| Name | Description |
|-|-|
| DUCKDB_URL | The connection string for the DuckDB database, or the file path to the DuckDB database file |
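For example, drawing on the connection strings shown elsewhere in these docs, `DUCKDB_URL` can hold either a MotherDuck connection string or a path to a DuckDB file mounted into the container:

```env
# MotherDuck-hosted instance (token truncated)
DUCKDB_URL=md:?motherduck_token=eyJhbGc...

# or a local DuckDB file mounted at /etc/connector/
DUCKDB_URL=/etc/connector/chinook.db
```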

```bash
ddn connector-link add duckdb --subgraph my_subgraph
```

The new file is scaffolded out at `my_subgraph/metadata/duckdb/duckdb.hml`.
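As a rough sketch — the field values here are illustrative assumptions, not exact generated output — the scaffolded `DataConnectorLink` ties the connector name to the read and write URL environment variables:

```yaml
kind: DataConnectorLink
version: v1
definition:
  name: duckdb
  url:
    readWriteUrls:
      read:
        valueFromEnv: MY_SUBGRAPH_DUCKDB_READ_URL
      write:
        valueFromEnv: MY_SUBGRAPH_DUCKDB_WRITE_URL
  # the schema section is populated later from the connector's configuration
  schema: {}
```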

### Step 7. Update the environment variables
You'll find the environment variables in the `.env` file, and they will be in the format:

The generated file has two environment variables — one for reads and one for writes — that you'll need to add to your subgraph's `.env.my_subgraph` file.
Each key is prefixed by the subgraph name, an underscore, and the name of the connector. Ensure the port value matches what is published in your connector's docker compose file.
`<SUBGRAPH_NAME>_<CONNECTOR_NAME>_<VARIABLE_NAME>`

As an example:
Here is an example of what your `.env` file might look like:

```env
MY_SUBGRAPH_DUCKDB_READ_URL=http://local.hasura.dev:<port>
MY_SUBGRAPH_DUCKDB_WRITE_URL=http://local.hasura.dev:<port>
```
APP_DUCKDB_AUTHORIZATION_HEADER="Bearer SPHZWfL7P3Jdc9mDMF9ZNA=="
APP_DUCKDB_DUCKDB_URL="md:?motherduck_token=ey..."
APP_DUCKDB_HASURA_SERVICE_TOKEN_SECRET="SPHZWfL7P3Jdc9mDMF9ZNA=="
APP_DUCKDB_OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://local.hasura.dev:4317"
APP_DUCKDB_OTEL_SERVICE_NAME="app_duckdb"
APP_DUCKDB_READ_URL="http://local.hasura.dev:7525"
APP_DUCKDB_WRITE_URL="http://local.hasura.dev:7525"
```

These values are for the connector itself and utilize `local.hasura.dev` to ensure proper resolution within the docker container.

### Step 8. Start the connector's Docker Compose
### Step 3: Introspect the connector

Let's start our connector's Docker Compose file by running the following from inside the connector's subgraph:
Introspecting the connector will generate a `config.json` file and a `duckdb.hml` file.

```bash
docker compose -f docker-compose.duckdb.yaml up
ddn connector introspect duckdb
```

### Step 9. Update the new `DataConnectorLink` object
### Step 4: Add your resources

Finally, now that our `DataConnectorLink` has the correct environment variables configured for the connector,
we can run the update command to have the CLI look at the configuration JSON and transform it to reflect our database's
schema in `hml` format. In a new terminal tab, run:
You can add the models, commands, and relationships to your API by tracking them, which generates the HML files.

```bash
ddn connector-link update duckdb --subgraph my_subgraph
ddn connector-link add-resources duckdb
```
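As an illustrative sketch only — the exact fields come from your introspected schema, and the names below are assumptions — a tracked model in the generated HML might look something like:

```yaml
kind: Model
version: v1
definition:
  name: Chinook
  objectType: Chinook
  source:
    dataConnectorName: duckdb
    collection: chinook
  graphql:
    selectMany:
      queryRootField: chinook
    selectUniques: []
```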

After this command runs, you can open your `my_subgraph/metadata/duckdb.hml` file and see your metadata completely
scaffolded out for you 🎉

## Documentation

View the full documentation for the DuckDB connector [here](https://github.com/hasura/ndc-duckdb/blob/main/docs/index.md).
4 changes: 2 additions & 2 deletions connector-definition/connector-metadata.yaml
@@ -1,13 +1,13 @@
packagingDefinition:
type: PrebuiltDockerImage
dockerImage: ghcr.io/hasura/ndc-duckdb:v0.0.22
dockerImage: ghcr.io/hasura/ndc-duckdb:v0.1.0
supportedEnvironmentVariables:
- name: DUCKDB_URL
description: The url for the DuckDB database
commands:
update:
type: Dockerized
dockerImage: ghcr.io/hasura/ndc-duckdb:v0.0.22
dockerImage: ghcr.io/hasura/ndc-duckdb:v0.1.0
commandArgs:
- update
dockerComposeWatch:
26 changes: 7 additions & 19 deletions docs/development.md
@@ -34,36 +34,24 @@ To start the connector on port 9094, for a MotherDuck hosted DuckDB instance run

### Attach the connector to the locally running engine

There should be a file located at `my_subgraph/.env.my_subgraph` that contains
There should be a file located at `.env` that contains

```env
MY_SUBGRAPH_DUCKDB_READ_URL=http://local.hasura.dev:<port>
MY_SUBGRAPH_DUCKDB_WRITE_URL=http://local.hasura.dev:<port>
APP_DUCKDB_READ_URL="http://local.hasura.dev:<port>"
APP_DUCKDB_WRITE_URL="http://local.hasura.dev:<port>"
```

Create a new .env file called `.env.my_subgraph.dev` and place the following values into it:
Edit the values in the `.env` file to point at port 9094, where the connector is running locally.

```env
MY_SUBGRAPH_DUCKDB_READ_URL=http://local.hasura.dev:9094
MY_SUBGRAPH_DUCKDB_WRITE_URL=http://local.hasura.dev:9094
APP_DUCKDB_READ_URL="http://local.hasura.dev:9094"
APP_DUCKDB_WRITE_URL="http://local.hasura.dev:9094"
```

In your `supergraph.yaml` file, change the env file to point to the dev file.

```yaml
subgraphs:
my_subgraph:
generator:
rootPath: my_subgraph
# envFile: my_subgraph/.env.my_subgraph # Change the env file
envFile: my_subgraph/.env.my_subgraph.dev
includePaths:
- my_subgraph/metadata
```

Do a local supergraph build:

```ddn supergraph build local --output-dir ./engine```
```ddn supergraph build local```

Mutations and Queries will now be issued against your locally running connector instance.

2 changes: 1 addition & 1 deletion docs/support.md
@@ -2,7 +2,7 @@

The documentation and community will help you troubleshoot most issues. If you have encountered a bug or need to get in touch with us, you can contact us using one of the following channels:
* Support & feedback: [Discord](https://discord.gg/hasura)
* Issue & bug tracking: [GitHub issues](https://github.com/hasura/ndc=[connectorName]/issues)
* Issue & bug tracking: [GitHub issues](https://github.com/hasura/ndc-duckdb/issues)
* Follow product updates: [@HasuraHQ](https://twitter.com/hasurahq)
* Talk to us on our [website chat](https://hasura.io)

4 changes: 2 additions & 2 deletions package-lock.json


2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "duckdb-sdk",
"version": "0.0.22",
"version": "0.1.0",
"description": "",
"main": "index.js",
"scripts": {
