diff --git a/content/images/UrbitOverview.png b/content/images/UrbitOverview.png
new file mode 100644
index 0000000..e3808b1
Binary files /dev/null and b/content/images/UrbitOverview.png differ
diff --git a/content/images/dns-sp-docs.png b/content/images/dns-sp-docs.png
new file mode 100644
index 0000000..0f616ba
Binary files /dev/null and b/content/images/dns-sp-docs.png differ
diff --git a/content/images/network-website.svg b/content/images/network-website.svg
new file mode 100644
index 0000000..af4e599
--- /dev/null
+++ b/content/images/network-website.svg
@@ -0,0 +1,4 @@
+
+
+
+
\ No newline at end of file
diff --git a/content/images/roller-agents.png b/content/images/roller-agents.png
new file mode 100644
index 0000000..4a108dc
Binary files /dev/null and b/content/images/roller-agents.png differ
diff --git a/content/service-provider/_index.en.md b/content/service-provider/_index.en.md
new file mode 100644
index 0000000..b71cd60
--- /dev/null
+++ b/content/service-provider/_index.en.md
@@ -0,0 +1,7 @@
+---
+title: "Service Provider"
+description: "Becoming a service provider on the Laconic Network"
+weight: 6
+---
+
+TODO
diff --git a/content/service-provider/become-an-sp.md b/content/service-provider/become-an-sp.md
new file mode 100644
index 0000000..7856a13
--- /dev/null
+++ b/content/service-provider/become-an-sp.md
@@ -0,0 +1,707 @@
+---
+title: "Become a Service Provider"
+date: 2022-12-30T09:19:28-05:00
+draft: false
+weight: 1
+---
+
+## Select and boot servers
+
+Use your choice of cloud provider or bare metal. These are the minimum suggested specifications:
+
+- daemon (4G RAM, 25G Disk)
+- control (16G RAM, 300G Disk)
+- worker (16G RAM, 300G Disk)
+
+## Access control
+
+This is personal preference. At a minimum, create a new user on each machine and disable root access.
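+
+One minimal sketch (the `admin` username is an assumption; pick your own, and adjust to your own access-control preferences):
+
+```
+adduser admin
+usermod -aG sudo admin
+# copy an existing authorized_keys (or paste your public key) for the new user
+mkdir -p /home/admin/.ssh && cp ~/.ssh/authorized_keys /home/admin/.ssh/
+chown -R admin:admin /home/admin/.ssh && chmod 700 /home/admin/.ssh
+# disable root login and password auth over SSH
+sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
+sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
+systemctl restart ssh
+```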
+
+## Initial Ubuntu base setup
+
+**On all remote machines:**
+
+1. Set unique hostnames
+
+```
+hostnamectl set-hostname changeme
+```
+
+In the following example, we've named each machine like so:
+```
+lx-daemon 23.111.69.218
+lx-cad-cluster-worker 23.111.78.182
+lx-cad-cluster-control 23.111.78.179
+```
+
+See below for the full list of DNS records to be configured.
+
+2. Next, update base packages:
+
+```
+apt update && apt upgrade -y
+apt autoremove
+```
+
+3. Install additional packages:
+
+```
+apt install -y doas zsh tmux git jq acl curl wget netcat-traditional fping rsync htop iotop iftop tar less firewalld sshguard wireguard iproute2 iperf3 zfsutils-linux net-tools ca-certificates gnupg
+```
+
+4. Verify status of firewalld and enable sshguard:
+
+```
+systemctl enable --now firewalld
+systemctl enable --now sshguard
+```
+
+5. Disable and remove snapd
+
+```
+systemctl disable snapd.service
+systemctl disable snapd.socket
+systemctl disable snapd.seeded
+systemctl disable snapd.snap-repair.timer
+
+apt purge -y snapd
+
+rm -rf ~/snap /snap /var/snap /var/lib/snapd
+```
+
+## Daemon-only (skip these steps for worker and control nodes)
+
+1. Create a new user `so`:
+
+```
+useradd so
+```
+
+- add the ssh key of your local machine to `/home/so/.ssh/authorized_keys`
+- add the ssh key used for gitea access to the `ssh-agent`
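+
+For example (the key file names below are assumptions; substitute the keys you actually use):
+
+```
+mkdir -p /home/so/.ssh
+# public key of your local machine
+echo "ssh-ed25519 AAAA... you@laptop" >> /home/so/.ssh/authorized_keys
+chown -R so:so /home/so/.ssh && chmod 700 /home/so/.ssh && chmod 600 /home/so/.ssh/authorized_keys
+
+# as the so user, load the key used for gitea access
+eval "$(ssh-agent -s)"
+ssh-add ~/.ssh/id_ed25519
+```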
+
+
+2. Install nginx and certbot:
+
+```
+apt install -y nginx certbot python3-certbot-nginx
+```
+
+3. Install Docker:
+
+```
+install -m 0755 -d /etc/apt/keyrings
+curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
+chmod a+r /etc/apt/keyrings/docker.gpg
+
+echo \
+ "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
+ "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
+
+apt update -y && apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
+```
+
+## Get a domain
+
+In this example, we are using audubon.app and its [nameservers point to Digital Ocean](https://docs.digitalocean.com/products/networking/dns/getting-started/dns-registrars/). You'll need to do the same.
+
+
+## Ansible Playbook to setup a simple k8s cluster
+
+The steps in this section should be completed on a fourth, separate machine (e.g., your laptop). If it's a Mac, ensure you are logged in as the first user that was created.
+
+1. Install ansible via virtual env
+
+```
+sudo apt install python3-pip python3.10-venv
+python3.10 -m venv ~/.local/venv/ansible
+source ~/.local/venv/ansible/bin/activate
+pip install ansible
+ansible --version
+```
+
+2. Install stack orchestrator
+
+```
+curl -L -o ~/bin/laconic-so https://git.vdb.to/cerc-io/stack-orchestrator/releases/download/latest/laconic-so
+chmod +x ~/bin/laconic-so
+laconic-so version
+```
+
+
+3. Clone the repo and enter the directory:
+
+```
+git clone https://git.vdb.to/cerc-io/service-provider-template.git
+cd service-provider-template/
+```
+
+4. Update the template
+
+- review [this commit](https://git.vdb.to/cerc-io/service-provider-template/commit/32e1ad0bd73f0754c0978c96eaee526fa841ddb4) and modify the domain, IP, and hostnames, etc., to match your setup.
+
+5. Install required roles:
+
+```
+ansible-galaxy install -f -p roles -r roles/requirements.yml
+```
+
+6. Set up the ansible vault
+
+a) supply the following:
+ - DO token
+ - PGP key
+ - SSH key
+
+where the latter two are from your local machine.
+
+b) do the other thing
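+
+For step (a), the encrypted vars file can be created and edited with `ansible-vault`. This is only a sketch; the `vault.yml` file name and variable layout are assumptions, so check the template's `group_vars` for what it actually expects:
+
+```
+ansible-vault create group_vars/lx_cad/vault.yml
+# add the DO token and references to your PGP/SSH keys, then edit later with:
+ansible-vault edit group_vars/lx_cad/vault.yml
+```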
+
+7. Generate a token for the cluster:
+
+```
+./roles/k8s/files/token-vault.sh ./group_vars/lx_cad/k8s-vault.yml
+```
+
+This creates your `kubeconfig.yml` (TODO: confirm where it is written).
+
+8. Configure firewalld and nginx for the hosts:
+
+```
+ansible-playbook -i hosts site.yml --tags=firewalld,nginx --user <remote-username>
+```
+
+9. Install Stack Orchestrator on the hosts:
+
+```
+ansible-playbook -i hosts site.yml --tags=so --limit=so --user <remote-username>
+```
+
+10. Deploy k8s to the hosts:
+
+```
+ansible-playbook -i hosts site.yml --tags=k8s --limit=lx_cad --user <remote-username>
+```
+
+**Note:** For debugging, to undeploy, add `--extra-vars 'k8s_action=destroy'` to the above command.
+
+11. Install k8s helper tools
+
+- on Linux systems:
+```
+sudo ~/lx-cad-deploy/roles/k8s/files/get-kube-tools.sh
+```
+
+- on a Mac:
+```
+brew install kubie kubectl yq helm
+```
+
+12. Verify cluster creation:
+
+```
+kubie ctx default
+kubectl get nodes -o wide
+kubectl get secrets --all-namespaces
+kubectl get clusterissuer
+kubectl get certificates
+kubectl get ds --all-namespaces
+```
+
+TODO tidy this section up
+
+### Set ingress annotations
+
+```
+kubectl annotate ingress laconic-26cc70be8a3db3f4-ingress nginx.ingress.kubernetes.io/proxy-body-size=0
+kubectl annotate ingress laconic-26cc70be8a3db3f4-ingress nginx.ingress.kubernetes.io/proxy-read-timeout=600
+kubectl annotate ingress laconic-26cc70be8a3db3f4-ingress nginx.ingress.kubernetes.io/proxy-send-timeout=600
+```
+
+where `laconic-26cc70be8a3db3f4` is your unique `cluster-id`.
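+
+If you're not sure of the exact ingress name, list the ingresses first:
+
+```
+kubectl get ingress --all-namespaces
+```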
+
+Note: this will be handled by SO in [this issue](https://git.vdb.to/cerc-io/stack-orchestrator/issues/849).
+
+## Configure DNS
+
+As mentioned, point your nameservers to DigitalOcean. Integration with other providers is possible; we use DigitalOcean as an example. Recall that your DO token is added to the ansible vault.
+
+The required records look like this:
+
+| Type | Hostname | Value |
+|--------|------------------------------------|------------------------------------|
+| A | lx-daemon.audubon.app | 23.111.69.218 |
+| A | lx-cad-cluster-worker.audubon.app | 23.111.78.182 |
+| A | lx-cad-cluster-control.audubon.app | 23.111.78.179 |
+| NS | audubon.app | ns1.digitalocean.com. |
+| NS | audubon.app | ns2.digitalocean.com. |
+| NS | audubon.app | ns3.digitalocean.com. |
+| CNAME | www.audubon.app | audubon.app. |
+| CNAME | laconicd.audubon.app | lx-daemon.audubon.app. |
+| CNAME | lx-backend.audubon.app | lx-daemon.audubon.app. |
+| CNAME | lx-console.audubon.app | lx-daemon.audubon.app. |
+| CNAME | lx-cad.audubon.app | lx-cad-cluster-worker.audubon.app. |
+| CNAME | *.lx-cad.audubon.app | lx-cad-cluster-worker.audubon.app. |
+| CNAME | pwa.audubon.app | lx-cad-cluster-worker.audubon.app. |
+| CNAME | *.pwa.audubon.app | lx-cad-cluster-worker.audubon.app. |
+
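+These records can be created in the DigitalOcean control panel or, as a sketch, with `doctl` (assuming it is installed and authenticated):
+
+```
+doctl compute domain records create audubon.app --record-type A --record-name lx-daemon --record-data 23.111.69.218
+doctl compute domain records create audubon.app --record-type CNAME --record-name laconicd --record-data lx-daemon.audubon.app.
+```
+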
+In DigitalOcean, it looks like:
+
+
+
+
+## Nginx and SSL
+
+If your initial ansible configuration was modified correctly, nginx and SSL will just work: the k8s cluster was created with the features and settings needed to automate these components.
+
+## Stack Orchestrator
+
+Stack Orchestrator should be:
+
+- installed on the daemon machine for use by the deployer
+- installed on your local machine
+
+### Deploy container registry
+
+This will be the first test that everything is configured correctly.
+
+```
+laconic-so --stack container-registry deploy init --output container-registry.spec
+laconic-so --stack container-registry deploy create --deployment-dir container-registry --spec-file container-registry.spec
+```
+
+The above commands create a new directory, `container-registry`. It looks like:
+
+```
+$ ls
+compose/ config.env data/ deployment.yml pods/ spec.yml stack.yml
+```
+We need to make a few modifications.
+
+The file `container-registry/compose/docker-compose-container-registry.yml` should look like:
+
+```
+services:
+ registry:
+ image: docker.io/library/registry:2.8
+ restart: always
+ environment:
+ REGISTRY_LOG_LEVEL: ${REGISTRY_LOG_LEVEL}
+ volumes:
+ - config:/config:ro
+ - registry-data:/var/lib/registry
+ ports:
+ - '5000'
+volumes:
+ config:
+ registry-data:
+```
+
+The `container-registry/spec.yml` should look like:
+
+```
+stack: container-registry
+deploy-to: k8s
+kube-config: /home/so/.kube/config-mito-lx-cad.yaml
+network:
+ ports:
+ registry:
+ - '5000'
+ http-proxy:
+ - host-name: container-registry.pwa.audubon.app
+ routes:
+ - path: '/'
+ proxy-to: registry:5000
+volumes:
+ registry-data:
+configmaps:
+ config: ./configmaps/config
+```
+
+Copy in the kubeconfig file:
+```
+cp /home/so/.kube/config-mito-lx-cad.yaml container-registry/kubeconfig.yml
+```
+
+### Htpasswd
+
+Delete the `container-registry/data` directory, and create a new `container-registry/configmaps/config/` directory which will contain the htpasswd file.
+
+Create the `htpasswd` file:
+
+```
+htpasswd -b -c container-registry/configmaps/config/htpasswd so-reg-user pXDwO5zLU7M88x3aA
+```
+
+The resulting file should look like:
+
+```
+so-reg-user:$2y$05$Eds.WkuUgn6XFUL8/NKSt.JTX.gCuXRGQFyJaRit9HhrUTsVrhH.W
+```
+
+Next, configure the file `container-registry/config.env` like this:
+
+```
+REGISTRY_AUTH=htpasswd
+REGISTRY_AUTH_HTPASSWD_REALM="Audubon Registry"
+REGISTRY_AUTH_HTPASSWD_PATH="/config/htpasswd"
+REGISTRY_HTTP_SECRET='$2y$05$Eds.WkuUgn6XFUL8/NKSt.JTX.gCuXRGQFyJaRit9HhrUTsVrhH.W'
+```
+
+Using these credentials, create a `container-registry/my_password.json` that looks like:
+
+```
+{
+ "auths": {
+ "container-registry.pwa.audubon.app": {
+ "username": "so-reg-user",
+ "password": "$2y$05$Eds.WkuUgn6XFUL8/NKSt.JTX.gCuXRGQFyJaRit9HhrUTsVrhH.W",
+ "auth": "c28tcmVnLXVzZXI6cFhEd081ekxVN004OHgzYUE="
+ }
+ }
+}
+```
+where the `auth:` field is the output of:
+```
+echo -n "so-reg-user:pXDwO5zLU7M88x3aA" | base64 -w0
+```
+
+Finally, add the container registry credentials as a secret available to the cluster:
+
+```
+kubectl create secret generic laconic-registry --from-file=.dockerconfigjson=container-registry/my_password.json --type=kubernetes.io/dockerconfigjson
+```
+
+And deploy it:
+
+```
+laconic-so deployment --dir container-registry start
+```
+
+Check the logs:
+```
+laconic-so deployment --dir container-registry logs
+```
+
+With the container registry successfully deployed, it is now possible to deploy webapps to the cluster.
+
+#### Deploy a test app
+
+```
+git clone git@git.vdb.to:cerc-io/test-progressive-web-app.git ~/cerc/test-progressive-web-app
+laconic-so build-webapp --source-repo ~/cerc/test-progressive-web-app
+```
+This clones the example app and builds a `cerc/test-progressive-web-app:local` image from it. Next, create a deployment directory for the webapp, pointing it at the cluster kubeconfig, the container registry, and the URL it should be served from:
+```
+laconic-so deploy-webapp create --kube-config /home/so/.kube/config-mito-lx-cad.yaml --image-registry container-registry.pwa.audubon.app --deployment-dir webapp-k8s-deployment --image cerc/test-progressive-web-app:local --url https://my-test-app.pwa.audubon.app --env-file ~/cerc/test-progressive-web-app/.env
+```
+Finally, push the image to the container registry and start the deployment:
+```
+laconic-so deployment --dir webapp-k8s-deployment push-images
+laconic-so deployment --dir webapp-k8s-deployment start
+```
+If everything worked, after a couple of minutes you should see a pod for this webapp, and the webapp running at https://my-test-app.pwa.audubon.app
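+
+To double check from the cluster side, list the pods and hit the URL:
+
+```
+kubectl get pods
+curl -I https://my-test-app.pwa.audubon.app
+```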
+
+### Deploy the laconicd registry and console
+
+Follow the instructions in [this document](https://git.vdb.to/cerc-io/stack-orchestrator/src/branch/main/docs/laconicd-with-console.md)
+
+After publishing sample records, you'll have a `bondId`. Also retrieve your `userKey` (private key), which will be required later.
+
+#### Set name authority
+
+```
+laconic -c $LACONIC_CONFIG cns authority reserve my-org-name
+laconic -c $LACONIC_CONFIG cns authority bond set my-org-name 0e9176d854bc3c20528b6361aab632f0b252a0f69717bf035fa68d1ef7647ba7
+```
+
+where `my-org-name` needs to be added to the `package.json` of any application deployed under this namespace. For example:
+
+```
+"name": "@my-org-name/my-application"
+```
+
+
+### Deploy deployer back end
+
+This service listens for `ApplicationDeploymentRequest`s in the Laconic Registry and automatically deploys applications to the k8s cluster, eliminating the manual steps just taken with the test app.
+
+```
+laconic-so --stack webapp-deployer deploy init --output webapp-deployer.spec
+laconic-so --stack webapp-deployer deploy create --deployment-dir webapp-deployer --spec-file webapp-deployer.spec
+```
+Modify the contents of `webapp-deployer`:
+
+`config.env`:
+
+```
+DEPLOYMENT_DNS_SUFFIX="pwa.audubon.app"
+DEPLOYMENT_RECORD_NAMESPACE="mito"
+IMAGE_REGISTRY="container-registry.pwa.audubon.app"
+IMAGE_REGISTRY_USER="so-reg-user"
+IMAGE_REGISTRY_CREDS="pXDwO5zLU7M88x3aA"
+CLEAN_DEPLOYMENTS=false
+CLEAN_LOGS=false
+CLEAN_CONTAINERS=false
+SYSTEM_PRUNE=false
+WEBAPP_IMAGE_PRUNE=true
+CHECK_INTERVAL=5
+FQDN_POLICY="allow"
+```
+
+In `webapp-deployer/data/config/` there need to be two files:
+ 1. `kube.yml` --> copied from `/home/so/.kube/config-mito-lx-cad.yaml`
+ 2. `laconic.yml` --> with the details for talking to laconicd
+
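+The first file can be copied into place directly:
+
+```
+cp /home/so/.kube/config-mito-lx-cad.yaml webapp-deployer/data/config/kube.yml
+```
+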
+The latter looks like:
+
+```
+services:
+ cns:
+ restEndpoint: 'https://lx-daemon.audubon.app:1317'
+ gqlEndpoint: 'https://lx-daemon.audubon.app/api'
+ userKey: e64ae9d07b21c62081b3d6d48e78bf44275ffe0575f788ea7b36f71ea559724b
+ bondId: ad9c977f4a641c2cf26ce37dcc9d9eb95325e9f317aee6c9f33388cdd8f2abb8
+ chainId: laconic_9000-1
+ gas: 9950000
+ fees: 500000aphoton
+ registry:
+ restEndpoint: 'https://lx-daemon.audubon.app:1317'
+ gqlEndpoint: 'https://lx-daemon.audubon.app/api'
+ userKey: e64ae9d07b21c62081b3d6d48e78bf44275ffe0575f788ea7b36f71ea559724b
+ bondId: ad9c977f4a641c2cf26ce37dcc9d9eb95325e9f317aee6c9f33388cdd8f2abb8
+ chainId: laconic_9000-1
+ gas: 9950000
+ fees: 500000aphoton
+```
+(Deduplication of the `cns` and `registry` fields will happen with `laconic2d`; for now both are required.)
+
+Start up the deployer:
+```
+laconic-so --stack webapp-deployer deployment start
+```
+
+Now, publishing records to the Laconic Registry will trigger deployments. See below for more details.
+
+### Deploy deployer UI
+
+To view the status and logs of deployments, we can deploy a UI:
+
+```
+git clone git@git.vdb.to:cerc-io/webapp-deployment-status-ui.git ~/cerc/webcerc/webapp-deployment-status-ui
+laconic-so build-webapp --source-repo ~/cerc/webcerc/webapp-deployment-status-ui
+```
+This builds a `cerc/webapp-deployment-status-ui:local` image. Next, create a deployment directory for the UI, pointing it at the cluster kubeconfig, the container registry, and its URL:
+```
+laconic-so deploy-webapp create --kube-config /home/so/.kube/config-mito-lx-cad.yaml --image-registry container-registry.pwa.audubon.app --deployment-dir webapp-ui --image cerc/webapp-deployment-status-ui:local --url https://webapp-deployer-ui.pwa.audubon.app --env-file ~/cerc/webcerc/webapp-deployment-status-ui/.env
+```
+Then push the image and start the deployment:
+```
+laconic-so deployment --dir webapp-ui push-images
+laconic-so deployment --dir webapp-ui start
+```
+
+Now visit https://webapp-deployer-ui.pwa.audubon.app for the status and logs of each deployment.
+
+## Result
+
+We now have:
+
+- https://lx-console.audubon.app displays registry records (webapp deployments)
+- https://container-registry.pwa.audubon.app hosts docker images used by webapp deployments
+- https://webapp-deployer-api.pwa.audubon.app listens for ApplicationDeploymentRequest and runs `laconic-so deploy-webapp-from-registry` behind the scenes
+- https://webapp-deployer-ui.pwa.audubon.app displays status and logs for webapps deployed via the Laconic Registry
+- https://my-test-app.pwa.audubon.app as an example webapp deployment (but not deployed via the registry)
+
+Let's take a look at how to configure a CI/CD workflow that deploys webapps via the registry.
+
+## Publishing webapps
+
+1. Create a `.gitea/workflows/publish.yml` that looks like:
+
+```
+name: Publish ApplicationRecord to Registry
+on:
+ release:
+ types: [published]
+
+env:
+ CERC_REGISTRY_USER_KEY: ${{ secrets.CICD_LACONIC_USER_KEY }}
+ CERC_REGISTRY_BOND_ID: ${{ secrets.CICD_LACONIC_BOND_ID }}
+
+jobs:
+ cns_publish:
+ runs-on: ubuntu-latest
+ steps:
+ - name: "Clone project repository"
+ uses: actions/checkout@v3
+ - name: Use Node.js
+ uses: actions/setup-node@v3
+ with:
+ node-version: 18
+ - name: "Enable Yarn"
+ run: corepack enable
+ - name: "Install registry CLI"
+ run: |
+ npm config set @cerc-io:registry https://git.vdb.to/api/packages/cerc-io/npm/
+ npm install -g @cerc-io/laconic-registry-cli
+ - name: "Install jq"
+ run: apt -y update && apt -y install jq
+ - name: "Publish Application Record"
+ run: scripts/publish-app-record.sh
+ - name: "Request Deployment"
+ run: scripts/request-app-deployment.sh
+```
+Set the repository secrets (`CICD_LACONIC_USER_KEY` and `CICD_LACONIC_BOND_ID`) using the `userKey` and `bondId` that were previously created.
+
+2. Add the two scripts referenced in the workflow. The first, `scripts/publish-app-record.sh`, looks like:
+```
+#!/bin/bash
+
+# scripts/publish-app-record.sh
+set -e
+
+RECORD_FILE=tmp.rf.$$
+CONFIG_FILE=`mktemp`
+
+CERC_APP_TYPE=${CERC_APP_TYPE:-"webapp/next"}
+CERC_REPO_REF=${CERC_REPO_REF:-${GITHUB_SHA:-`git log -1 --format="%H"`}}
+CERC_IS_LATEST_RELEASE=${CERC_IS_LATEST_RELEASE:-"true"}
+
+rcd_name=$(jq -r '.name' package.json | sed 's/null//')
+rcd_desc=$(jq -r '.description' package.json | sed 's/null//')
+rcd_repository=$(jq -r '.repository' package.json | sed 's/null//')
+rcd_homepage=$(jq -r '.homepage' package.json | sed 's/null//')
+rcd_license=$(jq -r '.license' package.json | sed 's/null//')
+rcd_author=$(jq -r '.author' package.json | sed 's/null//')
+rcd_app_version=$(jq -r '.version' package.json | sed 's/null//')
+
+cat < "$CONFIG_FILE"
+services:
+ cns:
+ restEndpoint: '${CERC_REGISTRY_REST_ENDPOINT:-http://console.laconic.com:1317}'
+ gqlEndpoint: '${CERC_REGISTRY_GQL_ENDPOINT:-http://console.laconic.com:9473/api}'
+ chainId: ${CERC_REGISTRY_CHAIN_ID:-laconic_9000-1}
+ gas: 950000
+ fees: 200000aphoton
+EOF
+
+next_ver=$(laconic -c $CONFIG_FILE cns record list --type ApplicationRecord --all --name "$rcd_name" 2>/dev/null | jq -r -s ".[] | sort_by(.createTime) | reverse | [ .[] | select(.bondId == \"$CERC_REGISTRY_BOND_ID\") ] | .[0].attributes.version" | awk -F. -v OFS=. '{$NF += 1 ; print}')
+
+if [ -z "$next_ver" ] || [ "1" == "$next_ver" ]; then
+ next_ver=0.0.1
+fi
+
+cat < "$RECORD_FILE"
+record:
+ type: ApplicationRecord
+ version: ${next_ver}
+ name: "$rcd_name"
+ description: "$rcd_desc"
+ homepage: "$rcd_homepage"
+ license: "$rcd_license"
+ author: "$rcd_author"
+ repository:
+ - "$rcd_repository"
+ repository_ref: "$CERC_REPO_REF"
+ app_version: "$rcd_app_version"
+ app_type: "$CERC_APP_TYPE"
+EOF
+
+
+cat $RECORD_FILE
+RECORD_ID=$(laconic -c $CONFIG_FILE cns record publish --filename $RECORD_FILE --user-key "${CERC_REGISTRY_USER_KEY}" --bond-id ${CERC_REGISTRY_BOND_ID} | jq -r '.id')
+echo $RECORD_ID
+
+if [ -z "$CERC_REGISTRY_APP_CRN" ]; then
+ authority=$(echo "$rcd_name" | cut -d'/' -f1 | sed 's/@//')
+ app=$(echo "$rcd_name" | cut -d'/' -f2-)
+ CERC_REGISTRY_APP_CRN="crn://$authority/applications/$app"
+fi
+
+laconic -c $CONFIG_FILE cns name set --user-key "${CERC_REGISTRY_USER_KEY}" --bond-id ${CERC_REGISTRY_BOND_ID} "$CERC_REGISTRY_APP_CRN@${rcd_app_version}" "$RECORD_ID"
+laconic -c $CONFIG_FILE cns name set --user-key "${CERC_REGISTRY_USER_KEY}" --bond-id ${CERC_REGISTRY_BOND_ID} "$CERC_REGISTRY_APP_CRN@${CERC_REPO_REF}" "$RECORD_ID"
+if [ "true" == "$CERC_IS_LATEST_RELEASE" ]; then
+ laconic -c $CONFIG_FILE cns name set --user-key "${CERC_REGISTRY_USER_KEY}" --bond-id ${CERC_REGISTRY_BOND_ID} "$CERC_REGISTRY_APP_CRN" "$RECORD_ID"
+fi
+
+rm -f $RECORD_FILE $CONFIG_FILE
+```
+and the second, `scripts/request-app-deployment.sh`:
+```
+#!/bin/bash
+
+set -e
+
+RECORD_FILE=tmp.rf.$$
+CONFIG_FILE=`mktemp`
+
+rcd_name=$(jq -r '.name' package.json | sed 's/null//' | sed 's/^@//')
+rcd_app_version=$(jq -r '.version' package.json | sed 's/null//')
+
+cat > "$CONFIG_FILE" <<EOF
+services:
+ cns:
+ restEndpoint: '${CERC_REGISTRY_REST_ENDPOINT:-http://console.laconic.com:1317}'
+ gqlEndpoint: '${CERC_REGISTRY_GQL_ENDPOINT:-http://console.laconic.com:9473/api}'
+ chainId: ${CERC_REGISTRY_CHAIN_ID:-laconic_9000-1}
+ gas: 950000
+ fees: 200000aphoton
+EOF
+
+if [ -z "$CERC_REGISTRY_APP_CRN" ]; then
+ authority=$(echo "$rcd_name" | cut -d'/' -f1 | sed 's/@//')
+ app=$(echo "$rcd_name" | cut -d'/' -f2-)
+ CERC_REGISTRY_APP_CRN="crn://$authority/applications/$app"
+fi
+
+APP_RECORD=$(laconic -c $CONFIG_FILE cns name resolve "$CERC_REGISTRY_APP_CRN" | jq '.[0]')
+if [ -z "$APP_RECORD" ] || [ "null" == "$APP_RECORD" ]; then
+ echo "No record found for $CERC_REGISTRY_APP_CRN."
+ exit 1
+fi
+
+cat > "$RECORD_FILE" <<EOF
+record:
+ type: ApplicationDeploymentRequest
+ version: 1.0.0
+ name: "$rcd_name@$rcd_app_version"
+ application: "$CERC_REGISTRY_APP_CRN@$rcd_app_version"
+ dns: "$CERC_REGISTRY_DEPLOYMENT_SHORT_HOSTNAME"
+ deployment: "$CERC_REGISTRY_DEPLOYMENT_CRN"
+ config:
+ env:
+ CERC_WEBAPP_DEBUG: "$rcd_app_version"
+ meta:
+ note: "Added by CI @ `date`"
+ repository: "`git remote get-url origin`"
+ repository_ref: "${GITHUB_SHA:-`git log -1 --format="%H"`}"
+EOF
+
+cat $RECORD_FILE
+RECORD_ID=$(laconic -c $CONFIG_FILE cns record publish --filename $RECORD_FILE --user-key "${CERC_REGISTRY_USER_KEY}" --bond-id ${CERC_REGISTRY_BOND_ID} | jq -r '.id')
+echo $RECORD_ID
+
+rm -f $RECORD_FILE $CONFIG_FILE
+```
+
+Now, anytime a release is created, a new set of records will be published to the Laconic Registry, and eventually picked up by the `deployer`, which will target the k8s cluster that was setup.
+
+**Note:** to override stack orchestrator's default webapp build process, put a file named `build-webapp.sh` in the root of your webapp repo.
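+
+For example, a hypothetical `build-webapp.sh` override might look like this (adjust to however your app actually produces its static build):
+
+```
+#!/bin/bash
+# hypothetical override of the default webapp build process
+set -e
+yarn install
+yarn build:static
+```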
+
+## Notes, debugging, unknowns
+
+- using `container-registry.pwa.audubon.app/laconic-registry` or `container-registry.pwa.audubon.app` both seem to work; TODO: investigate
+
+
+### DNS Secret example
+
+```
+apiVersion: v1
+data:
+ access-token: XXX
+kind: Secret
+metadata:
+ name: someprovider-dns
+ namespace: cert-manager
+```
diff --git a/content/urbit/_index.en.md b/content/urbit/_index.en.md
new file mode 100644
index 0000000..2c771cc
--- /dev/null
+++ b/content/urbit/_index.en.md
@@ -0,0 +1,7 @@
+---
+title: "Urbit"
+description: "TODO"
+weight: 6
+---
+
+Describe Urbit
diff --git a/content/urbit/azimuth.md b/content/urbit/azimuth.md
new file mode 100644
index 0000000..0cce96d
--- /dev/null
+++ b/content/urbit/azimuth.md
@@ -0,0 +1,222 @@
+---
+title: "Azimuth PKI"
+date: 2022-12-30T09:19:28-05:00
+draft: false
+weight: 2
+---
+
+Azimuth is the public-key infrastructure used for Urbit identities, deployed as [smart contracts](https://github.com/urbit/azimuth) on Ethereum. For a deep dive, the official documentation has an [in-depth reference](https://developers.urbit.org/reference/azimuth/azimuth).
+
+It currently [relies on events from Infura](https://developers.urbit.org/reference/azimuth/flow#eth-watcher) (e.g., via eth-mainnet.urbit.org), as seen in this diagram:
+
+
+
+Ideally, this core component of the Urbit stack would not rely on centralized entities. Additionally, events are not verifiable, which defeats the purpose of certain applications that rely on them.
+
+The problem of "getting data from Ethereum" is not unique to Urbit and plagues nearly all applications that rely on blockchain data. Users and Dapp developers either run a full archive node (tricky and expensive) or rely on centralized service providers (easy and expensive). This is one reason why Laconic created the [watcher framework](https://github.com/cerc-io/watcher-ts/), which significantly reduces the cost of reading and verifying blockchain data. This framework was used to create the Azimuth Watcher, which provides a GraphQL interface for querying the Azimuth contracts' state.
+
+## Usage
+
+The Azimuth Watcher is open source (see below) but requires good hardware to run. It is hosted here for your convenience:
+
+- https://azimuth.dev.vdb.to/graphql
+
+### Queries
+
+- the queried range can be at most 1000 blocks
+- the full history from contract genesis is included
+
+[Example query from contract genesis](https://azimuth.dev.vdb.to/graphql?query=%7B%0A++azimuthEventsInRange%28fromBlockNumber%3A+%0A6784880%2C+toBlockNumber%3A+%0A6785880%29+%7B%0A++++block+%7B%0A++++++hash%0A++++++timestamp%0A++++%7D%0A++++event+%7B%0A++++++...+on+OwnerChangedEvent+%7B%0A++++++++__typename%0A++++++++owner%0A++++++++point%0A++++++%7D%0A++++++...+on+ActivatedEvent+%7B%0A++++++++__typename%0A++++++++point%0A++++++%7D%0A++++++...+on+SpawnedEvent+%7B%0A++++++++__typename%0A++++++++child%0A++++++%7D%0A++++%7D%0A++++contract%0A++%7D%0A%7D)
+
+```
+{
+  azimuthEventsInRange(fromBlockNumber: 6784880, toBlockNumber: 6785880) {
+ block {
+ hash
+ timestamp
+ }
+ event {
+ ... on OwnerChangedEvent {
+ __typename
+ owner
+ point
+ }
+ ... on ActivatedEvent {
+ __typename
+ point
+ }
+ ... on SpawnedEvent {
+ __typename
+ child
+ }
+ }
+ contract
+ }
+}
+```
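+
+The same data can also be fetched with a plain HTTP request, for example:
+
+```
+curl -s https://azimuth.dev.vdb.to/graphql \
+  -H 'Content-Type: application/json' \
+  -d '{"query": "{ azimuthEventsInRange(fromBlockNumber: 6784880, toBlockNumber: 6785880) { block { hash timestamp } contract } }"}' | jq
+```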
+
+[Recent query](https://azimuth.dev.vdb.to/graphql?query=%7B%0A++azimuthEventsInRange%28fromBlockNumber%3A+%0A18664121%2C+toBlockNumber%3A+%0A18664122%29+%7B%0A++++block+%7B%0A++++++hash%0A++++++timestamp%0A++++%7D%0A++++event+%7B%0A++++++...+on+OwnerChangedEvent+%7B%0A++++++++__typename%0A++++++++owner%0A++++++++point%0A++++++%7D%0A++++++...+on+ActivatedEvent+%7B%0A++++++++__typename%0A++++++++point%0A++++++%7D%0A++++++...+on+SpawnedEvent+%7B%0A++++++++__typename%0A++++++++child%0A++++++%7D%0A++++%7D%0A++++contract%0A++%7D%0A%7D)
+
+```
+{
+  azimuthEventsInRange(fromBlockNumber: 18664121, toBlockNumber: 18664122) {
+ block {
+ hash
+ timestamp
+ }
+ event {
+ ... on OwnerChangedEvent {
+ __typename
+ owner
+ point
+ }
+ ... on ActivatedEvent {
+ __typename
+ point
+ }
+ ... on SpawnedEvent {
+ __typename
+ child
+ }
+ }
+ contract
+ }
+}
+```
+
+### Websocket Subscriptions
+
+#### With Console
+
+Go to https://azimuth.dev.vdb.to/azimuth/graphql and try:
+```
+ subscription MySubscription {
+ onEvent {
+ contract
+ event {
+ ... on OwnerChangedEvent {
+ owner
+ point
+ }
+ __typename
+ }
+ proof {
+ data
+ }
+ }
+ }
+```
+
+#### In an app
+
+in, e.g., `azimuth.js`:
+
+```
+// Reference: https://github.com/enisdenjo/graphql-ws/tree/v5.12.0#use-the-client
+const { createClient } = require('graphql-ws');
+const WebSocket = require('ws');
+
+const client = createClient({
+ url: 'wss://azimuth.dev.vdb.to/azimuth/graphql',
+ webSocketImpl: WebSocket
+});
+
+// subscription
+(async () => {
+ const onNext = (value) => {
+ /* handle incoming values */
+ console.log('Received new data:', JSON.stringify(value, null, 2));
+ };
+
+ let unsubscribe = () => {
+ /* complete the subscription */
+ console.log('subscription completed')
+ };
+
+ const query = `
+ subscription MySubscription {
+ onEvent {
+ contract
+ event {
+ ... on OwnerChangedEvent {
+ owner
+ point
+ }
+ __typename
+ }
+ }
+ }
+ `;
+
+ try {
+ await new Promise((resolve, reject) => {
+ unsubscribe = client.subscribe(
+ { query },
+ {
+ next: onNext,
+ error: reject,
+ complete: resolve,
+ },
+ );
+ });
+ } catch (err) {
+ console.error(err);
+ }
+})();
+```
+
+then run:
+```
+node azimuth.js
+```
+
+example responses:
+```
+Received new data: {
+ "data": {
+ "onEvent": {
+ "contract": "0x223c067F8CF28ae173EE5CafEa60cA44C335fecB",
+ "event": {
+ "owner": "0xCfB830a6ffBC26e847ec40533e102528F7F9D345",
+ "point": "2658108823",
+ "__typename": "OwnerChangedEvent"
+ }
+ }
+ }
+}
+Received new data: {
+ "data": {
+ "onEvent": {
+ "contract": "0x223c067F8CF28ae173EE5CafEa60cA44C335fecB",
+ "event": {
+ "__typename": "BrokeContinuityEvent"
+ }
+ }
+ }
+}
+Received new data: {
+ "data": {
+ "onEvent": {
+ "contract": "0x223c067F8CF28ae173EE5CafEa60cA44C335fecB",
+ "event": {
+ "__typename": "ChangedKeysEvent"
+ }
+ }
+ }
+}
+```
+
+## DIY
+
+- View the source code [here](https://github.com/cerc-io/azimuth-watcher-ts).
+- Use Stack Orchestrator to run the Azimuth Watcher [stack](https://github.com/cerc-io/stack-orchestrator/tree/main/app/data/stacks/azimuth).
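+
+As a rough sketch (the exact steps and required config live in the stack's README), the workflow mirrors the other Stack Orchestrator stacks in these docs:
+
+```
+laconic-so --stack azimuth setup-repositories
+laconic-so --stack azimuth build-containers
+laconic-so --stack azimuth deploy init --output azimuth-spec.yml
+laconic-so --stack azimuth deploy create --spec-file azimuth-spec.yml --deployment-dir azimuth-deployment
+laconic-so deployment --dir azimuth-deployment start
+```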
+
+## Future Work
+
+This initial implementation was funded retroactively by the Urbit Foundation. Additional funding is available for a Hoon developer to integrate this source of Azimuth data as an option when running UrbitOS. Contact ~labtul-moltev if you are interested in tackling this task.
diff --git a/content/urbit/defi-frontends.md b/content/urbit/defi-frontends.md
new file mode 100644
index 0000000..66d24c7
--- /dev/null
+++ b/content/urbit/defi-frontends.md
@@ -0,0 +1,665 @@
+---
+title: "DeFi Front Ends on Urbit"
+date: 2022-12-30T09:19:28-05:00
+draft: false
+weight: 2
+---
+
+TL;DR: If you're already on Urbit, run `|install ~lanfyn-dasnys %uniswap` from the Dojo or use Landscape to search for it. The following guide outlines how to modify, then automate the deployment of, any web3 application on Urbit, using Uniswap as an example.
+
+Web3 applications still remain significantly centralized. The blockchain used by a particular application usually cannot easily be censored; front ends, however, are served to users via a centralized pipeline of service providers, each of which is a choke point for that application.
+
+On Urbit, an application you install on your ship is always available to you. It will be available to others on the network if you publish it to your ship. However, it will only remain available to others if your ship is online, so what was once an easy-to-install application can easily become unavailable. Indeed, this is what happened to the Osmosis and Uniswap front ends on Urbit.
+
+Laconic provides various solutions for existing web3 applications that require increased robustness and jurisdictional diversity (read: decentralization), hence the natural alignment with Urbit. The following guide uses Laconic's Stack Orchestrator tool to demonstrate how any web3 application front end can be integrated into existing CI/CD workflows in order to publish and maintain a front end on Urbit.
+
+Broadly, the steps are as follows:
+
+- Modify app to conform with Urbit requirements
+- Generate and host a glob file
+- Publish app to your ship
+
+The first step is very much application dependent but only needs to be done once. Steps 2 and 3 are easily automated via familiar CI/CD pipelines. This tutorial outlines how to add any DeFi application to Urbit and gives an example using Uniswap. Background reading of these Urbit docs is helpful:
+
+- https://docs.urbit.org/manual/getting-started/get-id
+- https://docs.urbit.org/manual/getting-started/self-hosted/cli
+- https://docs.urbit.org/userspace/apps/guides/software-distribution
+- https://docs.urbit.org/userspace/apps/reference/dist (esp. the `glob` section)
+- https://docs.urbit.org/courses/app-school-full-stack/8-desk
+
+## Modify App
+
+Your app front end must comply with a variety of Urbit requirements. The first step is to ensure that you can compile a static build of your application. This build will be consumed by the Urbit `globulator` and these files will need to be located in the root directory of your Urbit ship (usually a planet). More on this later. You might already have `yarn build:static` or an existing way to generate a static build for your app. If not, ensure your application can run as intended when served as a static build.
+
+The next step is to ensure a few things about the files, paths, and URLs in your app. First, all uppercase letters in your app's file names need to be converted to lowercase. As well, ensure that file paths and URLs do not contain square brackets; Next.js can do awkward things in this regard, for example with dynamic path rendering.
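+
+A quick way to spot offending files in a static build (a sketch, assuming the build output lands in `./build`):
+
+```
+# file names containing uppercase letters
+find ./build -name '*[A-Z]*'
+# paths containing square brackets (e.g. Next.js dynamic routes)
+find ./build -path '*\[*'
+```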
+
+Next, you’ll need to generate Urbit mark files that are missing from the default `%landscape` desk. For simple applications, this should not be necessary. Every file extension in your application requires a corresponding mark file. These are short files written in Hoon that are required for the globulator to function correctly. By inspecting existing mark files, you should get a good idea as to what a new one needs to look like if your application contains any exotic file extensions.
+
+Finally, you’ll have to decide what to do with external API calls and other services that your app uses. In the case of Uniswap, we run a proxy server and forward requests to the original application. Other solutions to address this issue are outside the scope of this blog post.
+
+You can view the modifications required to the Uniswap front end in [our fork here](https://github.com/cerc-io/uniswap-interface/pulls?q=is%3Apr+is%3Aclosed). These changes aren't upstreamed; publishing new versions to Urbit therefore requires manually rebasing and addressing any merge conflicts, followed by re-globbing and updating the `desk.docket-0` file, as described next. For your application, this process can easily be automated using existing CI/CD workflows.
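+
+For example, bringing such a fork up to date might look like this (the remote names are assumptions):
+
+```
+git fetch upstream                 # upstream = the original application repo
+git checkout laconic
+git rebase upstream/main           # resolve conflicts in the Urbit-specific changes
+git push --force-with-lease origin laconic
+```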
+
+## Globulate
+
+The front end of an Urbit app is packaged up into something called a `glob`. This glob can either be served over http or ames. The Urbit documentation has great examples using ames and the globulator UI. For ease of automation, we use http and glob from a bash script that sends `curl` requests to a running fakezod.
+
+The static build of the application comprises the files that need to be globbed. A directory of these files needs to be located in the root of your ship's pier directory. Under the hood, we've abstracted away the majority of this part.
+
+### Glob Hosting
+
+An http glob can be hosted wherever you want, like Amazon S3 or Digital Ocean Spaces Object Storage. The Laconic solution by default includes a locally running IPFS node to which the glob is published.
+
+### desk.docket-0
+
+The tile for each app that you see when logging into your Urbit is defined by the `desk.docket-0` file. Therefore, adding CI/CD workflows to publish a traditional app on Urbit requires updating this file and re-publishing the application.
+
+## Install and Publish
+
+This part ensures that your `desk.docket-0` is correct, then runs `|install our %uniswap` in the Urbit dojo. The app should now be available as a tile in Landscape. To make it available for others to install from your ship, run `:treaty|publish %uniswap` in the dojo.
+
+To separate the development and production workflows, we use a fakezod for reviewing and testing modifications, then when ready to publish to the network, use a deployment script directed at a live planet. That's how Uniswap was made available to anyone on the Urbit network. Run `|install ~lanfyn-dasnys %uniswap` in your dojo.
+
+## Demo
+
+### Install Stack Orchestrator
+
+```
+git clone https://github.com/cerc-io/stack-orchestrator.git
+cd stack-orchestrator
+./scripts/quick-install-linux.sh
+```
+
+Press Y and follow the instructions at the end, then:
+
+```
+laconic-so version
+```
+
+should look like:
+
+```
+Version: 1.1.0-cef73d8-202401231732
+```
+
+With a handful of new concepts involved in Urbit app development, automated DeFi deployments happened to be a great fit for the Laconic Stack Orchestrator tool. We've distilled the above steps into a few commands that can be run by anyone on a stock Digital Ocean droplet. The following instructions will build and deploy the Uniswap front end to a fakezod. `laconic-so` has specific "stacks" that are defined by a `stack.yml`. For the Uniswap Urbit App, it looks like this:
+
+```yaml
+version: "0.1"
+name: uniswap-urbit-app
+repos:
+ - github.com/cerc-io/uniswap-interface@laconic
+ - github.com/cerc-io/watcher-ts@v0.2.78
+containers:
+ - cerc/uniswap-interface
+ - cerc/watcher-ts
+pods:
+ - uniswap-interface
+ - proxy-server
+ - fixturenet-urbit
+ - kubo
+```
+
+We'll be building two docker images: one for the app and one for the proxy server. Urbit and Kubo (IPFS) are run using their default docker images.
+
+### Setup
+
+First, clone the required repositories:
+
+```
+laconic-so --stack uniswap-urbit-app setup-repositories
+```
+
+The output looks like:
+
+```
+Dev Root is: /root/cerc
+Dev root directory doesn't exist, creating
+Checking: /root/cerc/uniswap-interface: Needs to be fetched
+100%|#####################################################################################################################| 43.8k/43.8k [00:05<00:00, 7.77kB/s]
+switching to branch laconic in repo cerc-io/uniswap-interface
+Checking: /root/cerc/watcher-ts: Needs to be fetched
+100%|#####################################################################################################################| 12.2k/12.2k [00:01<00:00, 6.82kB/s]
+switching to branch v0.2.78 in repo cerc-io/watcher-ts
+```
+
+You can see we cloned the two repos and switched to the branch/tag/version specified in the `stack.yml`.
+
+### Build
+
+```
+laconic-so --stack uniswap-urbit-app build-containers
+```
+
+This can take a while and will produce a ton of output; if successful, you'll see something like:
+
+```
+Successfully built fa34caca25f1
+Successfully tagged cerc/watcher-ts:local
+```
+
+at the end.
+
+The `uniswap-interface` image is simple; it installs the dependencies for our modified version of Uniswap. The static build will be produced at deploy time.
+
+```
+FROM node:18.17.1-alpine3.18
+
+RUN apk --update --no-cache add git make alpine-sdk bash
+
+WORKDIR /app
+
+COPY . .
+
+RUN echo "Building uniswap-interface" && \
+ yarn
+```
+
+The `watcher-ts` image is used for the proxy server. By default, this proxy server redirects requests back to Uniswap, which means the Uniswap front end on Urbit requires api.uniswap.org to be up and running. The configuration also takes an optional Infura API key, which would be required for power users or if traffic to the application increases.
+
+### Create Deployment
+
+First, create a spec file for the deployment:
+
+```
+laconic-so --stack uniswap-urbit-app deploy init --output uniswap-urbit-app-spec.yml
+```
+
+Edit `uniswap-urbit-app-spec.yml` so that it looks exactly like:
+
+```yaml
+stack: uniswap-urbit-app
+deploy-to: compose
+network:
+ ports:
+ proxy-server:
+ - '4000:4000'
+ urbit-fake-ship:
+ - '8080:80'
+ ipfs:
+ - '4001'
+ - '8081:8080'
+ - 0.0.0.0:5001:5001
+volumes:
+ urbit_app_builds: ./data/urbit_app_builds
+ urbit_data: ./data/urbit_data
+ ipfs-import: ./data/ipfs-import
+ ipfs-data: ./data/ipfs-data
+```
+
+Save your changes then create a deployment from that file:
+
+```bash
+laconic-so --stack uniswap-urbit-app deploy create --spec-file uniswap-urbit-app-spec.yml --deployment-dir uniswap-urbit-app-deployment
+```
+
+Open `uniswap-urbit-app-deployment/config.env` and set the following:
+
+```bash
+# App to be installed (Do not change)
+CERC_URBIT_APP=uniswap
+
+# External RPC endpoints
+# https://docs.infura.io/getting-started#2-create-an-api-key
+# not required for demo
+CERC_INFURA_KEY=
+
+# Uniswap API GQL Endpoint
+# Set this to GQL proxy server endpoint for uniswap app
+# (Eg. http://localhost:4000/v1/graphql - in case stack is being run locally with proxy enabled)
+# (Eg. https://abc.xyz.com/v1/graphql - in case https://abc.xyz.com is pointed to the proxy endpoint)
+# replace `localhost` with the IP of your Digital Ocean droplet
+CERC_UNISWAP_GQL=http://localhost:4000/v1/graphql
+
+# Optional
+
+# Whether to enable app installation on Urbit
+# (just builds and uploads the glob file if disabled) (Default: true)
+CERC_ENABLE_APP_INSTALL=
+
+# Whether to run the proxy GQL server
+# (disable only if proxy not required to be run) (Default: true)
+CERC_ENABLE_PROXY=
+
+# Proxy server configuration
+# Used only if proxy is enabled
+
+# Upstream API URL
+# (Eg. https://api.example.org)
+CERC_PROXY_UPSTREAM=https://api.uniswap.org
+
+# Origin header to be used in the proxy
+# (Eg. https://app.example.org)
+CERC_PROXY_ORIGIN_HEADER=https://app.uniswap.org
+
+# IPFS configuration
+
+# IFPS endpoint to host the glob file on
+# (Default: http://ipfs:5001 pointing to in-stack IPFS node)
+CERC_IPFS_GLOB_HOST_ENDPOINT=
+
+# IFPS endpoint to fetch the glob file from
+# (Default: http://ipfs:8080 pointing to in-stack IPFS node)
+CERC_IPFS_SERVER_ENDPOINT=
+```
+
+Great, you can now start the stack with:
+
+```bash
+laconic-so deployment --dir uniswap-urbit-app-deployment start
+```
+
+It will take a while (5-15 mins) to deploy; you can watch progress with the following command:
+
+```
+laconic-so deployment --dir uniswap-urbit-app-deployment logs -f
+```
+
+See the [status](#status) below for details to confirm correct operation. Meanwhile, let's take a look at what is happening under the hood.
+
+For example, the `docker-compose.yml` for the fakezod that is about to be deployed looks like:
+
+```
+version: '3.7'
+
+services:
+ # Runs an Urbit fake ship and attempts an app installation using given data
+ # Uploads the app glob to given IPFS endpoint
+ # From urbit_app_builds volume:
+ # - takes app build from ${CERC_URBIT_APP}/build (waits for it to appear)
+ # - takes additional mark files from ${CERC_URBIT_APP}/mar
+ # - takes the docket file from ${CERC_URBIT_APP}/desk.docket-0
+ urbit-fake-ship:
+ restart: unless-stopped
+ image: tloncorp/vere
+ environment:
+ CERC_SCRIPT_DEBUG: ${CERC_SCRIPT_DEBUG}
+ CERC_URBIT_APP: ${CERC_URBIT_APP}
+ CERC_ENABLE_APP_INSTALL: ${CERC_ENABLE_APP_INSTALL:-true}
+ CERC_IPFS_GLOB_HOST_ENDPOINT: ${CERC_IPFS_GLOB_HOST_ENDPOINT:-http://ipfs:5001}
+ CERC_IPFS_SERVER_ENDPOINT: ${CERC_IPFS_SERVER_ENDPOINT:-http://ipfs:8080}
+ entrypoint: ["bash", "-c", "./run-urbit-ship.sh && ./deploy-app.sh && tail -f /dev/null"]
+ volumes:
+ - urbit_data:/urbit
+ - urbit_app_builds:/app-builds
+ - ../config/urbit/run-urbit-ship.sh:/urbit/run-urbit-ship.sh
+ - ../config/urbit/deploy-app.sh:/urbit/deploy-app.sh
+ ports:
+ - "80"
+ healthcheck:
+ test: ["CMD", "nc", "-v", "localhost", "80"]
+ interval: 20s
+ timeout: 5s
+ retries: 15
+ start_period: 10s
+
+volumes:
+ urbit_data:
+ urbit_app_builds:
+```
+
+On deploy of the above, a fakezod starts up and waits for the static build to appear so it can be globbed. The glob file is then published to a locally running IPFS node and referenced when updating the `desk.docket-0` file. The scripts `run-urbit-ship.sh` and `deploy-app.sh` look like, respectively:
+
+```
+#!/bin/bash
+
+pier_dir="/urbit/zod"
+
+# Run urbit ship in daemon mode
+# Check if the directory exists
+if [ -d "$pier_dir" ]; then
+ echo "Pier directory already exists, rebooting..."
+ /urbit/zod/.run -d
+else
+ echo "Creating a new fake ship..."
+ urbit -d -F zod
+fi
+```
+
+and
+
+```
+#!/bin/bash
+
+if [ -z "$CERC_URBIT_APP" ]; then
+ echo "CERC_URBIT_APP not set, exiting"
+ exit 0
+fi
+
+echo "Creating Urbit application for ${CERC_URBIT_APP}"
+
+app_desk_dir=/urbit/zod/${CERC_URBIT_APP}
+if [ -d ${app_desk_dir} ]; then
+ echo "Desk dir already exists for ${CERC_URBIT_APP}, skipping deployment..."
+ exit 0
+fi
+
+app_build=/app-builds/${CERC_URBIT_APP}/build
+app_mark_files=/app-builds/${CERC_URBIT_APP}/mar
+app_docket_file=/app-builds/${CERC_URBIT_APP}/desk.docket-0
+
+echo "Reading app build from ${app_build}"
+echo "Reading additional mark files from ${app_mark_files}"
+echo "Reading docket file ${app_docket_file}"
+
+# Loop until the app's build appears
+while [ ! -d ${app_build} ]; do
+ echo "${CERC_URBIT_APP} app build not found, retrying in 5s..."
+ sleep 5
+done
+echo "Build found..."
+
+echo "Using IPFS endpoint ${CERC_IPFS_GLOB_HOST_ENDPOINT} for hosting the ${CERC_URBIT_APP} glob"
+echo "Using IPFS server endpoint ${CERC_IPFS_SERVER_ENDPOINT} for reading ${CERC_URBIT_APP} glob"
+ipfs_host_endpoint=${CERC_IPFS_GLOB_HOST_ENDPOINT}
+ipfs_server_endpoint=${CERC_IPFS_SERVER_ENDPOINT}
+
+# Fire curl requests to perform operations on the ship
+dojo () {
+ curl -s --data '{"source":{"dojo":"'"$1"'"},"sink":{"stdout":null}}' http://localhost:12321
+}
+
+hood () {
+ curl -s --data '{"source":{"dojo":"+hood/'"$1"'"},"sink":{"app":"hood"}}' http://localhost:12321
+}
+
+# Create / mount the app's desk
+hood "merge %${CERC_URBIT_APP} our %landscape"
+hood "mount %${CERC_URBIT_APP}"
+
+# Copy over build to desk data dir
+cp -r ${app_build} ${app_desk_dir}
+
+# Copy over the additional mark files (if required for your application)
+cp ${app_mark_files}/* ${app_desk_dir}/mar/
+
+# Remove unnecessary files
+rm "${app_desk_dir}/desk.bill"
+rm "${app_desk_dir}/desk.ship"
+
+# Commit changes and create a glob
+hood "commit %${CERC_URBIT_APP}"
+dojo "-landscape!make-glob %${CERC_URBIT_APP} /build"
+
+glob_file=$(ls -1 -c zod/.urb/put | head -1)
+echo "Created glob file: ${glob_file}"
+
+# Upload the glob file to IPFS (running locally by default)
+echo "Uploading glob file to ${ipfs_host_endpoint}"
+upload_response=$(curl -X POST -F file=@./zod/.urb/put/${glob_file} ${ipfs_host_endpoint}/api/v0/add)
+glob_cid=$(echo "$upload_response" | grep -o '"Hash":"[^"]*' | sed 's/"Hash":"//')
+
+glob_url="${ipfs_server_endpoint}/ipfs/${glob_cid}?filename=${glob_file}"
+glob_hash=$(echo "$glob_file" | sed "s/glob-\([a-z0-9\.]*\).glob/\1/")
+
+echo "Glob file uploaded to IFPS:"
+echo "{ cid: ${glob_cid}, filename: ${glob_file} }"
+echo "{ url: ${glob_url}, hash: ${glob_hash} }"
+
+# Exit if the installation not required
+if [ "$CERC_ENABLE_APP_INSTALL" = "false" ]; then
+ echo "CERC_ENABLE_APP_INSTALL set to false, skipping app installation"
+ exit 0
+fi
+
+# Curl and wait for the glob to be hosted
+echo "Checking if glob file hosted at ${glob_url}"
+while true; do
+ response=$(curl -sL -w "%{http_code}" -o /dev/null "$glob_url")
+
+ if [ $response -eq 200 ]; then
+ echo "File found at $glob_url"
+ break # Exit the loop if the file is found
+ else
+ echo "File not found, retrying in a 5s..."
+ sleep 5
+ fi
+done
+
+# Copy in the docket file and substitute the glob URL and hash
+cp ${app_docket_file} ${app_desk_dir}/
+sed -i "s|REPLACE_WITH_GLOB_URL|${glob_url}|g; s|REPLACE_WITH_GLOB_HASH|${glob_hash}|g" ${app_desk_dir}/desk.docket-0
+
+# Commit changes and install the app
+hood "commit %${CERC_URBIT_APP}"
+hood "install our %${CERC_URBIT_APP}"
+
+echo "${CERC_URBIT_APP} app installed"
+```
+
+Thus, once we start the stack, a loop waits for the static build to complete and the glob to be published, then finalizes installation of the application with the updated `desk.docket-0`.
+
+The `docker-compose.yml` for the uniswap-interface looks like:
+
+```
+version: "3.2"
+
+services:
+ uniswap-interface:
+ image: cerc/uniswap-interface:local
+ restart: on-failure
+ environment:
+ - REACT_APP_INFURA_KEY=${CERC_INFURA_KEY}
+ - REACT_APP_AWS_API_ENDPOINT=${CERC_UNISWAP_GQL}
+ command: ["./build-app.sh"]
+ volumes:
+ - ../config/uniswap-interface/build-app.sh:/app/build-app.sh
+ - urbit_app_builds:/app-builds
+ - ../config/uniswap-interface/urbit-files/mar:/app/mar
+ - ../config/uniswap-interface/urbit-files/desk.docket-0:/app/desk.docket-0
+
+volumes:
+ urbit_app_builds:
+```
+
+where `build-app.sh` looks like:
+
+```
+#!/bin/bash
+
+# Check and exit if a deployment already exists (for restarts)
+if [ -d /app-builds/uniswap/build ]; then
+ echo "Build already exists, remove volume to rebuild"
+ exit 0
+fi
+
+yarn build
+
+# Copy over build and other files to app-builds for urbit deployment
+mkdir -p /app-builds/uniswap
+cp -r ./build /app-builds/uniswap/
+
+cp -r mar /app-builds/uniswap/
+cp desk.docket-0 /app-builds/uniswap/
+```
+
+Throughout the process, docker volumes are used to make files available across containers.
+
+There are three mark files that we needed to create and include (recall that any file extensions in the static build not covered by the `%landscape` desk need mark files added to your desk): `map.hoon`, `ttf.hoon`, and `woff.hoon`. See them [here](https://github.com/cerc-io/stack-orchestrator/tree/main/stack_orchestrator/data/config/uniswap-interface/urbit-files/mar); this is what `woff.hoon` looks like:
+
+```
+|_ dat=octs
+++ grow
+ |%
+ ++ mime [/font/woff dat]
+ --
+++ grab
+ |%
+ ++ mime |=([=mite =octs] octs)
+ ++ noun octs
+ --
+++ grad %mime
+--
+```
+
+The `desk.docket-0` file is application specific; for Uniswap it looks like:
+
+```
+:~ title+'Uniswap'
+ info+'Self-hosted uniswap frontend.'
+ color+0xcd.75df
+ image+'https://logowik.com/content/uploads/images/uniswap-uni7403.jpg'
+ base+'uniswap'
+ glob-http+['REPLACE_WITH_GLOB_URL' REPLACE_WITH_GLOB_HASH]
+ version+[0 0 1]
+ website+'https://uniswap.org/'
+ license+'MIT'
+==
+```
+
+Recall from the script above that we use `sed` to populate the `glob-http` field. By using Stack Orchestrator, we've set up a process that is easily repeatable and automated, albeit somewhat tedious at first.
+
+### Status
+
+Depending on the specs of your machine, starting a deployment can take anywhere from 5-15 minutes.
+
+Recall that you can run the following to view logs from all processes as they come in:
+
+```
+laconic-so deployment --dir uniswap-urbit-app-deployment logs -f
+```
+
+Eventually, you'll see:
+
+```
+laconic-3ccf7ee79bdae874-urbit-fake-ship-1 | docket: fetching %http glob for %uniswap desk
+laconic-3ccf7ee79bdae874-urbit-fake-ship-1 | ">="">="uniswap app installed
+```
+
+which is great. Exit from following those logs, then double check that everything is running with `docker ps`; all containers should be `healthy`.
+
+Fakezods all have the same default password, `lidlut-tabwed-pillex-ridrup`; you can confirm this by running the following command:
+
+```
+laconic-so deployment --dir uniswap-urbit-app-deployment exec urbit-fake-ship "curl -s --data '{\"source\":{\"dojo\":\"+code\"},\"sink\":{\"stdout\":null}}' http://localhost:12321"
+```
+
+Navigate to http://localhost:8080 and enter the password to login. You should see the Uniswap tile. If you have MetaMask installed in your browser, Uniswap should work. Congratulations, you've built and deployed the Uniswap front end to an Urbit fakezod.
+
+## Deploy and Automate
+
+After this step, we use this pair of scripts to publish the desk to our live ship `~lanfyn-dasnys`. You can do the same for your application and add it to your CI/CD workflows in order to publish the latest version of your app to your Urbit ship. The options here are of course endless; you could have a single ship hosting multiple versions of your app, easily browsable by any user.
+
+```
+#!/bin/bash
+
+# $1: Remote user host
+# $2: App name (eg. uniswap)
+# $3: Assets dir path (local) for app (eg. /home/user/myapp/urbit-files)
+# $4: Remote Urbit ship's pier dir path (eg. /home/user/zod)
+# $5: Glob file URL (eg. https://xyz.com/glob-0vabcd.glob)
+# $6: Glob file hash (eg. 0vabcd)
+
+if [ "$#" -ne 6 ]; then
+ echo "Incorrect number of arguments"
+ echo "Usage: $0 "
+ exit 1
+fi
+
+remote_user_host="$1"
+app_name=$2
+app_assets_folder=$3
+remote_pier_folder="$4"
+glob_url="$5"
+glob_hash="$6"
+
+installation_script="./install-urbit-app.sh"
+
+# Copy over the assets to remote machine in a tmp dir
+remote_app_assets_folder=/tmp/urbit-app-assets/$app_name
+ssh "$remote_user_host" "mkdir -p $remote_app_assets_folder"
+scp -r $app_assets_folder/* $remote_user_host:$remote_app_assets_folder
+
+# Run the installation script
+ssh "$remote_user_host" "bash -s $app_name $remote_app_assets_folder '${glob_url}' $glob_hash $remote_pier_folder" < "$installation_script"
+
+# Remove the tmp assets dir
+ssh "$remote_user_host" "rm -rf $remote_app_assets_folder"
+```
+
+The `./install-urbit-app.sh` script looks like:
+
+```
+#!/bin/bash
+
+# $1: App name (eg. uniswap)
+# $2: Assets dir path (local) for app (eg. /home/user/myapp/urbit-files)
+# $3: Glob file URL (eg. https://xyz.com/glob-0vabcd.glob)
+# $4: Glob file hash (eg. 0vabcd)
+# $5: Urbit ship's pier dir (default: ./zod)
+
+if [ "$#" -lt 4 ]; then
+ echo "Insufficient arguments"
+ echo "Usage: $0 [/path/to/remote/pier/folder]"
+ exit 1
+fi
+
+app_name=$1
+app_mark_files=$2/mar
+app_docket_file=$2/desk.docket-0
+echo "Creating Urbit application for ${app_name}"
+echo "Reading additional mark files from ${app_mark_files}"
+echo "Reading docket file ${app_docket_file}"
+
+glob_url=$3
+glob_hash=$4
+echo "Using glob file from ${glob_url} with hash ${glob_hash}"
+
+# Default pier dir: ./zod
+# Default desk dir: <pier-dir>/<app-name> (./zod/<app-name> by default)
+pier_dir="${5:-./zod}"
+app_desk_dir="${pier_dir}/${app_name}"
+echo "Using ${app_desk_dir} as the ${app_name} desk dir path"
+
+# Fire curl requests to perform operations on the ship
+hood () {
+ curl -s --data '{"source":{"dojo":"+hood/'"$1"'"},"sink":{"app":"hood"}}' http://localhost:12321
+}
+
+# Create / mount the app's desk
+hood "merge %${app_name} our %landscape"
+hood "mount %${app_name}"
+
+# Copy over the additional mark files
+cp ${app_mark_files}/* ${app_desk_dir}/mar/
+
+rm "${app_desk_dir}/desk.bill"
+rm "${app_desk_dir}/desk.ship"
+
+# Replace the docket file for app
+# Substitute the glob URL and hash
+cp ${app_docket_file} ${app_desk_dir}/
+sed -i "s|REPLACE_WITH_GLOB_URL|${glob_url}|g; s|REPLACE_WITH_GLOB_HASH|${glob_hash}|g" ${app_desk_dir}/desk.docket-0
+
+# Commit changes and install the app
+hood "commit %${app_name}"
+hood "install our %${app_name}"
+
+echo "${app_name} app installed"
+```
+
+## Summary
+
+In this guide, we've gone through all the considerations for maintaining an up-to-date Urbit deployment of any web3 application. We used Stack Orchestrator to reduce the number of files and processes to keep track of when automating repeated deployments to Urbit.
+
+To demystify the file structure of a "stack" in Stack Orchestrator, view [this Pull Request](TODO), which adds the Urbit Hello World example as a stack.
+
+## References
+
+- https://github.com/cerc-io/stack-orchestrator/blob/main/stack_orchestrator/data/stacks/uniswap-urbit-app/stack.yml
+- https://github.com/cerc-io/stack-orchestrator/tree/main/stack_orchestrator/data/config/urbit
+- https://github.com/cerc-io/stack-orchestrator/tree/main/stack_orchestrator/data/config/uniswap-interface
+- https://github.com/cerc-io/stack-orchestrator/blob/main/stack_orchestrator/data/compose/docker-compose-fixturenet-urbit.yml
+- https://github.com/cerc-io/stack-orchestrator/blob/main/stack_orchestrator/data/compose/docker-compose-uniswap-interface.yml
+- https://github.com/cerc-io/stack-orchestrator/blob/main/stack_orchestrator/data/compose/docker-compose-proxy-server.yml
+- https://github.com/cerc-io/stack-orchestrator/blob/main/stack_orchestrator/data/compose/docker-compose-kubo.yml
+- https://github.com/cerc-io/stack-orchestrator/tree/main/stack_orchestrator/data/container-build/cerc-uniswap-interface
+
+## Glossary
+
+- ship - a running Urbit
+- desk - "app" installed on a ship
+- planet - an Urbit ID that runs as a ship
+- glob file - the output of feeding your app's static files to the globulator
+- mark file - Urbit apps need a mark file for every file extension
+- landscape - a default app (desk) that comes with *most* mark files that you need.
+- Hoon - a programming language for Urbit; not relevant to this guide