diff --git a/README.md b/README.md
index 9e6506fc04..5c4b4d58a6 100644
--- a/README.md
+++ b/README.md
@@ -1,182 +1,2 @@
-
+# Page
-
+
+Use the following command to submit a voluntary exit with Teku:
+
+```sh
+docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+  --beacon-node-api-endpoint="http://charon:3600/" \
+  --confirmation-enabled=false \
+  --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+  --epoch=162304
+```
+
+Run the command below to submit a voluntary exit with Nimbus. It:
+
+1. Copies the `/home/user/data/charon` directory to the newly created `/home/user/data/wd` directory.
+2. For each keystore file in the `/home/user/data/wd/secrets` directory, appends `--validator=<filename>` to the `command` variable.
+3. Runs `nimbus_beacon_node` with the following arguments:
+   - `deposits exit`: Exits validators.
+   - `$command`: The generated command string from the loop.
+   - `--epoch=162304`: The epoch upon which to submit the voluntary exit.
+   - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+   - `--data-dir=/home/user/data/wd/`: Specifies the keystore path, which contains all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+```sh
+docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+mkdir /home/user/data/wd
+cp -r /home/user/data/charon/ /home/user/data/wd
+command=""; \
+for file in /home/user/data/wd/secrets/*; do \
+  filename=$(basename "$file" | cut -d. -f1); \
+  command+=" --validator=$filename"; \
+done; \
+/home/user/nimbus_beacon_node deposits exit $command --epoch=162304 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'
+```
+
+
+To submit a voluntary exit with Lodestar, run `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the following arguments:
+
+- `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+- `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+- `--exitEpoch=162304`: The epoch upon which to submit the voluntary exit.
+- `--network=goerli`: Specifies the network.
+- `--yes`: Skips the confirmation prompt.
+
+```sh
+docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=162304 --network=goerli --yes'
+```
+
+
+Once a threshold of exit signatures has been received by any single charon client, it will craft a valid voluntary exit message and will submit it to the beacon chain for inclusion. You can monitor partial exits stored by each node in the [Grafana Dashboard](https://github.com/ObolNetwork/charon-distributed-validator-node).
+
+### Exit epoch and withdrawable epoch
+
+The process of a validator exiting from staking takes variable amounts of time, depending on how many others are exiting at the same time.
+
+Immediately upon broadcasting a signed voluntary exit message, the exit epoch and withdrawable epoch values are calculated based off the current epoch number. These values determine exactly when the validator will no longer be required to be online performing validation, and when the validator is eligible for a full withdrawal respectively.
+
+1. Exit epoch - the epoch at which your validator is no longer active, no longer earning rewards, and is no longer subject to slashing rules.
+
+   :::warning
+   Up until this epoch (while "in the queue") your validator is expected to be online and is held to the same slashing rules as always. Do not turn your DV node off until this epoch is reached.
+   :::
+
+2. Withdrawable epoch - epoch at which your validator funds are eligible for a full withdrawal during the next validator sweep. This occurs 256 epochs after the exit epoch, which takes \~27.3 hours.
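+
+For reference, the \~27.3 hour figure follows directly from the chain's timing parameters (32 slots per epoch, 12 seconds per slot):
+
+```sh
+# 256 epochs x 32 slots/epoch x 12 s/slot
+echo $((256 * 32 * 12))              # 98304 seconds
+echo "scale=1; 98304 / 3600" | bc    # ~27.3 hours
+```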
+
+### How to verify a validator exit
+
+Consult the example below and compare it to your validator's monitoring to verify that exits from each operator in the cluster are being received. The example is a cluster of 4 nodes with 2 validators, where a threshold of 3 nodes must broadcast exits.
+
+1. Operator 1 broadcasts an exit on validator client 1.  
+2. Operator 2 broadcasts an exit on validator client 2.  
+3. Operator 3 broadcasts an exit on validator client 3.  
+
+At this point, the threshold of 3 has been reached and the validator exit process will start, which you can confirm in the charon logs.
+
+:::tip
+Once a validator has broadcast an exit message, it must continue to validate for at least 27 hours, often longer. Do not shut off your distributed validator nodes until your validator has fully exited.
+:::
diff --git a/docs/versioned_docs/version-v0.17.0/int/quickstart/quickstart-mainnet.md b/docs/versioned_docs/version-v0.17.0/int/quickstart/quickstart-mainnet.md
new file mode 100644
index 0000000000..7938111fe3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/int/quickstart/quickstart-mainnet.md
@@ -0,0 +1,111 @@
+---
+sidebar_position: 7
+description: Run a cluster on mainnet
+---
+
+# Run a DV on mainnet
+
+:::warning
+Charon is in an alpha state, and you should proceed only if you accept the risk, accept the [terms of use](https://obol.tech/terms.pdf), and have tested running a Distributed Validator on a testnet first.
+
+Distributed Validators created for goerli cannot be used on mainnet and vice versa. Please take caution when creating, backing up, and activating mainnet validators.
+:::
+
+This section is intended for users who wish to run their Distributed Validator on Ethereum mainnet.
+
+### Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+
+### Steps
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) repo and `cd` into the directory.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+```
+
+2. If you have already cloned the repo, make sure that it is [up-to-date](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.0/int/quickstart/update/README.md).
+3. Copy the `.env.sample` file to `.env`:
+
+```
+cp -n .env.sample .env
+```
+
+4. In your `.env` file, uncomment and set values for `NETWORK` and `LIGHTHOUSE_CHECKPOINT_SYNC_URL`:
+
+```
+...
+# Overrides network for all the relevant services.
+NETWORK=mainnet
+...
+# Checkpoint sync url used by lighthouse to fast sync.
+LIGHTHOUSE_CHECKPOINT_SYNC_URL=https://mainnet.checkpoint.sigp.io/
+...
+```
+
+Note that you can choose any checkpoint sync url from https://eth-clients.github.io/checkpoint-sync-endpoints/#mainnet.
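+
+A quick way to sanity-check your chosen endpoint (assuming `curl` and `jq` are installed; the URL below is just the example from above) is to request the latest finalized header it serves:
+
+```sh
+# Checkpoint sync endpoints expose the standard beacon node API, so this should print a recent slot number.
+curl -s https://mainnet.checkpoint.sigp.io/eth/v1/beacon/headers/finalized | jq '.data.header.message.slot'
+```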
+
+Your DV stack is now mainnet ready 🎉
+
+#### Remote mainnet beacon node
+
+:::warning
+Using a remote beacon node will impact the performance of your Distributed Validator and should be used sparingly.
+:::
+
+If you already have a mainnet beacon node running somewhere and you want to use that instead of running EL (`geth`) & CL (`lighthouse`) as part of the repo, you can disable these images. To do so, follow these steps:
+
+1. Copy the `docker-compose.override.yml.sample` file
+
+```
+cp -n docker-compose.override.yml.sample docker-compose.override.yml
+```
+
+2. Uncomment the `profiles: [disable]` section for both `geth` and `lighthouse`. The override file should now look like this
+
+```
+services:
+ geth:
+ # Disable geth
+ profiles: [disable]
+ # Bind geth internal ports to host ports
+ #ports:
+ #- 8545:8545 # JSON-RPC
+ #- 8551:8551 # AUTH-RPC
+ #- 6060:6060 # Metrics
+
+ lighthouse:
+ # Disable lighthouse
+ profiles: [disable]
+ # Bind lighthouse internal ports to host ports
+ #ports:
+ #- 5052:5052 # HTTP
+ #- 5054:5054 # Metrics
+...
+```
+
+3. Then, uncomment and set the `CHARON_BEACON_NODE_ENDPOINTS` variable in the `.env` file to your mainnet beacon node's URL:
+
+```
+...
+# Connect to one or more external beacon nodes. Use a comma separated list excluding spaces.
+CHARON_BEACON_NODE_ENDPOINTS=
+...
+```
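+
+For example (the URL below is a placeholder; use your own beacon node's address, and separate multiple endpoints with commas):
+
+```
+CHARON_BEACON_NODE_ENDPOINTS=http://my-beacon-node:5052
+```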
+
+#### Exit a mainnet distributed validator
+
+If you want to exit your mainnet validator, you need to uncomment and set the `EXIT_EPOCH` variable in the `.env` file:
+
+```
+...
+# Cluster wide consistent exit epoch. Set to latest for fork version, see `curl $BEACON_NODE/eth/v1/config/fork_schedule`
+# Currently, the latest fork is capella (epoch: 194048)
+EXIT_EPOCH=194048
+...
+```
+
+Note that `EXIT_EPOCH` should be `194048` after the [shapella fork](https://blog.ethereum.org/2023/03/28/shapella-mainnet-announcement).
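+
+If you want to double-check this value yourself, you can query the fork schedule from any synced beacon node (assuming `jq` is installed and `BEACON_NODE` points at its API):
+
+```sh
+# Lists each fork's activation epoch; the highest value is the latest fork's epoch.
+curl -s $BEACON_NODE/eth/v1/config/fork_schedule | jq '.data[].epoch'
+```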
diff --git a/docs/versioned_docs/version-v0.17.0/int/quickstart/update.md b/docs/versioned_docs/version-v0.17.0/int/quickstart/update.md
new file mode 100644
index 0000000000..3187bbc0bf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/int/quickstart/update.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 6
+description: Update your DV cluster with the latest Charon release
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Update a DV
+
+It is highly recommended to upgrade your DV stack from time to time. This ensures that your node is secure, performant, and up-to-date, and that you don't miss important hard forks.
+
+To do this, follow these steps:
+
+### Navigate to the node directory
+
+If you originally cloned `charon-distributed-validator-node`:
+
+```
+cd charon-distributed-validator-node
+```
+
+If you originally cloned `charon-distributed-validator-cluster`:
+
+```
+cd charon-distributed-validator-cluster
+```
+
+### Pull latest changes to the repo
+```
+git pull
+```
+
+### Create (or recreate) your DV stack
+```
+docker compose up -d --build
+```
+:::warning
+If you run more than one node in a DV cluster, please take caution when upgrading them simultaneously, particularly if you are updating or changing the validator client used or recreating disks. It is recommended to update nodes sequentially to minimise liveness and safety risks.
+:::
+
+### Conflicts
+
+:::info
+You may get a `git conflict` error similar to this:
+:::
+```markdown
+error: Your local changes to the following files would be overwritten by merge:
+prometheus/prometheus.yml
+...
+Please commit your changes or stash them before you merge.
+```
+This usually happens because you have made local changes to some of the files, for example to the `prometheus/prometheus.yml` file.
+
+To resolve this error, you can either:
+
+- Stash and reapply changes if you want to keep your custom changes:
+ ```
+ git stash # Stash your local changes
+ git pull # Pull the latest changes
+ git stash apply # Reapply your changes from the stash
+ ```
+ After reapplying your changes, manually resolve any conflicts that may arise between your changes and the pulled changes using a text editor or Git's conflict resolution tools.
+
+- Override changes and recreate configuration if you don't need to preserve your local changes and want to discard them entirely:
+ ```
+ git reset --hard # Discard all local changes and override with the pulled changes
+ docker-compose up -d --build # Recreate your DV stack
+ ```
+ After overriding the changes, you will need to recreate your DV stack using the updated files.
+ By following one of these approaches, you should be able to handle Git conflicts when pulling the latest changes to your repository, either preserving your changes or overriding them as per your requirements.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.17.0/intro.md b/docs/versioned_docs/version-v0.17.0/intro.md
new file mode 100644
index 0000000000..10a81b9143
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 20 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.17.0/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.17.0/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..2cc857f2c9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://ethresear.ch/t/0x03-withdrawal-credentials-simple-eth1-triggerable-withdrawals/10021) when smart contract initiated exits are enabled.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+    // Accept plain ETH transfers, e.g. validator withdrawals swept to this contract.
+    receive() external payable {}
+
+    // Only the owner can sweep the contract's full balance to a recipient of their choosing.
+    function withdraw(address payable recipient) public onlyOwner {
+        recipient.transfer(address(this).balance);
+    }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements OpenZeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which transfers all ether held by the contract to the owner's address (or another specified address). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient, except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system; for example, a Yearn vault strategy might use an immutable recipient contract, as its vault address should never change.
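+
+A minimal sketch of what such a contract could look like (the contract name and exact interface here are illustrative, not a deployed Obol contract):
+
+```solidity title="WithdrawalRecipientImmutable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+contract WithdrawalRecipientImmutable {
+    // The recipient is fixed at deployment and can never be changed.
+    address payable public immutable recipient;
+
+    constructor(address payable recipient_) {
+        recipient = recipient_;
+    }
+
+    // Accept plain ETH transfers, e.g. validator withdrawals swept to this contract.
+    receive() external payable {}
+
+    // Anyone may trigger the sweep; funds always go to the hardcoded recipient.
+    function withdraw() public {
+        recipient.transfer(address(this).balance);
+    }
+}
+```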
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry is a way for the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 eth deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on chain programmatically, rather than manually with humans triggering exits, a validator registry can be used. Activated deposits get an entry in the validator registry, and validators exited via 0x03 get staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as of yet unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.17.0/sc/README.md b/docs/versioned_docs/version-v0.17.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.17.0/sec/README.md b/docs/versioned_docs/version-v0.17.0/sec/README.md
new file mode 100644
index 0000000000..aeb3b02cce
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/sec/README.md
@@ -0,0 +1,2 @@
+# sec
+
diff --git a/docs/versioned_docs/version-v0.17.0/sec/bug-bounty.md b/docs/versioned_docs/version-v0.17.0/sec/bug-bounty.md
new file mode 100644
index 0000000000..528a9c871a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/sec/bug-bounty.md
@@ -0,0 +1,95 @@
+---
+description: Bug Bounty Policy
+sidebar_position: 3
+---
+
+# Obol Bug Bounty
+
+## Overview
+Obol Labs is committed to ensuring the security of our distributed validator software and services. As part of our commitment to security, we have established a bug bounty program to encourage security researchers to report vulnerabilities in our software and services to us so that we can quickly address them.
+
+## Eligibility
+To participate in the Bug Bounty Program you must:
+- Not be a resident of any country that does not allow participation in these types of programs
+- Be at least 14 years old and have legal capacity to agree to these terms and participate in the Bug Bounty Program
+- Have permission from your employer to participate
+- Not be (for the previous 12 months) an Obol Labs employee, immediate family member of an Obol employee, Obol contractor, or Obol service provider.
+
+## Scope
+The bug bounty program applies to software and services that are built by Obol. Only submissions under the following domains are eligible for rewards:
+- Charon DVT Middleware
+- DV Launchpad
+- Obol’s Public API
+- Obol’s Smart Contracts and the contracts they depend on.
+- Obol’s Public Relay
+
+Additionally, all vulnerabilities that require or are related to the following are out of scope:
+- Social engineering
+- Rate Limiting (Non-critical issues)
+- Physical security
+- Non-security-impacting UX issues
+- Vulnerabilities or weaknesses in third party applications that integrate with Obol
+- The Obol website or the Obol infrastructure in general is NOT part of this bug bounty program.
+
+## Rules
+- Bug has not been publicly disclosed
+- Vulnerabilities that have been previously submitted by another contributor or already known by the Obol development team are not eligible for rewards
+- The size of the bounty payout depends on the assessment of the severity of the exploit. Please refer to the rewards section below for additional details
+- Bugs must be reproducible in order for us to verify the vulnerability. Submissions with a working proof of concept is necessary
+- Rewards and the validity of bugs are determined by the Obol security team and any payouts are made at their sole discretion
+- Terms and conditions of the Bug Bounty program can be changed at any time at the discretion of Obol
+- Details of any valid bugs may be shared with complementary protocols utilised in the Obol ecosystem in order to promote ecosystem cohesion and safety.
+
+## Rewards
+The rewards for participating in our bug bounty program will be based on the severity and impact of the vulnerability discovered. We will evaluate each submission on a case-by-case basis, and the rewards will be at Obol’s sole discretion.
+
+### Low: up to $500
+A Low-level vulnerability is one that has a limited impact and can be easily fixed. Unlikely to have a meaningful impact on availability, integrity, and/or loss of funds.
+- Low impact, medium likelihood
+- Medium impact, low likelihood
+
+Examples:
+- Attacker can sometimes put a charon node in a state that causes it to drop one out of every one hundred attestations made by a validator
+
+### Medium: up to $1,000
+A Medium-level vulnerability is one that has a moderate impact and requires a more significant effort to fix. Possible to have an impact on availability, integrity, and/or loss of funds.
+- High impact, low likelihood
+- Medium impact, medium likelihood
+- Low impact, high likelihood
+
+Examples:
+- Attacker can successfully conduct eclipse attacks on the cluster nodes with peer-ids with 4 leading zero bytes.
+
+### High: up to $2,500
+A High-level vulnerability is one that has a significant impact on the security of the system and requires a significant effort to fix. Likely to have impact on availability, integrity, and/or loss of funds.
+- High impact, medium likelihood
+- Medium impact, high likelihood
+
+Examples:
+- Attacker can successfully partition the cluster and exceed its threshold.
+
+### Critical: up to $5,000
+A Critical-level vulnerability is one that has a severe impact on the security of the system and requires immediate attention to fix. Highly likely to have a material impact on availability, integrity, and/or loss of funds.
+- High impact, high likelihood
+
+Examples:
+- Attacker can successfully conduct remote code execution in charon client.
+
+We may offer rewards in the form of cash, merchandise, or recognition. We will only award one reward per vulnerability discovered, and we reserve the right to deny a reward if we determine that the researcher has violated the terms and conditions of this policy.
+
+## Submission process
+Please email security@obol.tech
+
+Your report should include the following information:
+- Description of the vulnerability and its potential impact
+- Steps to reproduce the vulnerability
+- Proof of concept code, screenshots, or other supporting documentation
+- Your name, email address, and any contact information you would like to provide.
+
+Reports that do not include sufficient detail will not be eligible for rewards.
+
+## Disclosure Policy
+Obol Labs will disclose the details of the vulnerability and the researcher’s identity (with their consent) only after we have remediated the vulnerability and issued a fix. Researchers must keep the details of the vulnerability confidential until Obol Labs has acknowledged and remediated the issue.
+
+## Legal Compliance
+All participants in the bug bounty program must comply with all applicable laws, regulations, and policy terms and conditions. Obol will not be held liable for any unlawful or unauthorised activities performed by participants in the bug bounty program.
+
+We will not take any legal action against security researchers who discover and report security vulnerabilities in accordance with this bug bounty policy. We do, however, reserve the right to take legal action against anyone who violates the terms and conditions of this policy.
+
+## Non-Disclosure Agreement
+All participants in the bug bounty program will be required to sign a non-disclosure agreement (NDA) before they are given access to our software and services for testing purposes.
diff --git a/docs/versioned_docs/version-v0.17.0/sec/contact.md b/docs/versioned_docs/version-v0.17.0/sec/contact.md
new file mode 100644
index 0000000000..b4925395c2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/sec/contact.md
@@ -0,0 +1,9 @@
+---
+description: Security Contacts
+sidebar_position: 4
+---
+
+# Contacts
+Please email security@obol.tech to report a security incident, vulnerability, bug or inquire about Obol's security.
+
+Also, visit the [obol security repo](https://github.com/ObolNetwork/obol-security) for more details.
diff --git a/docs/versioned_docs/version-v0.17.0/sec/overview.md b/docs/versioned_docs/version-v0.17.0/sec/overview.md
new file mode 100644
index 0000000000..35c877fe1d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/sec/overview.md
@@ -0,0 +1,51 @@
+---
+description: Security Overview
+sidebar_position: 1
+---
+
+# Overview
+This page serves as an overview of the Obol Network from a security auditor’s point of view. It lists all of the projects that are intended to fall under the scope of the Obol Network project, as well as past audit reports, notable security bugs, and open security/privacy challenges in the Obol Network. You can think of this page as “a security auditor’s guide to Obol.”
+
+This page is updated quarterly. The last update was on 2023-03-21.
+
+## Table of Contents
+1. [Open Challenges](#open-challenges)
+2. [Core Public Goods](#core-public-goods)
+3. [List of Security Audits](#list-of-security-audits)
+
+## Open Challenges
+These are the “big picture” security challenges for Obol Network that are on our radar.
+
+### Validation of Contract Deployment and Deposit Data Relies Heavily on Launchpad
+
+The risks identified include the possibility of malicious contracts being deployed by attackers who compromise the Launchpad or an underlying dependency.
+
+Key concerns raised by the auditor:
+1. How does the group creator know the Launchpad deployed the correct contracts?
+2. How does the rest of the group know the creator deployed the contracts through the Launchpad?
+
+The current verification process relies on the independent verification performed by each group member during and after the cluster's setup phase. However, this process may not be sufficient, as most users lack the necessary expertise to verify the source code accurately.
+
+The primary risk is that users may deposit with malicious withdrawal or fee recipient credentials, potentially allowing an attacker to steal the entire withdrawal amount once the cluster exits.
+
+The audit also mentions similar risks in validating deposit data but lacks clarity on the Obol stack's specific part that generates the deposit data/transaction.
+
+The auditor suggests that the mitigation for these risks would involve a more thorough and reliable verification process, although further details are not provided in the summary.
+
+### Social Consensus, aka “Who sends the 32 ETH?”
+
+Obol allows multiple operators to act as a single validator, requiring a total of 32 ETH for depositing to the beacon chain. Currently, the process relies on trust and social consensus, where the group decides on individual contributions and trusts someone to complete the deposit process correctly without misusing the funds.
+
+While the initial launch of Obol is limited to a small group, for a future public release, the deposit process should be simpler and less reliant on trust to ensure security and user confidence.
+
+## Core Public Goods
+The Obol Network consists of four core public goods:
+
+- The Distributed Validator [Launchpad](https://docs.obol.tech/docs/dvl/intro), a [User Interface](https://goerli.launchpad.obol.tech/) for bootstrapping Distributed Validators
+- [Charon](https://docs.obol.tech/docs/charon/intro), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+- [Obol Managers](https://docs.obol.tech/docs/sc/introducing-obol-managers), a set of solidity smart contracts for the formation of Distributed Validators
+- [Obol Testnets](https://docs.obol.tech/docs/testnet), a set of on-going public incentivized testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+## List of Security Audits
+
+### 2023
+The completed audits reports are linked [here](https://github.com/ObolNetwork/obol-security/tree/main/audits).
diff --git a/docs/versioned_docs/version-v0.17.0/sec/roadmap.md b/docs/versioned_docs/version-v0.17.0/sec/roadmap.md
new file mode 100644
index 0000000000..03dbf33df2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/sec/roadmap.md
@@ -0,0 +1,11 @@
+---
+description: Security Roadmap
+sidebar_position: 2
+---
+
+# Roadmap
+
+## Upcoming Audits
+1. Solidity smart contracts audit - Date TBD
+## Penetration Test
+1. Launchpad and public APIs - September 2023
diff --git a/docs/versioned_docs/version-v0.17.0/testnet.md b/docs/versioned_docs/version-v0.17.0/testnet.md
new file mode 100644
index 0000000000..28141b3de8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.0/testnet.md
@@ -0,0 +1,117 @@
+---
+sidebar_position: 6
+description: Obol testnets roadmap
+---
+
+# Testnets
+
+Obol Labs has coordinated, and will continue over the coming quarters to coordinate and host, a number of progressively larger testnets to help harden the Charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features that are to be completed by each testnet, and their target start date and duration.
+
+# Testnets
+
+- [x] [Dev Net 1](#devnet-1)
+- [x] [Dev Net 2](#devnet-2)
+- [x] [Athena Public Testnet 1](#athena-public-testnet-1)
+- [x] [Bia Public Testnet 2](#bia-public-testnet-2)
+
+## Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using `docker compose` to spin up 4 Charon clients and 4 different validator clients on a single machine, with a remote consensus client. The keys were created locally in Charon and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+- A single user completes a first tutorial alone, using `docker compose` to spin up 4 Charon clients on a single machine, with a remote consensus client. The keys are created locally in Charon and activated with the existing launchpad.
+- Prove that the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works.
+- Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to Charon running across a network.
+
+## Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows **together** for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+
+This devnet was the first time `charon dkg` was tested with users. A core focus of this devnet was to collect network performance data.
+
+This was also the first time Charon was run in variable, non-virtual networks (i.e. the real internet).
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+- Groups of 4 testers complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients, each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+- Operators avoid exposing Charon to the public internet on a static IP address through the use of Obol-hosted relay nodes.
+- Users test `charon dkg`. The launchpad is not used, and this dkg is triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+- Effective collection of network performance data, to enable gathering even higher signal performance data at scale during public testnets.
+- Block proposals are in place.
+
+## Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 was to get distributed validators into the hands of the wider Obol Community for the first time. The core focus of this testnet was the onboarding experience.
+
+The core output from this testnet was a significant number of public clusters running and public feedback collected.
+
+This was an unincentivized testnet and formed the basis for us to figure out a Sybil resistance mechanism.
+
+**Participants:** Obol Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Completed date:** October 2022
+
+**Duration:** 2 weeks cluster setup, 8 weeks operation
+
+**Goals:**
+
+- Get distributed validators into the hands of the Obol Early Community for the first time.
+- Create the first public onboarding experience and gather feedback. This is the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) as possible.
+- Make deploying Ethereum validator nodes accessible using the CLI.
+- Generate a backlog of bugs, feature requests, platform requests and integration requests.
+
+## Bia Public Testnet 2
+
+This second public testnet intends to take the learning from Athena and scale the network by engaging both the wider at-home validator community and professional operators. This is the first time users are setting up DVs using the DV launchpad.
+
+This testnet is also important for learning the conditions Charon will be subjected to in production. A core output of this testnet is a large number of autonomous public DV clusters running and building up the Obol community with technical ambassadors.
+
+**Participants:** Obol Community, Ethereum staking community
+
+**State:** MVP
+
+**Network:** Görli
+
+**Target Completed date:** March 2023
+
+**Duration:** 2 weeks cluster setup, 4-8 weeks operation
+
+**Goals:**
+
+- Engage the wider Solo and Professional Ethereum Staking Community.
+- Get integration feedback.
+- Build confidence in Charon after running DVs on an Ethereum testnet.
+- Learn about the conditions Charon will be subjected to in production.
+- Distributed Validator returns are competitive versus single validator clients.
+- Make deploying Ethereum validator nodes accessible using the DV Launchpad.
+- Build comprehensive guides for various profiles to spin up DVs with minimal supervision from the core team.
+
diff --git a/docs/versioned_docs/version-v0.17.1/README.md b/docs/versioned_docs/version-v0.17.1/README.md
new file mode 100644
index 0000000000..a4efbebbad
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/README.md
@@ -0,0 +1,2 @@
+# version-v0.17.1
+
diff --git a/docs/versioned_docs/version-v0.17.1/cg/README.md b/docs/versioned_docs/version-v0.17.1/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.17.1/cg/bug-report.md b/docs/versioned_docs/version-v0.17.1/cg/bug-report.md
new file mode 100644
index 0000000000..9a10b3b553
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing, to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualize the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behavior
+
+
+## Current Behavior
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.17.1/cg/feedback.md b/docs/versioned_docs/version-v0.17.1/cg/feedback.md
new file mode 100644
index 0000000000..76042e28aa
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/cg/feedback.md
@@ -0,0 +1,5 @@
+# Feedback
+
+If you have followed our quickstart guides, and whether you succeeded or failed at running the distributed validator successfully, we would like to hear your feedback on the process and where you encountered difficulties.
+- Please let us know by joining and posting on our [Discord](https://discord.gg/n6ebKsX46w).
+- Also, feel free to add issues to our [GitHub repos](https://github.com/ObolNetwork).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.17.1/charon/README.md b/docs/versioned_docs/version-v0.17.1/charon/README.md
new file mode 100644
index 0000000000..44b46f1797
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/charon/README.md
@@ -0,0 +1,2 @@
+# charon
+
diff --git a/docs/versioned_docs/version-v0.17.1/charon/charon-cli-reference.md b/docs/versioned_docs/version-v0.17.1/charon/charon-cli-reference.md
new file mode 100644
index 0000000000..dba31ac88b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/charon/charon-cli-reference.md
@@ -0,0 +1,382 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+sidebar_position: 5
+---
+
+# CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.17.1`](https://github.com/ObolNetwork/charon/releases/tag/v0.17.1). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+The following are the top-level commands available to use.
+
+```markdown
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ alpha Alpha subcommands provide early access to in-development features
+ combine Combines the private key shares of a distributed validator cluster into a set of standard validator private keys.
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Prints a new ENR for this node
+ help Help about any command
+ relay Start a libp2p relay server
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+## The `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+```
+
+### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+```
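+
+For example, if you run charon with Docker, creating an ENR might look like this (the mount path and image tag follow the common quickstart setup and are assumptions; adjust them to your environment):
+
+```sh
+# Writes the private key to ./.charon/charon-enr-private-key and prints the public ENR.
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create enr
+```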
+
+### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster-lock.json` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --cluster-dir string The target folder to create the cluster in. (default "./")
+ --definition-file string Optional path to a cluster definition file or an HTTP URL. This overrides all other configuration flags.
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for cluster
+ --insecure-keys Generates insecure keystore files. This should never be used. It is not supported on mainnet.
+ --keymanager-addresses strings Comma separated list of keymanager URLs to import validator key shares to. Note that multiple addresses are required, one for each node in the cluster, with node0's keyshares being imported to the first address, node1's keyshares to the second, and so on.
+ --keymanager-auth-tokens strings Authentication bearer tokens to interact with the keymanager URLs. Don't include the "Bearer" symbol, only include the api-token.
+ --name string The cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky.
+ --nodes int The number of charon nodes in the cluster. Minimum is 3.
+ --num-validators int The number of distributed validators needed in the cluster.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
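+
+For instance, a local test cluster of 4 nodes managing a single goerli validator could be created with something like the following (the addresses are placeholders, not real recipients):
+
+```sh
+charon create cluster \
+  --network="goerli" \
+  --nodes=4 \
+  --num-validators=1 \
+  --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
+  --withdrawal-addresses="0x0000000000000000000000000000000000000000"
+```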
+
+### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file used for the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, keycast, frost (default "default")
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky. (default "mainnet")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings [REQUIRED] Comma-separated list of each operator's Charon ENR address.
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
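+
+A hypothetical invocation might look like the following; every value shown is a placeholder to be replaced with your cluster's real parameters:
+
+```sh
+charon create dkg \
+  --name="my-cluster" \
+  --network="goerli" \
+  --num-validators=1 \
+  --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
+  --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
+  --operator-enrs="enr:-<operator-1>,enr:-<operator-2>,enr:-<operator-3>,enr:-<operator-4>"
+```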
+
+## The `dkg` subcommand
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit data for each new distributed validator. The command outputs the `cluster-lock.json` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file or an HTTP URL. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --keymanager-address string The keymanager URL to import validator keyshares.
+ --keymanager-auth-token string Authentication bearer token to interact with keymanager API. Don't include the "Bearer" symbol, only include the api-token.
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --shutdown-delay duration Graceful shutdown delay. (default 1s)
+```
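+
+In the common case, you simply point charon at the definition file all operators agreed on (the default path from the flag reference above) and run the command at the same time as the other operators:
+
+```sh
+charon dkg --definition-file=".charon/cluster-definition.json"
+```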
+
+## The `run` subcommand
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster-lock.json` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs.
+ --builder-api Enables the builder api. Will only produce builder blocks. Builder API must also be enabled on the validator client. Beacon node must be connected to a builder-relay to access the builder network.
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing.
+ --jaeger-service string Service name used for jaeger tracing. (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --manifest-file string The path to the cluster manifest file. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-manifest.pb")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof). (default "127.0.0.1:3620")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --private-key-file-lock Enables private key locking to prevent multiple instances using the same key.
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-beacon-mock-fuzz Configures simnet beaconmock to return fuzzed responses.
+ --simnet-slot-duration duration Configures slot duration in simnet beacon mock. (default 1s)
+ --simnet-validator-keys-dir string The directory containing the simnet validator key shares. (default ".charon/validator_keys")
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --synthetic-block-proposals Enables additional synthetic block proposal duties. Used for testing of rare duties.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API. (default "127.0.0.1:3600")
+```
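+
+For example, a typical invocation for a node whose `.charon` directory sits in the working directory might look like the following; the beacon endpoint and external IP below are illustrative placeholders you should replace with your own values:
+
+```shell
+charon run \
+  --beacon-node-endpoints="http://localhost:5052" \
+  --validator-api-address="127.0.0.1:3600" \
+  --p2p-tcp-address="0.0.0.0:3610" \
+  --p2p-external-ip="203.0.113.10"
+```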
+
+## The `combine` subcommand
+
+### Combine distributed validator keyshares into a single Validator key
+
+The `combine` command combines many validator keyshares into a single Ethereum validator key.
+
+To run this command, one needs all the node operator's `.charon` directories, which need to be organized in the following way:
+
+```shell
+validators-to-be-combined/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+That is, each operator's `.charon` directory must be copied into a common parent directory and renamed (here `node0` through `node3`).
+
+Note that all validator keys are required for the successful execution of this command.
+
+If, for example, the lock file defines 2 validators, each `validator_keys` directory must contain exactly 4 files: a JSON and a TXT file for each validator.
+
+Those files must be named with an increasing index that matches the validator's position in the lock file, starting from 0 (e.g. `keystore-0.json`, `keystore-1.json`).
+
+The chosen node directory names don't matter, as long as they are different from `.charon`.
+
+At the end of the process `combine` will create a new set of directories containing one validator key each, named after its public key:
+
+```shell
+validators-to-be-combined/
+├── 0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd # contains private key
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── 0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106 # contains private key
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+By default, the `combine` command will refuse to overwrite any private key that is already present in the destination directory.
+
+To force the process, use the `--force` flag.
+
+```markdown
+charon combine --help
+Combines the private key shares from a threshold of operators in a distributed validator cluster into a set of validator private keys that can be imported into a standard Ethereum validator client.
+
+Warning: running the resulting private keys in a validator alongside the original distributed validator cluster *will* result in slashing.
+
+Usage:
+ charon combine [flags]
+
+Flags:
+ --cluster-dir string Parent directory containing a number of .charon subdirectories from the required threshold of nodes in the cluster. (default ".charon/cluster")
+ --force Overwrites private keys with the same name if present.
+ -h, --help Help for combine
+ --no-verify Disables cluster definition and lock file verification.
+ --output-dir string Directory to output the combined private keys to. (default "./validator_keys")
+```
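+
+For example, assuming the `validators-to-be-combined/` layout shown above, the key shares could be combined with a command along these lines (paths are illustrative):
+
+```shell
+charon combine \
+  --cluster-dir="./validators-to-be-combined" \
+  --output-dir="./validators-to-be-combined"
+```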
+
+## Host a relay
+
+Relays run a libp2p [circuit relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) server that allows charon clusters to perform peer discovery and allows charon clients behind NAT gateways to be reached. If you want to self-host a relay for your cluster(s), the following command will start one.
+
+```markdown
+charon relay --help
+Starts a libp2p relay that charon nodes can use to bootstrap their p2p cluster
+
+Usage:
+ charon relay [flags]
+
+Flags:
+ --auto-p2pkey Automatically create a p2pkey (secp256k1 private key used for p2p authentication and ENR) if none found in data directory. (default true)
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for relay
+ --http-address string Listening address (ip and port) for the relay http server serving runtime ENR. (default "127.0.0.1:3640")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --monitoring-address string Listening address (ip and port) for the prometheus and pprof monitoring http server. (default "127.0.0.1:3620")
+ --p2p-advertise-private-addresses Enable advertising of libp2p auto-detected private addresses. This doesn't affect manually provided p2p-external-ip/hostname.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-max-connections int Libp2p maximum number of peers that can connect to this relay. (default 16384)
+ --p2p-max-reservations int Updates max circuit reservations per peer (each valid for 30min) (default 512)
+ --p2p-relay-loglevel string Libp2p circuit relay log level. E.g., debug, info, warn, error.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+```
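+
+For example, a self-hosted relay bound to all interfaces might be started as follows; the ports and hostname are illustrative and should be replaced with your own values:
+
+```shell
+charon relay \
+  --http-address="0.0.0.0:3640" \
+  --p2p-tcp-address="0.0.0.0:3610" \
+  --p2p-external-hostname="relay.example.com"
+```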
diff --git a/docs/versioned_docs/version-v0.17.1/charon/cluster-configuration.md b/docs/versioned_docs/version-v0.17.1/charon/cluster-configuration.md
new file mode 100644
index 0000000000..d05f53dc3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/charon/cluster-configuration.md
@@ -0,0 +1,161 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+sidebar_position: 3
+---
+
+# Cluster configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client or cluster.
+
+A charon cluster is configured in two steps:
+
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+In the case of a solo operator running a cluster, the [`charon create cluster`](./charon-cli-reference.md#create-a-full-cluster-locally) command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
+## Cluster Definition File
+
+The `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+### Using the CLI
+
+The [`charon create dkg`](./charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) command is used to create the `cluster-definition.json` file which is used as input to `charon dkg`.
+
+The schema of the `cluster-definition.json` is defined as:
+
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "creator": {
+ "address": "0x123..abfc", //ETH1 address of the creator
+ "config_signature": "0x123654...abcedf" // EIP712 Signature of config_hash using creator privkey
+ },
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "config_signature": "0x123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "0x123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.2.0", // Schema version
+ "timestamp": "2022-01-01T12:00:00+00:00", // Creation timestamp
+ "num_validators": 2, // Number of distributed validators to be created in cluster-lock.json
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "validators": [
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ },
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ }
+ ],
+ "dkg_algorithm": "foo_dkg_v1", // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "0xabcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "0xabcdef...abcedef" // Final hash of all fields
+}
+```
+
+### Using the DV Launchpad
+
+- A [`leader/creator`](../int/quickstart/group/index.md) who wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader/creator` uses the user interface to configure all of the important details about the cluster including:
+ - The `Withdrawal Address` for the created validators
+ - The `Fee Recipient Address` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like DKG algorithm to use) are serialized and merklized to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the `leader/creator` is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralized backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralization of the launchpad.)
+
+## Cluster Lock File
+
+The `cluster-lock.json` has the following schema:
+
+```json
+{
+  "cluster_definition": {...}, // Cluster definition JSON, identical schema to above
+ "distributed_validators": [ // Length equal to cluster_definition.num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "abc...fed", "cfd...bfe"], // Length equal to cluster_definition.operators
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
+
+## Cluster Size and Resilience
+
+The cluster size (the number of nodes/operators in the cluster) determines the resilience of the cluster: its ability to remain operational under diverse failure scenarios.
+Larger clusters can tolerate more faulty nodes.
+However, a larger cluster implies higher operational costs and added network latency, which may negatively affect performance.
+
+The optimal cluster size is therefore a trade-off between resilience (larger is better) and cost-efficiency and performance (smaller is better).
+
+Cluster resilience can be broadly classified into two categories:
+ - **[Byzantine Fault Tolerance (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault)** - the ability to tolerate nodes that are actively trying to disrupt the cluster.
+ - **[Crash Fault Tolerance (CFT)](https://en.wikipedia.org/wiki/Fault_tolerance)** - the ability to tolerate nodes that have crashed or are otherwise unavailable.
+
+Different cluster sizes tolerate different counts of byzantine vs crash nodes.
+In practice, hardware and software crash relatively frequently, while byzantine behaviour is relatively uncommon.
+However, Byzantine Fault Tolerance is crucial for trust minimised systems like distributed validators.
+Thus, cluster size can be chosen to optimise for either BFT or CFT.
+
+The table below lists different cluster sizes and their characteristics:
+ - `Cluster Size` - the number of nodes in the cluster.
+ - `Threshold` - the minimum number of nodes that must collaborate to reach consensus quorum and to create signatures.
+ - `BFT #` - the maximum number of byzantine nodes that can be tolerated.
+ - `CFT #` - the maximum number of crashed nodes that can be tolerated.
+
+| Cluster Size | Threshold | BFT # | CFT # | Note |
+|--------------|-----------|-------|-------|------------------------------------|
+| 1 | 1 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 2 | 2 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 3 | 2 | 0 | 1 | ⚠️ Warning: CFT but not BFT! |
+| 4 | 3 | 1 | 1 | ✅ CFT and BFT optimal for 1 faulty |
+| 5 | 4 | 1 | 1 | |
+| 6 | 4 | 1 | 2 | ✅ CFT optimal for 2 crashed |
+| 7 | 5 | 2 | 2 | ✅ BFT optimal for 2 byzantine |
+| 8 | 6 | 2 | 2 | |
+| 9 | 6 | 2 | 3 | ✅ CFT optimal for 3 crashed |
+| 10 | 7 | 3 | 3 | ✅ BFT optimal for 3 byzantine |
+| 11 | 8 | 3 | 3 | |
+| 12 | 8 | 3 | 4 | ✅ CFT optimal for 4 crashed |
+| 13 | 9 | 4 | 4 | ✅ BFT optimal for 4 byzantine |
+| 14 | 10 | 4 | 4 | |
+| 15 | 10 | 4 | 5 | ✅ CFT optimal for 5 crashed |
+| 16 | 11 | 5 | 5 | ✅ BFT optimal for 5 byzantine |
+| 17 | 12 | 5 | 5 | |
+| 18 | 12 | 5 | 6 | ✅ CFT optimal for 6 crashed |
+| 19 | 13 | 6 | 6 | ✅ BFT optimal for 6 byzantine |
+| 20 | 14 | 6 | 6 | |
+| 21 | 14 | 6 | 7 | ✅ CFT optimal for 7 crashed |
+| 22 | 15 | 7 | 7 | ✅ BFT optimal for 7 byzantine |
+
+The table above is derived from the QBFT consensus algorithm, using the
+following formulas from [this](https://arxiv.org/pdf/1909.10194.pdf) paper:
+
+```
+n = cluster size
+
+Threshold: min number of honest nodes required to reach quorum given size n
+Quorum(n) = ceiling(2n/3)
+
+BFT #: max number of faulty (byzantine) nodes given size n
+f(n) = floor((n-1)/3)
+
+CFT #: max number of unavailable (crashed) nodes given size n
+crashed(n) = n - Quorum(n)
+```
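+
+As a quick sanity check, the shell arithmetic below (illustrative only, not part of charon) reproduces the 4-node row of the table above:
+
+```shell
+n=4
+quorum=$(( (2*n + 2) / 3 ))  # ceiling(2n/3) = 3
+bft=$(( (n - 1) / 3 ))       # floor((n-1)/3) = 1
+cft=$(( n - quorum ))        # n - Quorum(n) = 1
+echo "Cluster size=$n Threshold=$quorum BFT=$bft CFT=$cft"
+```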
diff --git a/docs/versioned_docs/version-v0.17.1/charon/dkg.md b/docs/versioned_docs/version-v0.17.1/charon/dkg.md
new file mode 100644
index 0000000000..102c0c41e2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/charon/dkg.md
@@ -0,0 +1,74 @@
+---
+sidebar_position: 2
+description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key
+ Generation (DKG) Ceremony.
+---
+
+# Distributed Key Generation
+
+## Overview
+
+A [**distributed validator key**](../int/key-concepts.md#distributed-validator-key) is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Because of the BLS signature scheme used by proof-of-stake Ethereum, a distributed validator with no fault tolerance (i.e. one where all nodes need to be online to sign every message) could be built from key shares chosen by each operator independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together (4 randomly chosen points on a graph don't all necessarily sit on the same order-three curve). Doing this securely, with no single party trusted to distribute the keys, requires what is known as a [**distributed key generation ceremony**](../int/key-concepts.md#distributed-validator-key-generation-ceremony).
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/charon/cluster-configuration/README.md).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+* An `Operator` is identified by their Ethereum address. They will sign a message with this address to authorize their charon client to take part in the DKG ceremony.
+* A `Charon client` is also identified by a public/private key pair; in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p/tree/master/p2p/security/noise). These keys need to be created (and backed up) by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This cluster definition specifies the intended cluster configuration before keys have been created in a distributed key generation ceremony. The `cluster-definition.json` file can be created with the help of the [Distributed Validator Launchpad](cluster-configuration.md#using-the-dv-launchpad) or via the [CLI](cluster-configuration.md#using-the-cli).
+
+## Carrying out the DKG ceremony
+
+Once all participants have signed the cluster definition, they can load the `cluster-definition` file into their charon client, and the client will attempt to complete the DKG.
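+
+A minimal sketch of running the ceremony, assuming the signed `cluster-definition.json` has been placed in charon's default `.charon` directory (run `charon dkg --help` for the authoritative flags on your version):
+
+```shell
+# Each operator runs this on their own machine once the definition file has been distributed
+charon dkg
+```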
+
+Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to the deployed relays to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which updates the IP address, increments the ENR's nonce, and re-signs it with the client's private key. If a charon client sees an ENR with a higher nonce, it updates the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required: charon does the work, outputs the following files on each machine, and then exits.
+
+## Backing up the ceremony artifacts
+
+At the end of a DKG ceremony, each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+* **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+* **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+* **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to their key shares, it will be possible to use a key re-sharing protocol to swap the participant's old keys out of a distributed validator in favor of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However, for now, without a backup, the safest thing to do would be to exit the validator.
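+
+For example, one simple way to snapshot these artifacts on each node is to archive the `.charon` directory and store the archive somewhere offline and secure (command is illustrative):
+
+```shell
+# Archive the .charon directory (key shares, cluster lock and deposit data)
+tar -czf charon-ceremony-backup-$(date +%F).tar.gz .charon/
+```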
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+* Do the public key shares combine together to form the group public key?
+ * This can be checked on chain as it does not require a pairing operation
+ * This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+* Do the created BLS public keys attest to their `cluster_definition_hash`?
+ * This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ * If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ * As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+* Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ * VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ * PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ * A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ * Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/charon/cluster-configuration/README.md).
diff --git a/docs/versioned_docs/version-v0.17.1/charon/intro.md b/docs/versioned_docs/version-v0.17.1/charon/intro.md
new file mode 100644
index 0000000000..1c410fda61
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/charon/intro.md
@@ -0,0 +1,69 @@
+---
+sidebar_position: 1
+description: Charon - The Distributed Validator Client
+---
+
+# Introduction
+
+This section introduces and outlines the Charon _\[kharon]_ middleware, Obol's implementation of DVT. Please see the [key concepts](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/key-concepts/README.md) section as background and context.
+
+## What is Charon?
+
+Charon is a GoLang-based HTTP middleware built by Obol that enables existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate with one another, come to consensus on validator duties, and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress as long as a supermajority of working/honest nodes is present.
+
+
+
+## Charon Architecture
+
+Charon is an Ethereum proof of stake distributed validator (DV) client. Like any validator client, its main purpose is to perform validation duties for the Beacon Chain, primarily attestations and block proposals. The beacon client handles a lot of the heavy lifting, leaving the validator client to focus on fetching duty data, signing that data, and submitting it back to the beacon client.
+
+Charon is designed as a generic event-driven workflow with different components coordinating to perform validation duties. All duties follow the same flow, the only difference being the signed data. The workflow can be divided into phases consisting of one or more components:
+
+
+
+### Determine **when** duties need to be performed
+
+The beacon chain is divided into [slots](https://eth2book.info/bellatrix/part3/config/types/#slot) and [epochs](https://eth2book.info/bellatrix/part3/config/types/#epoch), deterministic fixed-size chunks of time. The first step is to determine when (which slot/epoch) duties need to be performed. This is done by the `scheduler` component. It queries the beacon node to detect which validators defined in the cluster lock are active, and what duties they need to perform for the upcoming epoch and slots. When such a slot starts, the `scheduler` emits an event indicating which validator needs to perform what duty.
+
+### Fetch and come to consensus on **what** data to sign
+
+A DV cluster consists of multiple operators each provided with one of the M-of-N threshold BLS private key shares per validator. The key shares are imported into the validator clients which produce partial signatures. Charon threshold aggregates these partial signatures before broadcasting them to the Beacon Chain. _But to threshold aggregate partial signatures, each validator must sign the same data._ The cluster must therefore coordinate and come to a consensus on what data to sign.
+
+`Fetcher` fetches the unsigned duty data from the beacon node upon receiving an event from `Scheduler`.\
+For attestations, this is the unsigned attestation; for block proposals, it is the unsigned block.
+
+The `Consensus` component listens to events from Fetcher and starts a [QBFT](https://docs.goquorum.consensys.net/configure-and-manage/configure/consensus-protocols/qbft/) consensus game with the other Charon nodes in the cluster for that specific duty and slot. When consensus is reached, the resulting unsigned duty data is stored in the `DutyDB`.
+
+### **Wait** for the VC to sign
+
+Charon is a **middleware** distributed validator client. That means Charon doesn’t have access to the validator private key shares and cannot sign anything on demand. Instead, operators import the key shares into industry-standard validator clients (VC) that are configured to connect to their local Charon client instead of their local Beacon node directly.
+
+Charon, therefore, serves the [Ethereum Beacon Node API](https://ethereum.github.io/beacon-APIs/#/) from the `ValidatorAPI` component and intercepts some endpoints while proxying other endpoints directly to the upstream Beacon node.
+
+The VC queries the `ValidatorAPI` for unsigned data which is retrieved from the `DutyDB`. It then signs it and submits it back to the `ValidatorAPI` which stores it in the `PartialSignatureDB`.
+
+### **Share** partial signatures
+
+The `PartialSignatureDB` stores the partially signed data submitted by the local Charon client’s VC. But it also stores all the partial signatures submitted by the VCs of other peers in the cluster. This is achieved by the `PartialSignatureExchange` component that exchanges partial signatures between all peers in the cluster. All charon clients, therefore, store all partial signatures the cluster generates.
+
+### **Threshold Aggregate** partial signatures
+
+The `SignatureAggregator` is invoked as soon as sufficient (any M of N) partial signatures are stored in the `PartialSignatureDB`. It performs BLS threshold aggregation of the partial signatures resulting in a final signature that is valid for the beacon chain.
+
+### **Broadcast** final signature
+
+Finally, the `Broadcaster` component broadcasts the final threshold aggregated signature to the Beacon client, thereby completing the duty.
+
+### Ports
+
+The following is an outline of the services that can be exposed by charon.
+
+* **:3600** - The validator REST API. This is the port that serves the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/). This is the port validator clients should talk to instead of their standard consensus client REST API port. Charon subsequently proxies these requests to the upstream consensus client specified by `--beacon-node-endpoints`.
+* **:3610** - Charon P2P port. This is the port that charon clients use to communicate with one another via TCP. This endpoint should be port-forwarded on your router and exposed publicly, preferably on a static IP address. This IP address should then be set on the charon run command with `--p2p-external-ip` or `CHARON_P2P_EXTERNAL_IP`.
+* **:3620** - Monitoring port. This port hosts a webserver that serves prometheus metrics on `/metrics`, a readiness endpoint on `/readyz` and a liveness endpoint on `/livez`, and a pprof server on `/debug/pprof`. This port should not be exposed publicly.
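+
+For example, once charon is running you can probe the monitoring port with curl (the address assumes the default `127.0.0.1:3620` above):
+
+```shell
+# Liveness and readiness probes
+curl http://127.0.0.1:3620/livez
+curl http://127.0.0.1:3620/readyz
+
+# Prometheus metrics
+curl http://127.0.0.1:3620/metrics
+```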
+
+## Getting started
+
+For more information on running charon, take a look at our [Quickstart Guides](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.17.1/charon/networking.md b/docs/versioned_docs/version-v0.17.1/charon/networking.md
new file mode 100644
index 0000000000..076981a5c4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/charon/networking.md
@@ -0,0 +1,84 @@
+---
+sidebar_position: 4
+description: Networking
+---
+
+# Charon networking
+
+## Overview
+
+This document describes Charon's networking model which can be divided into two parts: the [_internal validator stack_](networking.md#internal-validator-stack) and the [_external p2p network_](networking.md#external-p2p-network).
+
+## Internal Validator Stack
+
+Charon is a middleware DVT client: it connects to an upstream beacon node, and a downstream validator client connects to it. Each operator should run the whole validator stack (all 4 client software types), either on the same machine or on different machines. The networking between the nodes should be private and not exposed to the public internet.
+
+Related Charon configuration flags:
+
+* `--beacon-node-endpoints`: Connects Charon to one or more beacon nodes.
+* `--validator-api-address`: Address for Charon to listen on and serve requests from the validator client.
+
+## External P2P Network
+
+The Charon clients in a DV cluster are connected to each other via a small p2p network consisting of only the clients in the cluster. Peer IP addresses are discovered via an external "relay" server. The p2p connections are over the public internet, so the charon p2p port must be publicly accessible. Charon leverages the popular [libp2p](https://libp2p.io/) protocol.
+
+Related [Charon configuration flags](charon-cli-reference.md):
+
+* `--p2p-tcp-address`: Addresses for Charon to listen on and serve p2p requests.
+* `--p2p-relays`: Connect charon to one or more relay servers.
+* `--private-key-file`: Private key identifying the charon client.
+
+### LibP2P Authentication and Security
+
+Each charon client has a secp256k1 private key. The associated public key is encoded into the [cluster lock file](cluster-configuration.md#Cluster-Lock-File) to identify the nodes in the cluster. For ease of use and to align with the Ethereum ecosystem, Charon encodes these public keys in the [ENR format](https://eips.ethereum.org/EIPS/eip-778), not in [libp2p’s Peer ID format](https://docs.libp2p.io/concepts/fundamentals/peers/).
+
+:::warning
+
+Each Charon node's secp256k1 private key is critical for authentication and must be kept secure to prevent cluster compromise.
+
+Do not use the same key across multiple clusters, as this can lead to security issues.
+
+For more on p2p security, refer to [libp2p's article](https://docs.libp2p.io/concepts/security/security-considerations).
+
+:::
+
+Charon currently only supports libp2p tcp connections with [noise](https://noiseprotocol.org/) security and only accepts incoming libp2p connections from peers defined in the cluster lock.
+
+### LibP2P Relays and Peer Discovery
+
+Relays are simple, publicly accessible libp2p servers that support the [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) protocol. Circuit-relay is a libp2p transport protocol that routes traffic between two peers over a third-party “relay” peer.
+
+Obol hosts a publicly accessible relay at https://0.relay.obol.tech and will work with other organisations in the community to host alternatives. Anyone can host their own relay server for their DV cluster.
+
+Each charon node knows which peers are in the cluster from the ENRs in the cluster lock file, but their IP addresses are unknown. By connecting to the same relay, nodes establish “relay connections” to each other. Once connected via relay they exchange their known public addresses via libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol. The relay connection is then upgraded to a direct connection. If a node’s public IP changes, nodes once again connect via relay, exchange the new IP, and then connect directly once again.
+
+Note that in order for two peers to discover each other, they must connect to the same relay. Cluster operators should therefore coordinate which relays to use.
+
+Libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol attempts to automatically detect the public IP address of a charon client without the need to explicitly configure it. If this however fails, the following two configuration flags can be used to explicitly set the publicly advertised address:
+
+* `--p2p-external-ip`: Explicitly sets the external IP address.
+* `--p2p-external-hostname`: Explicitly sets the external DNS host name.
+
+:::warning
+
+If a pair of charon clients are not publicly accessible, due to being behind a NAT, they will not be able to upgrade their relay connections to a direct connection. Even though this is supported, it isn’t recommended, as relay connections introduce additional latency and reduced throughput, which results in decreased validator effectiveness and possibly missed block proposals and attestations.
+
+:::
+
+Libp2p’s circuit-relay connections are end-to-end encrypted: even though relay servers accept connections from nodes in multiple different clusters, relays merely route opaque connections. And since Charon only accepts incoming connections from other peers in its cluster, the use of a relay doesn’t allow connections between clusters.
+
+Only the following three libp2p protocols are established between a charon node and a relay itself:
+
+* [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/): To establish relay e2e encrypted connections between two peers in a cluster.
+* [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify): Auto-detection of public IP addresses to share with other peers in the cluster.
+* [peerinfo](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfo.go): Exchanges basic application [metadata](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfopb/v1/peerinfo.proto) for improved operational metrics and observability.
+
+All other charon protocols are only established between nodes in the same cluster.
+
+### Scalable Relay Clusters
+
+In order for a charon client to connect to a relay, it needs the relay's [multiaddr](https://docs.libp2p.io/concepts/fundamentals/addressing/) (containing its public key and IP address). But a single multiaddr can only point to a single relay server which can easily be overloaded if too many clusters connect to it. Charon therefore supports resolving a relay’s multiaddr via HTTP GET request. Since charon also includes the unique `cluster-hash` header in this request, the relay provider can use [consistent header-based load-balancing](https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_steering_header-based_routing) to map clusters to one of many relays using a single HTTP address.
+
+The relay supports serving its runtime public multiaddrs via its `--http-address` flag.
+
+E.g., https://0.relay.obol.tech is actually a load balancer that routes HTTP requests to one of many relays based on the `cluster-hash` header, returning the target relay’s multiaddr, which the charon client then uses to connect to that relay.
+
+The charon `--p2p-relays` flag therefore supports both multiaddrs as well as HTTP URLs.
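+
+For example, both of the following forms are accepted; the multiaddr below is a made-up placeholder, not a real relay:
+
+```shell
+# HTTP URL that resolves to a relay multiaddr
+charon run --p2p-relays="https://0.relay.obol.tech"
+
+# Explicit libp2p multiaddr of a self-hosted relay (placeholder values)
+charon run --p2p-relays="/ip4/203.0.113.10/tcp/3610/p2p/16Uiu2HAmPLACEHOLDERPEERID"
+```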
diff --git a/docs/versioned_docs/version-v0.17.1/dvl/README.md b/docs/versioned_docs/version-v0.17.1/dvl/README.md
new file mode 100644
index 0000000000..1b694a8473
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/dvl/README.md
@@ -0,0 +1,2 @@
+# dvl
+
diff --git a/docs/versioned_docs/version-v0.17.1/dvl/intro.md b/docs/versioned_docs/version-v0.17.1/dvl/intro.md
new file mode 100644
index 0000000000..9eb7883d60
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/dvl/intro.md
@@ -0,0 +1,18 @@
+---
+sidebar_position: 1
+description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Introduction
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~**Eth2**~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: [**The DV Launchpad**](https://goerli.launchpad.obol.tech/).
+
+## Getting started
+
+For more information on running charon in a UI friendly way through the DV Launchpad, take a look at our [Quickstart Guides](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.17.1/fr/README.md b/docs/versioned_docs/version-v0.17.1/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.17.1/fr/eth.md b/docs/versioned_docs/version-v0.17.1/fr/eth.md
new file mode 100644
index 0000000000..5d0e258f40
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/fr/eth.md
@@ -0,0 +1,49 @@
+# Ethereum and its Relationship with DVT
+
+Our goal for this page is to equip you with the foundational knowledge needed to actively contribute to the advancement of Obol while also directing you to valuable Ethereum and DVT related resources. Additionally, we will shed light on the intersection of DVT and Ethereum, offering curated articles and blog posts to enhance your understanding.
+
+## **Understanding Ethereum**
+
+To grasp the current landscape of Ethereum's PoS development, we encourage you to delve into the wealth of information available on the [Official Ethereum Website.](https://ethereum.org/en/learn/) The Ethereum website serves as a hub for all things Ethereum, catering to individuals at various levels of expertise, whether you're just starting your journey or are an Ethereum veteran. Here, you'll find a trove of resources that cater to diverse learning needs and preferences, ensuring that there's something valuable for everyone in the Ethereum community to discover.
+
+## **DVT & Ethereum**
+
+### Distributed Validator Technology
+
+> "Distributed validator technology (DVT) is an approach to validator security that spreads out key management and signing responsibilities across multiple parties, to reduce single points of failure, and increase validator resiliency.
+>
+> It does this by splitting the private key used to secure a validator across many computers organized into a "cluster". The benefit of this is that it makes it very difficult for attackers to gain access to the key, because it is not stored in full on any single machine. It also allows for some nodes to go offline, as the necessary signing can be done by a subset of the machines in each cluster. This reduces single points of failure from the network and makes the whole validator set more robust." _(ethereum.org, 2023)_
+
+#### Learn More About Distributed Validator technology from [The Official Ethereum Website](https://ethereum.org/en/staking/dvt/)
+
+### How Does DVT Improve Staking on Ethereum?
+
+If you haven’t yet heard, Distributed Validator Technology, or DVT, is the next big thing on The Merge section of the Ethereum roadmap. Learn more about this in our blog post: [What is DVT and How Does It Improve Staking on Ethereum?](https://blog.obol.tech/what-is-dvt-and-how-does-it-improve-staking-on-ethereum/)
+
+
+
+_**Vitalik's Ethereum Roadmap**_
+
+### Deep Dive Into DVT and Charon’s Architecture
+
+Minimizing correlation is vital when designing DVT as Ethereum Proof of Stake is designed to heavily punish correlated behavior. In designing Obol, we’ve made careful choices to create a trust-minimized and non-correlated architecture.
+
+[**Read more about Designing Non-Correlation Here**](https://blog.obol.tech/deep-dive-into-dvt-and-charons-architecture/)
+
+### Performance Testing Distributed Validators
+
+In our mission to help make Ethereum consensus more resilient and decentralised with distributed validators (DVs), it’s critical that we do not compromise on the performance and effectiveness of validators. Earlier this year, we worked with MigaLabs, the blockchain ecosystem observatory located in Barcelona, to perform an independent test to validate the performance of Obol DVs under different configurations and conditions. After taking a few weeks to fully analyse the results together with MigaLabs, we’re happy to share the results of these performance tests.
+
+[**Read More About The Performance Test Results Here**](https://blog.obol.tech/performance-testing-distributed-validators/)
+
+
+
+### More Resources
+
+* [Sorting out Distributed Validator Technology](https://medium.com/nethermind-eth/sorting-out-distributed-validator-technology-a6f8ca1bbce3)
+* [A tour of Verifiable Secret Sharing schemes and Distributed Key Generation protocols](https://medium.com/nethermind-eth/a-tour-of-verifiable-secret-sharing-schemes-and-distributed-key-generation-protocols-3c814e0d47e1)
+* [Threshold Signature Schemes](https://medium.com/nethermind-eth/threshold-signature-schemes-36f40bc42aca)
+
+#### References
+
+* ethereum.org. (2023). Distributed Validator Technology. \[online] Available at: https://ethereum.org/en/staking/dvt/ \[Accessed 25 Sep. 2023].
diff --git a/docs/versioned_docs/version-v0.17.1/fr/golang.md b/docs/versioned_docs/version-v0.17.1/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.17.1/int/README.md b/docs/versioned_docs/version-v0.17.1/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.17.1/int/faq/README.md b/docs/versioned_docs/version-v0.17.1/int/faq/README.md
new file mode 100644
index 0000000000..456ad9139a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/faq/README.md
@@ -0,0 +1,2 @@
+# faq
+
diff --git a/docs/versioned_docs/version-v0.17.1/int/faq/general.md b/docs/versioned_docs/version-v0.17.1/int/faq/general.md
new file mode 100644
index 0000000000..b0aacb3e89
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/faq/general.md
@@ -0,0 +1,68 @@
+---
+sidebar_position: 1
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+## General
+
+### Does Obol have a token?
+
+No. Distributed validators use only Ether.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/n6ebKsX46w) too.
+
+### Where does the name Charon come from?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) \[kharon] is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
+
+### What are the hardware requirements for running a Charon node?
+
+Charon alone uses negligible disk space of not more than a few MBs. However, if you are running your consensus client and execution client on the same server as charon, then you will typically need the same hardware as running a full Ethereum node:
+
+At minimum:
+
+* A CPU with 2+ physical cores (or 4 vCPUs)
+* 8GB RAM
+* 1.5TB+ free SSD disk space (for mainnet)
+* 10Mb/s internet bandwidth
+
+Recommended specifications:
+
+* A CPU with 4+ physical cores
+* 16GB+ RAM
+* 2TB+ free disk on a high performance SSD (e.g. NVMe)
+* 25Mb/s internet bandwidth
+
+For more hardware considerations, check out the [ethereum.org guides](https://ethereum.org/en/developers/docs/nodes-and-clients/run-a-node/#environment-and-hardware) which explores various setups and trade-offs, such as running the node locally or in the cloud.
+
+For now, Geth, Teku & Lighthouse clients are packaged within the docker compose file provided in the [quickstart guides](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/group/README.md), so you don't have to install anything else to run a cluster. Just make sure you give them some time to sync once you start running your node.
+
+### What is the difference between a node, a validator and a cluster?
+
+A node is a single instance of Ethereum EL+CL clients that can communicate with other nodes to maintain the Ethereum blockchain.
+
+A validator is a node that participates in the consensus process by verifying transactions and creating new blocks. Multiple validators can run from the same node.
+
+A cluster is a group of nodes that act together as one or several validators, which allows for a more efficient use of resources, reduces operational costs, and provides better reliability and fault tolerance.
+
+### Can I migrate an existing Charon node to a new machine?
+
+It is possible to migrate your Charon node to another machine running the same config by moving the `.charon` folder with its contents to your new machine. Make sure the EL and CL on the new machine are synced before proceeding with the move, to minimize downtime.
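+
+For example, a rough sketch of copying the folder to a new machine (hostname and path are placeholders for your own setup):
+
+```shell
+# Stop the old node first, then copy the .charon directory to the new machine
+rsync -av .charon/ user@new-machine:/path/to/charon-distributed-validator-node/.charon/
+```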
+
+## Distributed Key Generation
+
+### What are the min and max numbers of operators for a Distributed Validator?
+
+Currently, the minimum is 4 operators with a threshold of 3.
+
+The threshold (aka quorum) corresponds to the minimum numbers of operators that need to be active for the validator(s) to be able to perform its duties. It is defined by the following formula `n-(ceil(n/3)-1)`. We strongly recommend using this default threshold in your DKG as it maximises liveness while maintaining BFT safety. Setting a 4 out of 4 cluster for example, would make your validator more vulnerable to going offline instead of less vulnerable. You can check the recommended threshold values for a cluster [here](../key-concepts.md#distributed-validator-threshold).
+
+## Debugging Errors in Logs
+
+You can check if the containers on your node are outputting errors by running `docker compose logs` on a machine with a running cluster.
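+
+For example, to narrow the output down to error lines from the charon service (service names depend on your compose file):
+
+```shell
+docker compose logs charon 2>&1 | grep -i error
+```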
+
+Diagnose some common errors and view their resolutions [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/faq/errors.mdx).
diff --git a/docs/versioned_docs/version-v0.17.1/int/faq/risks.md b/docs/versioned_docs/version-v0.17.1/int/faq/risks.md
new file mode 100644
index 0000000000..ff6229b66e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/faq/risks.md
@@ -0,0 +1,31 @@
+---
+sidebar_position: 3
+description: Centralization Risks and mitigation
+---
+
+# Centralization risks and mitigation
+
+## Risk: Obol hosting the relay infrastructure
+**Mitigation**: Self-host a relay
+
+One of the risks associated with Obol hosting the [LibP2P relay](../../charon/networking.md) infrastructure that enables peer discovery is that if Obol-hosted relays go down, peers won't be able to discover each other or perform the DKG. To mitigate this risk, external organizations and node operators can consider self-hosting a relay. This way, if Obol's relays go down, the clusters can still operate through other relays in the network.
+
+## Risk: Obol being able to update Charon code
+**Mitigation**: Pin specific docker versions or compile from source on a trusted commit
+
+Another risk associated with Obol is having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) running on the network which could introduce vulnerabilities or malicious code. To mitigate this risk, operators can consider pinning specific versions of the code that have been thoroughly tested and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community.
+
+## Risk: Obol hosting the DV Launchpad
+**Mitigation**: Use [`create cluster`](../../charon/charon-cli-reference.md) or [`create dkg`](../../charon/charon-cli-reference.md) locally and distribute the files manually
+
+Hosting the first Charon frontend, the [DV Launchpad](../../dvl/intro.md), on a centralized server could create a single point of failure, as users would have to rely on Obol's server to access the protocol. This could limit the decentralization of the protocol and could make it vulnerable to attacks or downtime. Obol hosting the launchpad on a decentralized network, such as IPFS, is a first step but not enough. This is why the Charon code is open-source and contains a CLI interface to interact with the protocol locally.
+
+To mitigate the risk of launchpad failure, consider using the `create cluster` or `create dkg` commands locally and distributing the key shares files manually.
+
+
+## Risk: Obol going bust/rogue
+**Mitigation**: Use key recovery
+
+The final centralization risk associated with Obol is the possibility of the company going bankrupt or acting maliciously, which would lead to a loss of control over the network and potentially cause damage to the ecosystem. To mitigate this risk, Obol has implemented a key recovery mechanism. This would allow the clusters to continue operating and to retrieve full private keys even if Obol is no longer able to provide support.
+
+A guide to recombine key shares into a single private key can be accessed [here](../quickstart/advanced/quickstart-combine.md).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.17.1/int/key-concepts.md b/docs/versioned_docs/version-v0.17.1/int/key-concepts.md
new file mode 100644
index 0000000000..e43af9c9ae
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/key-concepts.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is possible with the use of **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes some of the single points of failure in validation. Should <33% of the participating nodes in a DV cluster go offline, the remaining active nodes can still come to consensus on what to sign and can produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes Geth, Lighthouse, Charon and Teku.
+
+### Execution Client
+
+
+
+An execution client (formerly known as an Eth1 client) specializes in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/charon/intro/README.md).
+
+### Validator Client
+
+
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Threshold
+
+The following table outlines how many nodes in a cluster need to be online and honest for its distributed validators to remain online.
+
+| Cluster Size | Threshold | Note |
+| :----------: | :-------: | --------------------------------------------- |
+| 4 | 3/4 | Minimum threshold |
+| 5 | 4/5 | |
+| 6 | 4/6 | Minimum to tolerate two offline nodes |
+| 7 | 5/7 | Minimum to tolerate two **malicious** nodes |
+| 8 | 6/8 | |
+| 9 | 6/9 | Minimum to tolerate three offline nodes |
+| 10 | 7/10 | Minimum to tolerate three **malicious** nodes |
+
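+The thresholds in this table are consistent with taking the ceiling of two thirds of the cluster size. As a quick, purely illustrative check (this is an observation about the table above, not an official specification):
+
+```sh
+# Illustrative: ceil(2n/3) reproduces the threshold column above
+n=6
+echo $(( (2 * n + 2) / 3 ))   # prints 4, matching the 4/6 row
+```
+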
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata. Read more about these ceremonies [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/charon/dkg/README.md).
diff --git a/docs/versioned_docs/version-v0.17.1/int/overview.md b/docs/versioned_docs/version-v0.17.1/int/overview.md
new file mode 100644
index 0000000000..b94bb3dbaf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that distributed validators will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that preserve validators' current client and remote signing infrastructure.
+
+Similar to how roll-up technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/dvl/intro/README.md), a [User Interface](https://goerli.launchpad.obol.tech/) for bootstrapping Distributed Validators
+* [Charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/charon/intro/README.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivized testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+### The Vision
+
+The road to decentralizing stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivization is still extremely valuable. Individuals and staking as a service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a small by count, large by stake-weight group of individuals. The long tail of home and small stakers also deserves access to fault tolerant validation, but they may not personally know enough other operators they trust sufficiently to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivization scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivization alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivization layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/README.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/README.md
new file mode 100644
index 0000000000..bd2483c7cf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/README.md
@@ -0,0 +1,2 @@
+# quickstart
+
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/activate-dv.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/activate-dv.md
new file mode 100644
index 0000000000..e1462d8db0
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/activate-dv.md
@@ -0,0 +1,56 @@
+---
+sidebar_position: 4
+description: Activate the Distributed Validator using the deposit contract
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Activate a DV
+
+:::warning
+Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+If you have successfully created a distributed validator and you are ready to activate it, congratulations! 🎉
+
+Once you have connected all of your charon clients together, synced all of your ethereum nodes such that the monitoring indicates that they are all healthy and ready to operate, **ONE operator** may proceed to deposit and activate the validator(s).
+
+The `deposit-data.json` to be used to deposit will be located in each operator's `.charon` folder. The copies across every node should be identical and any of them can be uploaded.
+
+:::warning
+If you are being given a `deposit-data.json` file that you didn't generate yourself, please take extreme care to ensure this operator has not given you a malicious `deposit-data.json` file that is not the one you expect. Cross reference the files from multiple operators if there is any doubt. Activating the wrong validator or an invalid deposit could result in complete theft or loss of funds.
+:::
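+
+A simple way to cross-reference the file is for each operator to compare a hash of their local copy, for example over the group's communication channel. A minimal sketch (use `shasum -a 256` on macOS):
+
+```sh
+# All operators should see the same hash for an identical deposit-data.json
+sha256sum .charon/deposit-data.json
+```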
+
+Use any of the following tools to deposit. Please use the third-party tools at your own risk and always double check the staking contract address.
+
+
+
+
+
+
+
+ - Obol Distributed Validator Launchpad (Soon)
+ - ethereum.org Staking Launchpad
+ - From a SAFE Multisig (Repeat these steps for every validator to deposit in your cluster)
+
+
+ - From the SAFE UI, click on `New Transaction`, then `Transaction Builder`, to create a new custom transaction
+ - Enter the beacon chain contract for Deposit on mainnet - you can find it here
+ - Fill the transaction information
+ - Set amount to `32` in ETH
+ - Use your `deposit-data.json` to fill the required data: `pubkey`, `withdrawal credentials`, `signature`, `deposit_data_root`. Make sure to prefix the inputs with `0x` to format them as bytes
+ - Click on `Add transaction`
+ - Click on `Create Batch`
+ - Click on `Send Batch`; you can click on Simulate to check whether the transaction will execute successfully
+ - Get the minimum threshold of signatures from the other addresses and execute the custom transaction
+
+
+
+
+
+The activation process can take a minimum of 16 hours, with the maximum time to activation being dictated by the length of the activation queue, which can be weeks.
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/README.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/README.md
new file mode 100644
index 0000000000..965416d689
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/README.md
@@ -0,0 +1,2 @@
+# advanced
+
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/adv-docker-configs.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/adv-docker-configs.md
new file mode 100644
index 0000000000..d14de53e8b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/adv-docker-configs.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 8
+description: Use advanced docker-compose features to have more flexibility and power to change the default configuration.
+---
+
+# Advanced Docker Configs
+
+:::info
+This section is intended for *docker power users*, i.e., for those who are familiar with working with `docker-compose` and want to have more flexibility and power to change the default configuration.
+:::
+
+We use Docker Compose's "multiple compose files" feature, which provides a powerful way to override any configuration in `docker-compose.yml` without modifying git-checked-in files, since editing those causes conflicts when upgrading this repo.
+See [this](https://docs.docker.com/compose/extends/#multiple-compose-files) for more details.
+
+There are some additional compose files in [this repository](https://github.com/ObolNetwork/charon-distributed-validator-node/), `compose-debug.yml` and `docker-compose.override.yml.sample`, along with the default `docker-compose.yml` file, that you can use for this purpose.
+
+- `compose-debug.yml` contains additional containers that developers can use for debugging, like `jaeger`. To include them, run:
+
+```
+docker compose -f docker-compose.yml -f compose-debug.yml up
+```
+
+- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
+
+- To use it, just copy the sample file to `docker-compose.override.yml` and customise it to your liking. Please create this file ONLY when you want to tweak something. This is because the default override file is empty and docker errors if you provide an empty compose file.
+
+```
+cp docker-compose.override.yml.sample docker-compose.override.yml
+
+# Tweak docker-compose.override.yml and then run docker compose up
+docker compose up
+```
+
+- You can also run all these compose files together. This is desirable when you want to use both features; for example, you may want some debugging containers AND also want to override some defaults. To achieve this, run:
+
+```
+docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
+```
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/monitoring.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/monitoring.md
new file mode 100644
index 0000000000..fdbec169b9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/monitoring.md
@@ -0,0 +1,100 @@
+---
+sidebar_position: 4
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+# Getting Started Monitoring your Node
+
+Welcome to this comprehensive guide, designed to assist you in effectively monitoring your Charon cluster and nodes, and setting up alerts based on specified parameters.
+
+## Pre-requisites
+
+Ensure the following software is installed:
+
+- Docker: Find the installation guide for Ubuntu **[here](https://docs.docker.com/engine/install/ubuntu/)**
+- Prometheus: You can install it using the guide available **[here](https://prometheus.io/docs/prometheus/latest/installation/)**
+- Grafana: Follow this **[link](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)** to install Grafana
+
+## Import Pre-Configured Charon Dashboards
+
+- Navigate to the **[repository](https://github.com/ObolNetwork/monitoring/tree/main/dashboards)** that contains a variety of Grafana dashboards. For this demonstration, we will utilize the Charon Dashboard json.
+
+- In your Grafana interface, create a new dashboard and select the import option.
+
+- Copy the content of the Charon Dashboard json from the repository and paste it into the import box in Grafana. Click "Load" to proceed.
+
+- Finalize the import by clicking on the "Import" button. At this point, your dashboard should begin displaying metrics. Ensure your Charon client and Prometheus are operational for this to occur.
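+
+If you prefer to script the import instead of using the UI, a rough sketch along the following lines could work. It assumes Grafana is reachable on `localhost:3000`, an API token is stored in `GRAFANA_TOKEN`, and that you substitute the actual dashboard file name from the monitoring repository for the placeholder:
+
+```sh
+# Download the dashboard JSON, wrap it in the payload Grafana's import API expects, and POST it
+curl -s https://raw.githubusercontent.com/ObolNetwork/monitoring/main/dashboards/<dashboard-file>.json \
+  | jq '{dashboard: ., overwrite: true}' \
+  | curl -s -X POST \
+      -H "Authorization: Bearer $GRAFANA_TOKEN" \
+      -H "Content-Type: application/json" \
+      -d @- http://localhost:3000/api/dashboards/db
+```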
+
+## Example Alerting Rules
+
+To create alerts for Node-Exporter, follow these steps based on the sample rules provided on the "Awesome Prometheus alerts" page:
+
+1. Visit the **[Awesome Prometheus alerts](https://samber.github.io/awesome-prometheus-alerts/rules.html#host-and-hardware)** page. Here, you will find lists of Prometheus alerting rules categorized by hardware, system, and services.
+
+2. Depending on your need, select the category of alerts. For example, if you want to set up alerts for your system's CPU usage, click on the 'CPU' under the 'Host & Hardware' category.
+
+3. On the selected page, you'll find specific alert rules like 'High CPU Usage'. Each rule will provide the PromQL expression, alert name, and a brief description of what the alert does. You can copy these rules.
+
+4. Paste the copied rules into a rules file that your Prometheus configuration loads via `rule_files`. Make sure you understand each rule before adding it to avoid unnecessary alerts.
+
+5. Finally, save and apply the configuration file. Prometheus should now trigger alerts based on these rules.
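+
+As a concrete sketch, this is what saving one of the copied rules (the `HostOutOfMemory` example from that page) into a rule file could look like; the file path here is arbitrary, and the file must be listed under `rule_files:` in `prometheus.yml` for Prometheus to load it:
+
+```sh
+# Write a sample node-exporter alerting rule to a rule file Prometheus is configured to load
+cat > /etc/prometheus/node-exporter.rules.yml <<'EOF'
+groups:
+  - name: node-exporter
+    rules:
+      - alert: HostOutOfMemory
+        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes * 100 < 10
+        for: 2m
+        labels:
+          severity: warning
+        annotations:
+          summary: "Host out of memory (instance {{ $labels.instance }})"
+EOF
+```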
+
+
+For alerts specific to Charon/Alpha, refer to the alerting rules available on this [ObolNetwork/monitoring](https://github.com/ObolNetwork/monitoring/tree/main/alerting-rules).
+
+## Understanding Alert Rules
+
+1. `ClusterBeaconNodeDown`: This alert is activated when the beacon node in a specified Alpha cluster is offline. The beacon node is crucial for validating transactions and producing new blocks. Its unavailability could disrupt the overall functionality of the cluster.
+2. `ClusterBeaconNodeSyncing`: This alert indicates that the beacon node in a specified Alpha cluster is synchronizing, i.e., catching up with the latest blocks in the cluster.
+3. `ClusterNodeDown`: This alert is activated when a node in a specified Alpha cluster is offline.
+4. `ClusterMissedAttestations`: This alert indicates that there have been missed attestations in a specified Alpha cluster. Missed attestations may suggest that validators are not operating correctly, compromising the security and efficiency of the cluster.
+5. `ClusterInUnknownStatus`: This alert is designed to activate when a node within the cluster is detected to be in an unknown state. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` metric is 0.
+6. `ClusterInsufficientPeers`: This alert is set to activate when the number of peers for a node in the Alpha M1 Cluster #1 is insufficient. The condition is evaluated by checking whether the maximum of `app_monitoring_readyz` equals 4.
+7. `ClusterFailureRate`: This alert is activated when the failure rate of the Alpha M1 Cluster #1 exceeds a certain threshold.
+8. `ClusterVCMissingValidators`: This alert is activated if any validators in the Alpha M1 Cluster #1 are missing.
+9. `ClusterHighPctFailedSyncMsgDuty`: This alert is activated if a high percentage of sync message duties failed in the cluster. The alert is activated if the sum of the increase in failed duties tagged with "sync_message" in the last hour, divided by the sum of the increase in total duties tagged with "sync_message" in the last hour, is greater than 0.1.
+10. `ClusterNumConnectedRelays`: This alert is activated if the number of connected relays in the cluster falls to 0.
+11. `PeerPingLatency`: This alert is activated if the 90th percentile of the ping latency to the peers in a cluster exceeds 500ms within 2 minutes.
+
+## Best Practices for Monitoring Charon Nodes & Cluster
+
+- **Establish Baselines**: Familiarize yourself with the normal operation metrics like CPU, memory, and network usage. This will help you detect anomalies.
+- **Define Key Metrics**: Set up alerts for essential metrics, encompassing both system-level and Charon-specific ones.
+- **Configure Alerts**: Based on these metrics, set up actionable alerts.
+- **Monitor Network**: Regularly assess the connectivity between nodes and the network.
+- **Perform Regular Health Checks**: Consistently evaluate the status of your nodes and clusters.
+- **Monitor System Logs**: Keep an eye on logs for error messages or unusual activities.
+- **Assess Resource Usage**: Ensure your nodes are neither over- nor under-utilized.
+- **Automate Monitoring**: Use automation to ensure no issues go undetected.
+- **Conduct Drills**: Regularly simulate failure scenarios to fine-tune your setup.
+- **Update Regularly**: Keep your nodes and clusters updated with the latest software versions.
+
+## Third-Party Services for Uptime Testing
+
+- [updown.io](https://updown.io/)
+- [Grafana synthetic Monitoring](https://grafana.com/grafana/plugins/grafana-synthetic-monitoring-app/)
+
+## Key metrics to watch to verify node health based on jobs
+
+- Node Exporter:
+
+**CPU Usage**: High or spiking CPU usage can be a sign of a process demanding more resources than it should.
+
+**Memory Usage**: If a node is consistently running out of memory, it could be due to a memory leak or simply under-provisioning.
+
+**Disk I/O**: Slow disk operations can cause applications to hang or delay responses. High disk I/O can indicate storage performance issues or a sign of high load on the system.
+
+**Network Usage**: High network traffic or packet loss can signal network configuration issues, or that a service is being overwhelmed by requests.
+
+**Disk Space**: Running out of disk space can lead to application errors and data loss.
+
+**Uptime**: The amount of time a system has been up without any restarts. Frequent restarts can indicate instability in the system.
+
+**Error Rates**: The number of errors encountered by your application. This could be 4xx/5xx HTTP errors, exceptions, or any other kind of error your application may log.
+
+**Latency**: The delay before a transfer of data begins following an instruction for its transfer.
+
+It is also important to check:
+
+- NTP clock skew
+- Process restarts and failures (e.g. through `node_systemd`)
+- High error and panic log counts, which should be alerted on
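+
+For the last point, a quick way to eyeball error and panic counts before wiring up a proper log-based alert is to grep the container logs. A rough sketch, assuming the default container name from the `charon-distributed-validator-node` compose project:
+
+```sh
+# Count error/panic log lines from the last hour in the charon container
+docker logs --since 1h charon-distributed-validator-node-charon-1 2>&1 | grep -ciE 'error|panic'
+```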
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/obol-monitoring.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/obol-monitoring.md
new file mode 100644
index 0000000000..8d9e0ceca1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/obol-monitoring.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 5
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+
+# Push Metrics to Obol Monitoring
+
+:::info
+This is **optional** and does not confer any special privileges within the Obol Network.
+:::
+
+You may have been provided with **Monitoring Credentials** used to push distributed validator metrics to Obol's central prometheus cluster to monitor, analyze, and improve your Distributed Validator Cluster's performance.
+
+The provided credentials need to be added in `prometheus/prometheus.yml`, replacing `$PROM_REMOTE_WRITE_TOKEN`, and will look like:
+```
+obol20!tnt8U!C...
+```
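+
+One possible way to drop the token into place from the command line (a sketch; GNU `sed` shown, adapt the `-i` flag for macOS/BSD `sed`):
+
+```sh
+# Replace the placeholder with the monitoring token you were provided
+sed -i 's|\$PROM_REMOTE_WRITE_TOKEN|obol20!tnt8U!C...|' prometheus/prometheus.yml
+```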
+
+The updated `prometheus/prometheus.yml` file should look like:
+```
+global:
+ scrape_interval: 30s # Set the scrape interval to every 30 seconds.
+ evaluation_interval: 30s # Evaluate rules every 30 seconds.
+
+remote_write:
+ - url: https://vm.monitoring.gcp.obol.tech/write
+ authorization:
+ credentials: obol20!tnt8U!C...
+
+scrape_configs:
+ - job_name: 'charon'
+ static_configs:
+ - targets: ['charon:3620']
+ - job_name: "lodestar"
+ static_configs:
+ - targets: [ "lodestar:5064" ]
+ - job_name: 'node-exporter'
+ static_configs:
+ - targets: ['node-exporter:9100']
+```
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-builder-api.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-builder-api.md
new file mode 100644
index 0000000000..8ab1952f3e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-builder-api.md
@@ -0,0 +1,83 @@
+---
+sidebar_position: 2
+description: Run a distributed validator cluster with the builder API (MEV-Boost)
+---
+
+# Run a cluster with MEV-Boost
+
+:::warning
+Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+
+Charon's integration with MEV-Boost is also in an alpha state and requires a non-trivial amount of configuration to get working successfully. In the future, this process aims to be much more automated and seamless from a user's perspective.
+:::
+
+This quickstart guide focuses on configuring the builder API for Charon and supported Validator clients.
+
+## Getting started with Charon & the Builder API
+
+Running a distributed validator cluster with the builder API enabled will give the validators in the cluster access to the builder network. This builder network is a network of "Block Builders"
+who work with MEV searchers to produce the most valuable blocks a validator can propose.
+
+[MEV-Boost](https://boost.flashbots.net/) is one such product from flashbots that enables you to ask multiple
+block relays (who communicate with the "Block Builders") for blocks to propose. The block that pays the largest reward to the validator will be signed and returned to the relay for broadcasting to the wider
+network. The end result for the validator is generally an increased APY as they receive some share of the MEV.
+
+## Run MEV Boost
+
+Before running MEV-Boost, check your cluster version, which can be found inside the `cluster-lock.json` file.
+If you are using cluster-lock version 1.7.0 or a later release, Obol seamlessly accommodates all validator client implementations within a MEV-enabled distributed validator cluster.
+
+On cluster-lock versions below 1.7.0, Charon with the builder API enabled is compatible only with [Teku](https://github.com/ConsenSys/teku).
+
+## Builder API
+
+If you want to configure Charon and the supported validator clients to use the builder API, you can do that as well.
+
+With the builder API enabled, Charon works with all validator client implementations in a MEV-enabled distributed validator cluster (see the cluster-lock version note above).
+
+### Charon
+
+Charon supports the builder API via the `--builder-api` flag. To enable it, simply add this flag to the `charon run` command:
+
+```
+charon run --builder-api
+```
+
+### Validator Clients
+
+#### Teku Validator Client
+
+Configuring the Teku validator client with Charon can be done by following the same process outlined in the [Teku official guide](https://docs.teku.consensys.net/how-to/configure/use-proposer-config-file).
+
+The validator client must be set up to use the `--validators-proposer-config` [flag](https://docs.teku.consensys.net/reference/cli#validators-proposer-config) with a value equal to `http://$CHARON_ENDPOINT:3600/teku_proposer_config`.
+
+Once the flag is set up, Obol distributed validators will be able to register to the builder network, submit blinded beacon blocks and gain a share of the MEV profits.
+
+#### Lighthouse Validator Client
+
+For Lighthouse, configuring the validator client with Charon can be done by following the same process outlined in the [Lighthouse official guide](https://lighthouse-book.sigmaprime.io/builders.html).
+
+The validator client must be set up to use the `--builder-proposals` [flag](https://lighthouse-book.sigmaprime.io/builders.html#how-to-connect-to-a-builder) with a value equal to `http://$CHARON_ENDPOINT:3600/proposer_config`.
+
+Once the flag is set up, Obol distributed validators will be able to register to the builder network, submit blinded beacon blocks and gain a share of the MEV profits.
+
+## Verify MEV Boost is functional
+
+Once you have executed the above steps, you can verify if your setup is functional by reviewing your proposed blocks on [beaconcha.in](https://beaconcha.in) dashboards or via the Relay API endpoints.
+
+:::warning
+Note that the mainnet block in the description below is used only for illustration and was not actually proposed by a distributed validator.
+:::
+
+As an example, if my validator was the block proposer for block `17370078` on mainnet, I can review the following resources:
+
+* [Beaconcha.in](https://beaconcha.in):
+ * Consider this [Mainnet block 17370078](https://beaconcha.in/block/17370078).
+ * If we check the `Block Extra Data` field under `Execution Payload`, we will see the tag `Illuminate Dmocratize Dstribute` (Hex:`0x496c6c756d696e61746520446d6f63726174697a6520447374726962757465`).
+ * Relays will generally add a tag to the block. Since this block was submitted via the Flashbots Relay, as a result it has the tag.
+ * All mainnet flashbots blocks have this tag `Illuminate Dmocratize Dstribute`.
+* [Relay API](https://flashbots.github.io/relay-specs/):
+ * If you navigate to the `Data API` section on the Relay API page, you will see an endpoint labeled `/relay/v1/data/bidtraces/proposer_payload_delivered`.
+ * You can add a query argument of `block_number` to this call to see if a block was submitted via that Relay.
+ * [Here](https://boost-relay.flashbots.net/relay/v1/data/bidtraces/proposer_payload_delivered?block_number=17370078) is the query for the example block 17370078.
+ * Blocks that have not been submitted to the Relay will return an empty array `[]`.
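+
+For example, the relay data API query for the block above can be issued directly from the command line:
+
+```sh
+# Returns a non-empty array if the relay delivered a payload for this block
+curl -s "https://boost-relay.flashbots.net/relay/v1/data/bidtraces/proposer_payload_delivered?block_number=17370078"
+```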
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-combine.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-combine.md
new file mode 100644
index 0000000000..787a44c098
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-combine.md
@@ -0,0 +1,153 @@
+---
+sidebar_position: 9
+description: Combine distributed validator private key shares to recover the validator private key.
+---
+
+# Combine DV private key shares
+
+:::warning
+Reconstituting Distributed Validator private key shares into a standard validator private key is a security risk, and can potentially cause your validator to be slashed.
+
+Only combine private keys as a last resort and do so with extreme caution.
+:::
+
+Combine distributed validator private key shares into an Ethereum validator private key.
+
+## Pre-requisites
+
+- Ensure you have the `.charon` directories of at least a threshold of the cluster's node operators.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Set up the key combination directory tree
+
+Rename each cluster node operator's `.charon` directory to a different name to avoid folder name conflicts.
+
+We suggest clear, distinct names (such as `node0`, `node1`, ... as in the tree below) to avoid confusion.
+
+At the end of this process, you should have a tree like this:
+
+```shell
+$ tree ./validators-to-be-combined
+
+validators-to-be-combined/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+...
+└── node*
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+:::warning
+Make sure to never mix the various `.charon` directories with one another.
+
+Doing so can potentially cause the combination process to fail.
+:::
+
+## Step 2. Combine the key shares
+
+Run the following command:
+
+```sh
+# Combine a cluster's private key shares
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 combine --cluster-dir /opt/charon/validators-to-be-combined
+```
+
+This command will create one subdirectory for each validator private key that has been combined, named after its public key.
+
+```shell
+$ tree ./validators-to-be-combined
+
+validators-to-be-combined/
+├── 0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── 0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+We can verify that the directory names are correct by looking at the lock file:
+
+```shell
+$ jq .distributed_validators[].distributed_public_key validators-to-be-combined/node0/cluster-lock.json
+"0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd"
+"0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106"
+```
+
+:::info
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+Ensure your distributed validator cluster is completely shut down before starting a replacement validator or you are likely to be slashed.
+:::
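+
+As an illustration only, importing one of the combined keystores into a Lighthouse validator client could look roughly like the following (you will be prompted for the keystore password, which is in the matching `.txt` file; consult your client's documentation for the exact procedure):
+
+```sh
+# Import the combined validator keystore into Lighthouse (example public key directory from above)
+lighthouse --network goerli account validator import \
+  --directory ./validators-to-be-combined/0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd/validator_keys
+```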
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-sdk.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-sdk.md
new file mode 100644
index 0000000000..177f53218a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-sdk.md
@@ -0,0 +1,134 @@
+---
+sidebar_position: 1
+description: Create a DV cluster using the Obol Typescript SDK
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Create a DV using the SDK
+
+:::warning
+
+The Obol-SDK is in an alpha state and should be used with caution, particularly on mainnet.
+:::
+
+This is a walkthrough of using the [Obol-SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk) to propose a four-node distributed validator cluster for creation using the [DV Launchpad](../../../dvl/intro.md).
+
+## Pre-requisites
+
+- You have [node.js](https://nodejs.org/en) installed.
+
+## Install the package
+
+Install the Obol-SDK package into your development environment
+
+
+
+
+ npm install --save @obolnetwork/obol-sdk
+
+
+
+
+ yarn add @obolnetwork/obol-sdk
+
+
+
+
+## Instantiate the client
+
+The first thing you need to do is create an instance of the Obol SDK client. The client takes two constructor parameters:
+
+- The `chainID` for the chain you intend to use.
+- An ethers.js [signer](https://docs.ethers.org/v6/api/providers/#Signer-signTypedData) object.
+
+```ts
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// Create a dummy ethers signer object with a throwaway private key
+const mnemonic = ethers.Wallet.createRandom().mnemonic?.phrase || "";
+const privateKey = ethers.Wallet.fromPhrase(mnemonic).privateKey;
+const wallet = new ethers.Wallet(privateKey);
+const signer = wallet.connect(null);
+
+// Instantiate the Obol Client for goerli
+const obol = new Client({ chainId: 5 }, signer);
+```
+
+## Propose the cluster
+
+List the Ethereum addresses of participating operators, along with withdrawal and fee recipient address data for each validator you intend for the operators to create.
+
+```ts
+// A config hash is a deterministic hash of the proposed DV cluster configuration
+const configHash = await obol.createClusterDefinition({
+ name: "SDK Demo Cluster",
+ operators: [
+ { address: "0xC35CfCd67b9C27345a54EDEcC1033F2284148c81" },
+ { address: "0x33807D6F1DCe44b9C599fFE03640762A6F08C496" },
+ { address: "0xc6e76F72Ea672FAe05C357157CfC37720F0aF26f" },
+ { address: "0x86B8145c98e5BD25BA722645b15eD65f024a87EC" },
+ ],
+ validators: [
+ {
+ fee_recipient_address: "0x3CD4958e76C317abcEA19faDd076348808424F99",
+ withdrawal_address: "0xE0C5ceA4D3869F156717C66E188Ae81C80914a6e",
+ },
+ ],
+});
+
+console.log(
+ `Direct the operators to https://goerli.launchpad.obol.tech/dv?configHash=${configHash} to complete the key generation process`
+);
+```
+
+## Invite the Operators to complete the DKG
+
+Once the Obol-API returns a `configHash` string from the `createClusterDefinition` method, you can use this identifier to invite the operators to the [Launchpad](../../../dvl/intro.md) to complete the process.
+
+1. Operators navigate to `https://.launchpad.obol.tech/dv?configHash=` and complete the [run a DV with others](../group/quickstart-group-operator.md) flow.
+1. Once the DKG is complete, provided the operators used the `--publish` flag, the created cluster details will be posted to the Obol API.
+1. The creator will be able to retrieve this data with `obol.getClusterLock(configHash)`, to use for activating the newly created validator.
+
+## Retrieve the created Distributed Validators using the SDK
+
+Once the DKG is complete, the proposer of the cluster can retrieve key data such as the validator public keys and their associated deposit data messages.
+
+```js
+const clusterLock = await obol.getClusterLock(configHash);
+```
+
+Reference lock files can be found [here](https://github.com/ObolNetwork/charon/tree/main/cluster/testdata).
+
+## Activate the DVs using the deposit contract
+
+In order to activate the distributed validators, the cluster operator can retrieve the validators' associated deposit data from the lock file and use it to craft transactions to the `deposit()` method on the deposit contract.
+
+```js
+const validatorDepositData =
+ clusterLock.distributed_validators[validatorIndex].deposit_data;
+
+const depositContract = new ethers.Contract(
+ DEPOSIT_CONTRACT_ADDRESS, // 0x00000000219ab540356cBB839Cbe05303d7705Fa for Mainnet, 0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b for Goerli
+ depositContractABI, // https://etherscan.io/address/0x00000000219ab540356cBB839Cbe05303d7705Fa#code for Mainnet, and replace the address for Goerli
+ signer
+);
+
+const TX_VALUE = ethers.parseEther("32");
+
+const tx = await depositContract.deposit(
+ validatorDepositData.pubkey,
+ validatorDepositData.withdrawal_credentials,
+ validatorDepositData.signature,
+ validatorDepositData.deposit_data_root,
+ { value: TX_VALUE }
+);
+
+const txResult = await tx.wait();
+```
+
+## Usage Examples
+
+Examples of how our SDK can be used are found [here](https://github.com/ObolNetwork/obol-sdk-examples).
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-split.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-split.md
new file mode 100644
index 0000000000..5683ed353b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/quickstart-split.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 3
+description: Split existing validator keys
+---
+
+# Split existing validator private keys
+
+:::warning
+Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+
+This process should only be used if you want to split an _existing validator private key_ into multiple private key shares for use in a Distributed Validator Cluster. If your existing validator is not properly shut down before the Distributed Validator starts, your validator may be slashed.
+
+If you are starting a new validator, you should follow a [quickstart guide](../index.md) instead.
+:::
+
+Split an existing Ethereum validator key into multiple key shares for use in an [Obol Distributed Validator Cluster](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/key-concepts/README.md#distributed-validator-cluster).
+
+## Pre-requisites
+
+* Ensure you have the existing validator keystores (the ones to split) and passwords.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+## Step 1. Clone the charon repo and copy existing keystore files
+
+Clone the [charon](https://github.com/ObolNetwork/charon) repo.
+
+```sh
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon.git
+
+# Change directory
+cd charon/
+
+# Create a folder within this checked out repo
+mkdir split_keys
+```
+
+Copy the existing validator `keystore.json` files into this new folder. Alongside each keystore, add a file with a matching name but a `.txt` extension containing the keystore's password, e.g. `keystore-0.json` and `keystore-0.txt`.
+
+At the end of this process, you should have a tree like this:
+
+```shell
+├── split_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ ├── keystore-1.txt
+│ ...
+│ ├── keystore-*.json
+│ ├── keystore-*.txt
+```
+
+## Step 2. Split the keys using the charon docker command
+
+Run the following docker command to split the keys:
+
+```shell
+CHARON_VERSION= # E.g. v0.17.1
+CLUSTER_NAME= # The name of the cluster you want to create.
+WITHDRAWAL_ADDRESS= # The address you want to use for withdrawals.
+FEE_RECIPIENT_ADDRESS= # The address you want to use for fee payments.
+NODES= # The number of nodes in the cluster.
+
+docker run --rm -v $(pwd):/opt/charon obolnetwork/charon:${CHARON_VERSION} create cluster --name="${CLUSTER_NAME}" --withdrawal-addresses="${WITHDRAWAL_ADDRESS}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDRESS}" --split-existing-keys --split-keys-dir=/opt/charon/split_keys --nodes ${NODES} --network goerli
+```
+
+The above command will create `validator_keys` along with `cluster-lock.json` in `./.charon/cluster` for each node.
+
+Command output:
+
+```shell
+***************** WARNING: Splitting keys **********************
+ Please make sure any existing validator has been shut down for
+ at least 2 finalised epochs before starting the charon cluster,
+ otherwise slashing could occur.
+****************************************************************
+
+Created charon cluster:
+ --split-existing-keys=true
+
+.charon/cluster/
+├─ node[0-*]/ Directory for each node
+│ ├─ charon-enr-private-key Charon networking private key for node authentication
+│ ├─ cluster-lock.json Cluster lock defines the cluster lock file which is signed by all nodes
+│ ├─ validator_keys Validator keystores and password
+│ │ ├─ keystore-*.json Validator private share key for duty signing
+│ │ ├─ keystore-*.txt Keystore password files for keystore-*.json
+```
+
+These split keys can now be used to start a charon cluster.
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/self-relay.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/self-relay.md
new file mode 100644
index 0000000000..ae157214b7
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/advanced/self-relay.md
@@ -0,0 +1,36 @@
+---
+sidebar_position: 7
+description: Self-host a relay
+---
+
+# Self-Host a Relay
+
+If you are experiencing connectivity issues with the Obol hosted relays, or you want to improve your cluster's latency and decentralization, you can opt to host your own relay on a separate open and static internet port.
+
+```
+# Figure out your public IP
+curl v4.ident.me
+
+# Clone the repo and cd into it.
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+cd charon-distributed-validator-node
+
+# Replace 'replace.with.public.ip.or.hostname' in relay/docker-compose.yml with your public IPv4 or DNS hostname
+
+nano relay/docker-compose.yml
+
+docker compose -f relay/docker-compose.yml up
+```
+
+Test whether the relay is publicly accessible. This should return an ENR:
+`curl http://replace.with.public.ip.or.hostname:3640/enr`
+
+Ensure the ENR returned by the relay contains the correct public IP and port by decoding it with https://enr-viewer.com/.
+
+Configure **ALL** charon nodes in your cluster to use this relay:
+
+- Either by adding a flag: `--p2p-relays=http://replace.with.public.ip.or.hostname:3640/enr`
+- Or by setting the environment variable: `CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr`
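+
+For example, with a docker compose based node that reads `CHARON_*` variables from a `.env` file (an assumption about your setup), this could be as simple as:
+
+```sh
+# Point this node at the self-hosted relay and restart it
+echo 'CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr' >> .env
+docker compose up -d
+```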
+
+Note that a local `relay/.charon/charon-enr-private-key` file will be created next to `relay/docker-compose.yml` to ensure a persisted relay ENR across restarts.
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/README.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/README.md
new file mode 100644
index 0000000000..f7eb065fd3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/README.md
@@ -0,0 +1,2 @@
+# alone
+
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/create-keys.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/create-keys.md
new file mode 100644
index 0000000000..776d4f26f0
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/create-keys.md
@@ -0,0 +1,55 @@
+---
+sidebar_position: 2
+description: Run all nodes in a distributed validator cluster
+---
+
+# create-keys
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+## Create the private key shares
+
+:::warning
+Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+:::info
+Running a Distributed Validator alone means that a single operator manages all of the nodes of the DV. Depending on the operator's security preferences, the private key shares can be created centrally and distributed securely to each node. This is the focus of the guide below.
+
+Alternatively, the private key shares can be created in a lower-trust manner with a [Distributed Key Generation](../../key-concepts.md#distributed-validator-key-generation-ceremony) process, which avoids the validator private key being stored in full anywhere, at any point in its lifecycle. Follow the [group quickstart](../group/index.md) instead for this latter case.
+:::
+
+### Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+### Create the key shares locally
+
+Create the artifacts needed to run a DV cluster by running the following command to set up the inputs for the DV. Check the [Charon CLI reference](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/charon/charon-cli-reference/README.md) for additional optional flags to set.
+
+```
+WITHDRAWAL_ADDR=[ENTER YOUR WITHDRAWAL ADDRESS HERE]
+FEE_RECIPIENT_ADDR=[ENTER YOUR FEE RECIPIENT ADDRESS HERE]
+NB_NODES=[ENTER AMOUNT OF DESIRED NODES]
+```
+
+Then, run this command to create all the key shares and cluster artifacts locally:
+
+```
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create cluster --name="Quickstart Cluster" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes="${NB_NODES}" --network="goerli" --num-validators=1 --cluster-dir="cluster"
+```
+
+Go to the [Obol Launchpad](https://goerli.launchpad.obol.tech) and select `Create a distributed validator alone`. Follow the steps to configure your DV cluster.
+
+After successful completion, a subdirectory `.charon/cluster` should be created. It contains one folder per node in the cluster, and each folder holds the partial private keys that together make up the distributed validator described in `.charon/cluster/cluster-lock.json`.
+
+Once ready, you can move to [deploying this cluster physically](deploy.md).
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/deploy.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/deploy.md
new file mode 100644
index 0000000000..b58a629538
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/deploy.md
@@ -0,0 +1,25 @@
+---
+sidebar_position: 3
+description: Move the private key shares to the nodes and run the cluster
+---
+
+# Deploy the cluster
+
+To distribute your cluster physically and start the DV, each node needs a directory called `.charon` with one (or several) private key shares within it as per the structure below.
+
+```
+├── .charon
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── ...
+│ ├── keystore-N.json
+│ └── keystore-N.txt
+```
+
+:point\_right: Use the single node [docker compose](https://github.com/ObolNetwork/charon-distributed-validator-node), the kubernetes [manifests](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node), or the [helm chart](https://github.com/ObolNetwork/helm-charts) example repos to get your nodes up and connected after loading the `.charon` folder artifacts into them appropriately.
+
+:::warning
+Right now, the `charon-distributed-node-cluster` repo [used earlier to create the private keys](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/alone/create-keys/README.md) outputs a folder structure like `.charon/cluster/node0/validator_keys`. Make sure to grab the `./node0/*` folder, RENAME it to `.charon`, and then move it into one of the single node repos above to have a working cluster matching the folder structure shown above.
+:::
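+
+A sketch of that copy-and-rename step for the first node (paths are illustrative and depend on where you created the cluster artifacts and where you cloned the single node repo):
+
+```sh
+# Copy node0's artifacts into the single node repo under the expected .charon name
+cp -r .charon/cluster/node0 ~/charon-distributed-validator-node/.charon
+```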
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/test-locally.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/test-locally.md
new file mode 100644
index 0000000000..08b1ba5af0
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/alone/test-locally.md
@@ -0,0 +1,81 @@
+---
+sidebar_position: 1
+description: Test the solo cluster locally
+---
+
+# Run a test cluster locally
+:::warning
+This is a demo repo to understand how Distributed Validators work and is not suitable for a production deployment.
+
+This guide only runs one execution client, one consensus client, and six charon client + validator client pairs on a single docker host. As a consequence, if this machine fails, there is no fault tolerance.
+
+Follow these two guides sequentially instead for production deployment: [create keys centrally](./create-keys.md) and [how to deploy them](./deploy.md).
+:::
+
+The [`charon-distributed-validator-cluster`](https://github.com/ObolNetwork/charon-distributed-validator-cluster) repo contains six charon clients in separate docker containers along with an execution client and consensus client, simulating a Distributed Validator cluster running.
+
+The default cluster consists of:
+- [Nethermind](https://github.com/NethermindEth/nethermind), an execution layer client
+- [Lighthouse](https://github.com/sigp/lighthouse), a consensus layer client
+- Six [charon](https://github.com/ObolNetwork/charon) nodes
+- A mixture of validator clients:
+ - VC0: [Lighthouse](https://github.com/sigp/lighthouse)
+ - vc1: [Teku](https://github.com/ConsenSys/teku)
+ - vc2: [Nimbus](https://github.com/status-im/nimbus-eth2)
+ - vc3: [Lighthouse](https://github.com/sigp/lighthouse)
+ - vc4: [Teku](https://github.com/ConsenSys/teku)
+ - vc5: [Nimbus](https://github.com/status-im/nimbus-eth2)
+
+## Pre-requisites
+
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Ensure you have [git](https://git-scm.com/downloads) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Create the key shares locally
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+ `.env.sample` is a sample environment file that allows overriding default configuration defined in `docker-compose.yml`. Uncomment and set any variable to override its value.
+
+3. Create the artifacts needed to run a DV cluster by running the following command:
+
+ ```sh
+ # Enter required validator addresses
+ WITHDRAWAL_ADDR=
+ FEE_RECIPIENT_ADDR=
+
+ # Create a distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create cluster --name="mycluster" --cluster-dir=".charon/cluster/" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes 6 --network goerli --num-validators=1
+ ```
+
+These commands will create six folders within `.charon/cluster`, one for each node created. You will need to rename `node*` to `.charon` for each folder to be found by the default `charon run` command, or you can use `charon run --private-key-file=".charon/cluster/node0/charon-enr-private-key" --lock-file=".charon/cluster/node0/cluster-lock.json"` for each instance of charon you start.
+
+## Start the cluster
+
+Run this command to start your cluster containers
+
+```sh
+# Start the distributed validator cluster
+docker compose up --build
+```
+Check the monitoring dashboard and see if things look all right:
+
+```sh
+# Open Grafana
+open http://localhost:3000/d/laEp8vupp
+```
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/group/README.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/README.md
new file mode 100644
index 0000000000..56f83ad21c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/README.md
@@ -0,0 +1,2 @@
+# group
+
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/group/index.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/index.md
new file mode 100644
index 0000000000..71bdacd245
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/index.md
@@ -0,0 +1,12 @@
+# Run a cluster as a group
+
+:::warning
+Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+:::info
+Running a Distributed Validator with others typically means that several operators run the various nodes of the cluster. In such a case, the key shares should be created with a [distributed key generation process](../../key-concepts.md#distributed-validator-key-generation-ceremony), avoiding the private key being stored in full, anywhere.
+:::
+
+There are two sequential user journeys when setting up a DV cluster with others. Each comes with its own quickstart:
+
+1. The [**Creator** (**Leader**) Journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/group/group/quickstart-group-leader-creator/README.md), which outlines the steps to propose a Distributed Validator Cluster.
+ * In the **Creator** case, the person creating the cluster _will NOT_ be a node operator in the cluster.
+ * In the **Leader** case, the person creating the cluster _will_ be a node operator in the cluster.
+2. The [**Operator** Journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/group/group/quickstart-group-operator/README.md) which outlines the steps to create a Distributed Validator Cluster proposed by a leader or creator using the above process.
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-cli.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-cli.md
new file mode 100644
index 0000000000..9bc5a830ba
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-cli.md
@@ -0,0 +1,124 @@
+---
+sidebar_position: 3
+description: Run one node in a multi-operator distributed validator cluster using the CLI
+---
+
+# Using the CLI
+
+:::warning Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster via the CLI.
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+* Decide who the Leader or Creator of your cluster will be. Only they have to perform [step 2](quickstart-cli.md#step-2-leader-creates-the-dkg-configuration-file-and-distributes-it-to-everyone-else) and [step 5](quickstart-cli.md#step-5-activate-the-deposit-data) in this quickstart. They do not get any special privileges.
+ * In the **Leader** case, the operator creating the cluster will also operate a node in the cluster.
+ * In the **Creator** case, the cluster is created by an external party to the cluster.
+
+## Step 1. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, all operators (including the leader but NOT a creator) need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/faq/errors.mdx) for their charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+:::warning Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.** :::
+
+Finally, share your ENR with the leader or creator so that they can proceed to Step 2.
+
+## Step 2. Leader or Creator creates the DKG configuration file and distributes it to cluster operators
+
+1. The leader or creator of the cluster will prepare the `cluster-definition.json` file for the Distributed Key Generation ceremony using the `charon create dkg` command.
+
+```
+# Prepare an environment variable file
+cp .env.create_dkg.sample .env.create_dkg
+```
+
+2. Populate the newly created `.env.create_dkg` file with the `cluster name`, the `fee recipient` and `withdrawal Ethereum addresses`, and the `ENRs` of all the operators participating in the cluster.
+ * The generated file is hidden by default. To view it, run `ls -al` in your terminal. Alternatively, if you are on macOS, press `Cmd + Shift + .` to view all hidden files in the Finder application.
+3. Run the `charon create dkg` command to generate the DKG `cluster-definition.json` file.
+
+```
+docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.17.1 create dkg
+```
+
+This command should output a file at `.charon/cluster-definition.json`. This file needs to be shared with the other operators in a cluster.
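+
+If you prefer not to use an environment file, the same values can likely be passed as flags instead. This is a hedged sketch only: the flag names are assumed to mirror those shown for `charon create cluster` elsewhere in these docs, and `--operator-enrs` in particular should be verified against `charon create dkg --help`.
+
+```sh
+# Hedged sketch: flag names are assumptions, verify with `charon create dkg --help`
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create dkg \
+  --name="mycluster" \
+  --network="goerli" \
+  --num-validators=1 \
+  --fee-recipient-addresses="<fee-recipient-address>" \
+  --withdrawal-addresses="<withdrawal-address>" \
+  --operator-enrs="<enr-operator-1>,<enr-operator-2>,<enr-operator-3>,<enr-operator-4>"
+```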
+
+## Step 3. Run the DKG
+
+After receiving the `cluster-definition.json` file created by the leader, cluster operators should ideally save it in the `.charon/` folder that was created during step 1; alternatively, the `--definition-file` flag can override the default expected location for this file.
+
+Every cluster member then participates in the DKG ceremony. For Charon v1, this needs to happen relatively synchronously between participants at an agreed time.
+
+```
+# Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 dkg
+```
+
+> This is a helpful [video walkthrough](https://www.youtube.com/watch?v=94Pkovp5zoQ\&ab_channel=ObolNetwork).
+
+Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
+
+* A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+* A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+* A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
+
+:::warning Please make sure to create a backup of `.charon/validator_keys`. **If you lose your keys you won't be able to start the DV cluster successfully.** :::
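+
+One simple way to take such a backup (a sketch only; where and how you store the archive securely is up to you) is to archive the folder:
+
+```sh
+# Archive the distributed validator key shares; store the archive somewhere safe and offline
+tar -czf charon-validator-keys-backup.tar.gz .charon/validator_keys
+```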
+
+:::info The `cluster-lock` and `deposit-data` files are identical for each operator and can be copied if lost. :::
+
+## Step 4. Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
+
+Before completing these instructions, you should assign a static local IP address to your device (extending the DHCP reservation indefinitely or removing the device from the DHCP pool entirely if you prefer), and port forward the TCP protocol on the public port `:3610` on your router to your device's local IP address on the same port. This step is different for every person's home internet, and can be complicated by the presence of dynamic public IP addresses. We are currently working on making this as easy as possible, but for the time being, a distributed validator cluster isn't going to work very resiliently if all charon nodes cannot talk directly to one another and instead need to have an intermediary node forwarding traffic to them.
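+
+A quick, hedged way to sanity-check the port forwarding (assuming `nc` is installed, run from a machine outside your home network, with your public IP substituted in) is:
+
+```sh
+# Check that TCP port 3610 is reachable from outside your network
+nc -vz <your-public-ip> 3610
+```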
+
+**Caution**: If you manually update `docker-compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It is best not to do this; `lighthouse` uses checkpoint sync, so syncing does not take much time.
+
+**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
+
+```
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up
+
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+You should use the grafana dashboard to infer whether your cluster is healthy. In particular you should check:
+
+* That your charon client can connect to the configured beacon client.
+* That your charon client can connect to all peers.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually \~16 hours after the deposit is made).
+
+If at any point you need to turn off your node, you can run:
+
+```
+# Shut down the currently running distributed validator node
+docker compose down
+```
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-leader-creator.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-leader-creator.md
new file mode 100644
index 0000000000..6be53b7ca3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-leader-creator.md
@@ -0,0 +1,125 @@
+---
+sidebar_position: 1
+description: A leader/creator creates a cluster configuration to be shared with operators
+---
+
+# quickstart-group-leader-creator
+
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+## Creator & Leader Journey
+
+:::warning Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+The following instructions aim to assist with the preparation of a distributed validator key generation ceremony. Select the _Leader_ tab if you **will** be an operator participating in the cluster, and select the _Creator_ tab if you **will NOT** be an operator in the cluster.
+
+These roles hold no position of privilege in the cluster, they only set the initial terms of the cluster that the other operators agree to.
+
+In the **Leader** case, the person creating the cluster will be a node operator in the cluster.
+
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+
+In the **Creator** case, the person creating the cluster will not be a node operator in the cluster.
+
+### Overview Video
+
+### Step 1. Collect Ethereum addresses of the cluster operators
+
+Before starting the cluster creation, you will need to collect one Ethereum address per operator in the cluster. They will need to be able to sign messages through MetaMask with this address. Broader wallet support will be added in the future.
+
+### Step 2. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, you need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/faq/errors.mdx#enrs-keys) for your charon client. Operators in your cluster will also need to do this step, as per their [quickstart](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-operator/README.md#step-2-create-and-back-up-a-private-key-for-charon). This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update docker permissions](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/faq/errors/README.md#docker-permission-denied-error) to allow the command to run successfully.
+
+:::warning Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.** :::
+
+This step is not needed and you can move on to [Step 3](quickstart-group-leader-creator.md#step-3-create-the-dkg-configuration-file-and-distribute-it-to-cluster-operators).
+
+### Step 3. Create the DKG configuration file and distribute it to cluster operators
+
+You will prepare the configuration file for the distributed key generation ceremony using the launchpad.
+
+1. Go to the [DV Launchpad](https://goerli.launchpad.obol.tech)
+2. Connect your wallet
+
+
+
+3. Select `Create a Cluster with a group` then `Get Started`.
+
+
+
+4. Follow the flow and accept the advisories.
+5. Configure the Cluster
+
+ * Input the `Cluster Name` & `Cluster Size` (i.e. number of operators in the cluster). The threshold for the cluster to operate successfully will update automatically.
+ * ⚠️ If you are a **Leader** (an operator in the cluster), leave the `Non-Operator` toggle OFF.
+ * ⚠️ If you are a **Creator** (not an operator in the cluster), turn the `Non-Operator` toggle ON.
+ * Input the Ethereum addresses for each operator collected during [step 1](quickstart-group-leader-creator.md#step-1-collect-ethereum-addresses-of-the-cluster-operators).
+ * Select the desired amount of validators (32 ETH each) the cluster will run.
+ * Paste your `ENR` generated at [Step 2](quickstart-group-leader-creator.md#step-2-create-and-back-up-a-private-key-for-charon).
+ * Select the `Withdrawal Addresses` method. Use `Single address` to receive the principal and fees to a single address or `Splitter Contracts` to share them among operators.
+ * Enter the `Withdrawal Address` that will receive the validator effective balance at exit and when balance skimming occurs.
+ * Enter the `Fee Recipient Address` to receive MEV rewards (if enabled), and block proposal priority fees.
+ * You can set them to be the same as your connected wallet address in one click.
+
+ * Enter the Ethereum address to claim the validator principal (32 ether) at exit.
+ * Enter the Ethereum addresses and their percentage split of the validator's rewards. Validator rewards include consensus rewards, MEV rewards and proposal priority fees.
+
+
+ * Click `Create Cluster Configuration`
+
+6. Review the cluster configuration and, if prompted, deploy the withdrawal manager contracts by signing the two transactions with your wallet.
+7. You will be asked to confirm your configuration and to sign:
+   * The `config_hash`. This is a hashed representation of the details of this cluster, to ensure everyone is agreeing to an identical setup.
+   * If you are a **Leader** (a node operator in the cluster), you will also sign:
+     * The `operator_config_hash`. This is your acceptance of the terms as a participating node operator.
+     * Your `ENR`. Signing your ENR authorises the corresponding private key to act on your behalf in the cluster.
+
+8. Share your cluster invite link with the operators. Following the link will show you a screen waiting for other operators to accept the configuration you created.
+
+
+
+👉 If you are a **Leader**: once every participating operator has signed their approval to the terms, you will continue the [**Operator** journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-operator/README.md#step-3-run-the-dkg) by completing the distributed key generation step.
+
+If you are a **Creator**: your journey ends here. You can use the invite link to monitor whether the operators confirm their agreement to the cluster by signing their approval. Future versions of the launchpad will allow a creator to track a distributed validator's lifecycle in its entirety.
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-operator.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-operator.md
new file mode 100644
index 0000000000..905086ff82
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-operator.md
@@ -0,0 +1,133 @@
+---
+sidebar_position: 1
+description: A node operator joins a DV cluster
+---
+
+# Operator Journey
+
+:::warning Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster after receiving a cluster invite link from a leader or creator.
+
+## Overview Video
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+
+## Step 1. Share an Ethereum address with your Leader or Creator
+
+Before starting the cluster creation, make sure you have shared an Ethereum address with your cluster **Leader** or **Creator**. If you haven't chosen someone as a Leader or Creator yet, please go back to the [Quickstart intro](index.md) and define one person to go through the [Leader & Creator Journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/group/quickstart-group-leader-creator/README.md) before moving forward.
+
+## Step 2. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, you need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/faq/errors.mdx#enrs-keys) for your charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update docker permissions](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/faq/errors/README.md#docker-permission-denied-error) to allow the command to run successfully.
+
+:::warning Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.** :::
+
+## Step 3. Join and sign the cluster configuration
+
+After receiving the invite link created by the **Leader** or **Creator**, you will be able to join and sign the cluster configuration created.
+
+1. Go to the DV launchpad link provided by the leader or creator.
+2. Connect your wallet using the Ethereum address provided to the leader in [step 1](quickstart-group-operator.md#step-1-share-an-ethereum-address-with-your-leader-or-creator).
+
+
+
+3. Review the operators' addresses submitted and click `Get Started` to continue.
+
+
+
+4. Review and accept the advisories.
+5. Review the configuration created by the leader or creator and add your `ENR` generated in [step 2](quickstart-group-operator.md#step-2-create-and-back-up-a-private-key-for-charon).
+
+
+
+6. Sign the following with your wallet
+ * The config hash. This is a hashed representation of all of the details for this cluster.
+ * Your own `ENR`. This signature authorises the key represented by this ENR to act on your behalf in the cluster.
+7. Wait for all the other operators in your cluster to do the same.
+
+## Step 4. Run the DKG
+
+:::info For the [DKG](../../../charon/dkg.md) to complete, all operators need to be running the command simultaneously. It helps to coordinate an agreed upon time amongst operators at which to run the command. :::
+
+### Overview
+
+1. Once all operators have successfully signed, your screen will automatically advance to the next step and look like this. Click `Continue`. If you closed the tab, just go back to the invite link shared by the leader and connect your wallet.
+
+
+
+2. You have two options to perform the DKG.
+
+ 1. **Option 1** (default): copy the `docker` command shown on the screen and run it in your terminal. It will retrieve the remote cluster details and begin the DKG process.
+ 2. **Option 2** (Manual DKG): download the `cluster-definition` file manually and move it to the hidden `.charon` folder. Then, every cluster member participates in the DKG ceremony by running the command displayed (see the sketch below).
+
+ 
+3. Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
+ * A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+ * A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+ * A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
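+
+As a rough sketch of **Option 2**: once the `cluster-definition.json` file has been saved into the hidden `.charon` folder, the ceremony can be run with the same command used in the CLI quickstart (the Launchpad screen shows the exact command to copy):
+
+```sh
+# Participate in the DKG ceremony; this creates .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.17.1 dkg
+```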
+
+:::warning Please make sure to create a backup of `.charon/validator_keys`. **If you lose your keys you won't be able to start the DV cluster successfully.** :::
+
+:::info The `cluster-lock` and `deposit-data` files are identical for each operator and can be copied if lost. :::
+
+## Step 5. Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
+
+Before completing these instructions, you should assign a static local IP address to your device (extending the DHCP reservation indefinitely or removing the device from the DHCP pool entirely if you prefer), and port forward the TCP protocol on the public port `:3610` on your router to your device's local IP address on the same port. This step is different for every person's home internet, and can be complicated by the presence of dynamic public IP addresses. We are currently working on making this as easy as possible, but for the time being, a distributed validator cluster isn't going to work very resiliently if all charon nodes cannot talk directly to one another and instead need to have an intermediary node forwarding traffic to them.
+
+**Caution**: If you manually update `docker compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It is best not to do this; `lighthouse` uses checkpoint sync, so syncing does not take much time.
+
+**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
+
+```
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up
+
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+You should use the grafana dashboard to infer whether your cluster is healthy. In particular you should check:
+
+* That your charon client can connect to the configured beacon client.
+* That your charon client can connect to all peers.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually \~16 hours after the deposit is made).
+
+If at any point you need to turn off your node, you can run:
+
+```
+# Shut down the currently running distributed validator node
+docker compose down
+```
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/index.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/index.md
new file mode 100644
index 0000000000..581a521069
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/index.md
@@ -0,0 +1,8 @@
+# Quickstart Guides
+
+:::warning Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+There are two ways to set up a distributed validator, and each comes with its own quickstart:
+
+1. [Run a DV cluster as a **group**](group/index.md), where several operators run the nodes that make up the cluster. In this setup, the key shares are created using a distributed key generation process, avoiding the full private key ever being stored in any one place. This approach can also be used by single operators looking to manage all nodes of a cluster but wanting to create the key shares in a trust-minimised fashion.
+2. [Run a DV cluster **alone**](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/quickstart/alone/create-keys/README.md), where a single operator runs all the nodes of the DV. Depending on trust assumptions, there is not necessarily the need to create the key shares via a DKG process. Instead the key shares can be created in a centralised manner, and distributed securely to the nodes.
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/quickstart-exit.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/quickstart-exit.md
new file mode 100644
index 0000000000..8cd1c69d9d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/quickstart-exit.md
@@ -0,0 +1,87 @@
+---
+sidebar_position: 5
+description: Exit a validator
+---
+
+# quickstart-exit
+
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+## Exit a DV
+
+:::warning Charon is in an alpha state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+Users looking to exit staking entirely and withdraw their full balance back must also sign and broadcast a "voluntary exit" message with validator keys which will start the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas. In the case of a DV, each charon node needs to broadcast a partial exit to the other nodes of the cluster. Once a threshold of partial exits has been received by any node, the full voluntary exit will be sent to the beacon chain.
+
+This process will take 27 hours or longer depending on the current length of the exit queue.
+
+:::info
+
+* A threshold of operators needs to run the exit command for the exit to succeed.
+* If a charon client restarts after the exit command is run but before the threshold is reached, it will lose the partial exits it has received from the other nodes. If all charon clients restart and thus all partial exits are lost before the required threshold of exit messages is received, operators will have to rebroadcast their partial exit messages. :::
+
+### Run the `voluntary-exit` command on your validator client
+
+Run the appropriate command on your validator client to broadcast an exit message from your validator client to its upstream charon client.
+
+It needs to be the validator client that is connected to your charon client taking part in the DV, as you are only signing a partial exit message, with a partial private key share, which your charon client will combine with the other partial exit messages from the other operators.
+
+:::info
+
+* All operators need to use the same `EXIT_EPOCH` for the exit to be successful. Assuming you want to exit as soon as possible, the default epoch of `162304` included in the below commands should be sufficient.
+* Partial exits can be broadcast by any validator client as long as the sum reaches the threshold for the cluster. :::
+
+For a Teku validator client:
+
+```
+docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+  --beacon-node-api-endpoint="http://charon:3600/" \
+  --confirmation-enabled=false \
+  --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+  --epoch=162304
+```
+
+For a Nimbus validator client, the following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the Keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+For each file in the `/home/user/data/wd/secrets` directory, it:
+
+* Extracts the filename without the extension (the filename is the validator public key).
+* Appends `--validator=$filename` to the `command` variable.
+* Executes `nimbus_beacon_node` with the following arguments:
+  * `deposits exit`: Exits validators.
+  * `$command`: The generated command string from the loop.
+  * `--epoch=162304`: The epoch upon which to submit the voluntary exit.
+  * `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+  * `--data-dir=/home/user/data/wd/`: Specifies the `Keystore path` which has all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+```
+docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+  mkdir /home/user/data/wd
+  cp -r /home/user/data/charon/ /home/user/data/wd
+
+  command=""; \
+  for file in /home/user/data/wd/secrets/*; do \
+    filename=$(basename "$file" | cut -d. -f1); \
+    command+=" --validator=$filename"; \
+  done; \
+
+  /home/user/nimbus_beacon_node deposits exit $command --epoch=162304 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'
+```
+
+For a Lodestar validator client, run `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+* `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+* `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+* `--exitEpoch=162304`: The epoch upon which to submit the voluntary exit.
+* `--network=goerli`: Specifies the network.
+* `--yes`: Skips the confirmation prompt.
+
+```
+docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=162304 --network=goerli --yes'
+```
+
+Once a threshold of exit signatures has been received by any single charon client, it will craft a valid voluntary exit message and will submit it to the beacon chain for inclusion. You can monitor partial exits stored by each node in the [Grafana Dashboard](https://github.com/ObolNetwork/charon-distributed-validator-node).
+
+### Exit epoch and withdrawable epoch
+
+The process of a validator exiting from staking takes variable amounts of time, depending on how many others are exiting at the same time.
+
+Immediately upon broadcasting a signed voluntary exit message, the exit epoch and withdrawable epoch values are calculated based off the current epoch number. These values determine exactly when the validator will no longer be required to be online performing validation, and when the validator is eligible for a full withdrawal respectively.
+
+1. Exit epoch - epoch at which your validator is no longer active, no longer earning rewards, and is no longer subject to slashing rules. :::warning Up until this epoch (while "in the queue") your validator is expected to be online and is held to the same slashing rules as always. Do not turn your DV node off until this epoch is reached. :::
+2. Withdrawable epoch - epoch at which your validator funds are eligible for a full withdrawal during the next validator sweep. This occurs 256 epochs after the exit epoch, which takes \~27.3 hours.
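+
+For reference, the \~27.3 hour figure follows from Ethereum's epoch timing (32 slots of 12 seconds per epoch):
+
+```
+256 epochs × 32 slots/epoch × 12 seconds/slot = 98,304 seconds ≈ 27.3 hours
+```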
+
+### How to verify a validator exit
+
+Consult the examples below and compare them to your validator's monitoring to verify that exits from each operator in the cluster are being received. This example is a cluster of 4 nodes with 2 validators, where a threshold of 3 nodes broadcasting exits is needed.
+
+1. Operator 1 broadcasts an exit on validator client 1.
+2. Operator 2 broadcasts an exit on validator client 2.
+3. Operator 3 broadcasts an exit on validator client 3.
+
+At this point, the threshold of 3 has been reached and the validator exit process will start. The logs will show the following:
+
+:::tip Once a validator has broadcasted an exit message, it must continue to validate for at least 27 hours or longer. Do not shut off your distributed validator nodes until your validator is fully exited. :::
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/quickstart-mainnet.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/quickstart-mainnet.md
new file mode 100644
index 0000000000..6b1cd348ec
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/quickstart-mainnet.md
@@ -0,0 +1,111 @@
+---
+sidebar_position: 7
+description: Run a cluster on mainnet
+---
+
+# Run a DV on mainnet
+
+:::warning Charon is in an alpha state, and you should proceed only if you accept the risk, the [terms of use](https://obol.tech/terms.pdf), and have tested running a Distributed Validator on a testnet first.
+
+Distributed Validators created for goerli cannot be used on mainnet and vice versa. Please take caution when creating, backing up, and activating mainnet validators. :::
+
+This section is intended for users who wish to run their Distributed Validator on Ethereum mainnet.
+
+### Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+
+### Steps
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) repo and `cd` into the directory.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+```
+
+2. If you have already cloned the repo, make sure that it is [up-to-date](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/int/quickstart/update/README.md).
+3. Copy the `.env.sample` file to `.env`
+
+```
+cp -n .env.sample .env
+```
+
+4. In your `.env` file, uncomment and set values for `NETWORK` & `LIGHTHOUSE_CHECKPOINT_SYNC_URL`
+
+```
+...
+# Overrides network for all the relevant services.
+NETWORK=mainnet
+...
+# Checkpoint sync url used by lighthouse to fast sync.
+LIGHTHOUSE_CHECKPOINT_SYNC_URL=https://mainnet.checkpoint.sigp.io/
+...
+```
+
+Note that you can choose any checkpoint sync url from https://eth-clients.github.io/checkpoint-sync-endpoints/#mainnet.
+
+Your DV stack is now mainnet ready 🎉
+
+#### Remote mainnet beacon node
+
+:::warning Using a remote beacon node will impact the performance of your Distributed Validator and should be used sparingly. :::
+
+If you already have a mainnet beacon node running somewhere and you want to use that instead of running EL (`geth`) & CL (`lighthouse`) as part of the repo, you can disable these images. To do so, follow these steps:
+
+1. Copy the `docker-compose.override.yml.sample` file
+
+```
+cp -n docker-compose.override.yml.sample docker-compose.override.yml
+```
+
+2. Uncomment the `profiles: [disable]` section for both `geth` and `lighthouse`. The override file should now look like this:
+
+```
+services:
+ geth:
+ # Disable geth
+ profiles: [disable]
+ # Bind geth internal ports to host ports
+ #ports:
+ #- 8545:8545 # JSON-RPC
+ #- 8551:8551 # AUTH-RPC
+ #- 6060:6060 # Metrics
+
+ lighthouse:
+ # Disable lighthouse
+ profiles: [disable]
+ # Bind lighthouse internal ports to host ports
+ #ports:
+ #- 5052:5052 # HTTP
+ #- 5054:5054 # Metrics
+...
+```
+
+3. Then, uncomment and set the `CHARON_BEACON_NODE_ENDPOINTS` variable in the `.env` file to your mainnet beacon node's URL
+
+```
+...
+# Connect to one or more external beacon nodes. Use a comma separated list excluding spaces.
+CHARON_BEACON_NODE_ENDPOINTS=
+...
+```
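+
+A purely hypothetical example of how this variable might look once filled in (the URLs below are placeholders, not real endpoints):
+
+```
+CHARON_BEACON_NODE_ENDPOINTS=http://192.168.1.50:5052,https://beacon.example.com
+```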
+
+#### Exit a mainnet distributed validator
+
+If you want to exit your mainnet validator, you need to uncomment and set the `EXIT_EPOCH` variable in the `.env` file
+
+```
+...
+# Cluster wide consistent exit epoch. Set to latest for fork version, see `curl $BEACON_NODE/eth/v1/config/fork_schedule`
+# Currently, the latest fork is capella (epoch: 194048)
+EXIT_EPOCH=194048
+...
+```
+
+Note that `EXIT_EPOCH` should be `194048` after the [shapella fork](https://blog.ethereum.org/2023/03/28/shapella-mainnet-announcement).
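+
+If you want to double-check the current fork epoch against your own beacon node, one hedged sketch (assuming the node's HTTP API is reachable on `localhost:5052` and `jq` is installed) is to query the fork schedule mentioned in the comment above:
+
+```
+# Print the epoch of the most recent scheduled fork
+curl -s http://localhost:5052/eth/v1/config/fork_schedule | jq '.data[-1].epoch'
+```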
diff --git a/docs/versioned_docs/version-v0.17.1/int/quickstart/update.md b/docs/versioned_docs/version-v0.17.1/int/quickstart/update.md
new file mode 100644
index 0000000000..3187bbc0bf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/int/quickstart/update.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 6
+description: Update your DV cluster with the latest Charon release
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Update a DV
+
+It is highly recommended to upgrade your DV stack from time to time. This ensures that your node is secure, performant, up-to-date and you don't miss important hard forks.
+
+To do this, follow these steps:
+
+### Navigate to the node directory
+
+If you are running the `charon-distributed-validator-node` repo:
+
+```
+cd charon-distributed-validator-node
+```
+
+If you are running the `charon-distributed-validator-cluster` repo:
+
+```
+cd charon-distributed-validator-cluster
+```
+
+### Pull latest changes to the repo
+```
+git pull
+```
+
+### Create (or recreate) your DV stack
+```
+docker compose up -d --build
+```
+:::warning
+If you run more than one node in a DV Cluster, please take caution when upgrading them simultaneously, particularly if you are updating or changing the validator client used or recreating disks. It is recommended to update nodes sequentially to minimise liveness and safety risks.
+:::
+
+### Conflicts
+
+:::info
+You may get a `git conflict` error similar to this:
+:::
+```markdown
+error: Your local changes to the following files would be overwritten by merge:
+prometheus/prometheus.yml
+...
+Please commit your changes or stash them before you merge.
+```
+This is probably because you have made some changes to some of the files, for example to the `prometheus/prometheus.yml` file.
+
+To resolve this error, you can either:
+
+- Stash and reapply changes if you want to keep your custom changes:
+ ```
+ git stash # Stash your local changes
+ git pull # Pull the latest changes
+ git stash apply # Reapply your changes from the stash
+ ```
+ After reapplying your changes, manually resolve any conflicts that may arise between your changes and the pulled changes using a text editor or Git's conflict resolution tools.
+
+- Override changes and recreate configuration if you don't need to preserve your local changes and want to discard them entirely:
+ ```
+ git reset --hard # Discard all local changes and override with the pulled changes
+ docker-compose up -d --build # Recreate your DV stack
+ ```
+ After overriding the changes, you will need to recreate your DV stack using the updated files.
+ By following one of these approaches, you should be able to handle Git conflicts when pulling the latest changes to your repository, either preserving your changes or overriding them as per your requirements.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.17.1/intro.md b/docs/versioned_docs/version-v0.17.1/intro.md
new file mode 100644
index 0000000000..10a81b9143
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 20 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.17.1/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.17.1/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..3209b873f5
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sc/01_introducing-obol-managers.md
@@ -0,0 +1,89 @@
+---
+description: Smart contracts for managing Distributed Validators
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators. These contracts include:
+
+- Withdrawal Recipients: Contracts used for a validator's withdrawal address.
+- Split contracts: Contracts to split ether across multiple entities. Developed by [0xSplits](https://0xsplits.xyz)
+- Split controllers: Contracts that can mutate a splitter's configuration.
+
+Two key goals of validator reward management are:
+
+1. To be able to differentiate reward ether from principal ether such that node operators can be paid a percentage of the _reward_ they accrue for the principal provider rather than a percentage of _principal+reward_.
+2. To be able to withdraw the rewards in an ongoing manner without exiting the validator.
+
+Without access to the consensus layer state in the EVM to check a validator's status or balance, and because the incoming ether comes from an irregular state transition, neither of these requirements is easily satisfiable.
+
+The following sections outline different contracts that can be composed to form a solution for one or both goals.
+
+## Withdrawal Recipients
+
+Validators have two streams of revenue, the consensus layer rewards and the execution layer rewards. Withdrawal Recipients focus on the former, receiving the balance skimming from a validator with >32 ether in an ongoing manner, and receiving the principal of the validator upon exit.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+ receive() external payable {}
+
+ function withdraw(address payable recipient) public onlyOwner {
+ recipient.transfer(address(this).balance);
+ }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic example of a withdrawal recipient contract. It implements Open Zeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which pulls all ether from the address into the owner's address (or another address specified). This contract does no accounting on the amount of ether that is withdrawn, nor does it differentiate reward from principal.
+
+### Optimistic Withdrawal Recipient
+
+This is the primary withdrawal recipient Obol uses, as it allows for the separation of reward from principal, as well as permitting the ongoing withdrawal of accruing rewards.
+
+An Optimistic Withdrawal Recipient contract takes three inputs when deployed:
+
+- A _principal_ address: The address that controls where the principal ether will be transferred post-exit.
+- A _reward_ address: The address where the accruing reward ether is transferred to.
+- The amount of ether that makes up the principal.
+
+This contract **assumes that any ether that has appeared in its address since it was last able to do balance accounting is reward from a successful validator** (or number of validators) unless the change is > 16 ether. This means balance skimming is immediately claimable as reward, while an inflow of e.g. 31 ether is tracked as a return of principal (despite being slashed in this example).
+
+:::warning
+
+Worst-case mass slashings can theoretically exceed 16 ether. If this were to occur, the returned principal would be misclassified as a reward and distributed to the wrong address. This risk is the drawback that makes this contract variant 'optimistic'. If you intend to use this contract type, **it is important you understand and accept this risk**, however minute.
+
+The alternative is to use an 0xSplits waterfall contract, which won't allow the claiming of rewards until all principal ether has been pulled, meaning validators need to be exited for operators to claim their CL rewards.
+
+:::
+
+This contract fits both design goals and can be used with thousands of validators. If you deploy an Optimistic Withdrawal Recipient with a principal higher than you actually end up using, nothing goes wrong. If you activate more validators than you specified in your contract deployment, you will record too much ether as reward and will overpay your reward address with ether that was principal ether, not earned ether. Current iterations of this contract are not designed for editing the amount of principal set.
+
+### Exitable Withdrawal Recipient
+
+A much awaited feature for proof of stake Ethereum is the ability to trigger the exit of a validator with only the withdrawal address. This is tracked in [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002). Support for this feature will be inheritable in all other withdrawal recipient contracts. This will mitigate the risk to a principal provider of funds being stuck, or a validator being irrecoverably offline.
+
+## Split Contracts
+
+A split, or splitter, is a set of contracts that can divide ether or an ERC20 across a number of addresses. Splits are used in conjunction with withdrawal recipients. Execution Layer rewards for a DV are directed to a split address through the use of a `fee recipient` address. Splits can be either immutable, or mutable by way of an admin address capable of updating them.
+
+Further information about splits can be found on the 0xSplits team's [docs site](https://docs.0xsplits.xyz/).
+
+## Split Controllers
+
+Splits can be completely edited through the use of the `controller` address, however, total editability of a split is not always wanted. A permissive controller and a restrictive controller are given as examples below.
+
+### (Gnosis) SAFE wallet
+
+A [SAFE](https://safe.global/) is a common method to administrate a mutable split. The most well-known deployment of this pattern is the [protocol guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html). The SAFE can arbitrarily update the split to any set of addresses with any valid set of percentages.
+
+### Immutable Split Controller
+
+This is a contract that updates one split configuration with another, exactly once. Only a permissioned address can trigger the change. This contract is suitable for changing a split at an unknown point in future to a configuration pre-defined at deployment.
diff --git a/docs/versioned_docs/version-v0.17.1/sc/README.md b/docs/versioned_docs/version-v0.17.1/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.17.1/sec/README.md b/docs/versioned_docs/version-v0.17.1/sec/README.md
new file mode 100644
index 0000000000..aeb3b02cce
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sec/README.md
@@ -0,0 +1,2 @@
+# sec
+
diff --git a/docs/versioned_docs/version-v0.17.1/sec/bug-bounty.md b/docs/versioned_docs/version-v0.17.1/sec/bug-bounty.md
new file mode 100644
index 0000000000..48c52d89b4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sec/bug-bounty.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 2
+description: Bug Bounty Policy
+---
+
+# Obol Bug Bounty
+
+## Overview
+
+Obol Labs is committed to ensuring the security of our distributed validator software and services. As part of our commitment to security, we have established a bug bounty program to encourage security researchers to report vulnerabilities in our software and services to us so that we can quickly address them.
+
+## Eligibility
+
+To participate in the Bug Bounty Program you must:
+
+- Not be a resident of any country that does not allow participation in these types of programs
+- Be at least 14 years old and have legal capacity to agree to these terms and participate in the Bug Bounty Program
+- Have permission from your employer to participate
+- Not be (for the previous 12 months) an Obol Labs employee, immediate family member of an Obol employee, Obol contractor, or Obol service provider.
+
+## Scope
+
+The bug bounty program applies to software and services that are built by Obol. Only submissions under the following domains are eligible for rewards:
+
+- Charon DVT Middleware
+- DV Launchpad
+- Obol’s Public API
+- Obol’s Smart Contracts and the contracts they depend on.
+- Obol’s Public Relay
+
+Additionally, all vulnerabilities that require or are related to the following are out of scope:
+
+- Social engineering
+- Rate Limiting (Non-critical issues)
+- Physical security
+- Non-security-impacting UX issues
+- Vulnerabilities or weaknesses in third party applications that integrate with Obol
+- The Obol website or the Obol infrastructure in general is NOT part of this bug bounty program.
+
+## Rules
+
+- Bug has not been publicly disclosed
+- Vulnerabilities that have been previously submitted by another contributor or already known by the Obol development team are not eligible for rewards
+- The size of the bounty payout depends on the assessment of the severity of the exploit. Please refer to the rewards section below for additional details
+- Bugs must be reproducible in order for us to verify the vulnerability. A submission with a working proof of concept is necessary
+- Rewards and the validity of bugs are determined by the Obol security team and any payouts are made at their sole discretion
+- Terms and conditions of the Bug Bounty program can be changed at any time at the discretion of Obol
+- Details of any valid bugs may be shared with complementary protocols utilised in the Obol ecosystem in order to promote ecosystem cohesion and safety.
+
+## Rewards
+
+The rewards for participating in our bug bounty program will be based on the severity and impact of the vulnerability discovered. We will evaluate each submission on a case-by-case basis, and the rewards will be at Obol’s sole discretion.
+
+### Low: up to $500
+
+A Low-level vulnerability is one that has a limited impact and can be easily fixed. Unlikely to have a meaningful impact on availability, integrity, and/or loss of funds.
+
+- Low impact, medium likelihood
+- Medium impact, low likelihood
+
+Examples:
+
+- Attacker can sometimes put a charon node in a state that causes it to drop one out of every one hundred attestations made by a validator
+
+### Medium: up to $1,000
+
+A Medium-level vulnerability is one that has a moderate impact and requires a more significant effort to fix. Possible to have an impact on validator availability, integrity, and/or loss of funds.
+
+- High impact, low likelihood
+- Medium impact, medium likelihood
+- Low impact, high likelihood
+
+Examples:
+
+- Attacker can successfully conduct eclipse attacks on the cluster nodes with peer-ids with 4 leading zero bytes.
+
+### High: up to $4,000
+
+A High-level vulnerability is one that has a significant impact on the security of the system and requires a significant effort to fix. Likely to have impact on availability, integrity, and/or loss of funds.
+
+- High impact, medium likelihood
+- Medium impact, high likelihood
+
+Examples:
+
+- Attacker can successfully partition the cluster and keep the cluster offline.
+
+### Critical: up to $10,000
+
+A Critical-level vulnerability is one that has a severe impact on the security of the in-production system and requires immediate attention to fix. Highly likely to have a material impact on availability, integrity, and/or loss of funds.
+
+- High impact, high likelihood
+
+Examples:
+
+- Attacker can successfully conduct remote code execution in charon client to exfiltrate BLS private key material.
+
+We may offer rewards in the form of cash, merchandise, or recognition. We will only award one reward per vulnerability discovered, and we reserve the right to deny a reward if we determine that the researcher has violated the terms and conditions of this policy.
+
+## Submission process
+
+Please email security@obol.tech
+
+Your report should include the following information:
+
+- Description of the vulnerability and its potential impact
+- Steps to reproduce the vulnerability
+- Proof of concept code, screenshots, or other supporting documentation
+- Your name, email address, and any contact information you would like to provide.
+
+Reports that do not include sufficient detail will not be eligible for rewards.
+
+## Disclosure Policy
+
+Obol Labs will disclose the details of the vulnerability and the researcher’s identity (with their consent) only after we have remediated the vulnerability and issued a fix. Researchers must keep the details of the vulnerability confidential until Obol Labs has acknowledged and remediated the issue.
+
+## Legal Compliance
+
+All participants in the bug bounty program must comply with all applicable laws, regulations, and policy terms and conditions. Obol will not be held liable for any unlawful or unauthorised activities performed by participants in the bug bounty program.
+
+We will not take any legal action against security researchers who discover and report security vulnerabilities in accordance with this bug bounty policy. We do, however, reserve the right to take legal action against anyone who violates the terms and conditions of this policy.
+
+## Non-Disclosure Agreement
+
+All participants in the bug bounty program will be required to sign a non-disclosure agreement (NDA) before they are given access to closed source software and services for testing purposes.
diff --git a/docs/versioned_docs/version-v0.17.1/sec/contact.md b/docs/versioned_docs/version-v0.17.1/sec/contact.md
new file mode 100644
index 0000000000..e66e1663e2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sec/contact.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 3
+description: Security details for the Obol Network
+---
+
+# Contacts
+
+Please email security@obol.tech to report a security incident, vulnerability, bug or inquire about Obol's security.
+
+Also, visit the [obol security repo](https://github.com/ObolNetwork/obol-security) for more details.
diff --git a/docs/versioned_docs/version-v0.17.1/sec/ev-assessment.md b/docs/versioned_docs/version-v0.17.1/sec/ev-assessment.md
new file mode 100644
index 0000000000..1e7c6e4f0b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sec/ev-assessment.md
@@ -0,0 +1,295 @@
+---
+sidebar_position: 4
+description: Software Development Security Assessment
+---
+
+# ev-assessment
+
+## Software Development at Obol
+
+When hardening a project's technical security, team members' operational security and the security of the team's software development practices are some of the most critical areas to secure. Many hacks and compromises in the space to date have been a result of these attack vectors rather than exploits of the software itself.
+
+With this in mind, in January 2023 the Obol team retained the expertise of Ethereal Ventures' security researcher Alex Wade to interview key stakeholders and produce a report on the team's Software Development Lifecycle.
+
+The page below is a result of that report. Some sensitive information has been redacted, and responses to the recommendations have been added, detailing the actions the Obol team has taken to mitigate what was highlighted.
+
+## Obol Report
+
+**Prepared by: Alex Wade (Ethereal Ventures)** **Date: Jan 2023**
+
+Over the past month, I worked with Obol to review their software development practices in preparation for their upcoming security audits. My goals were to review and analyze:
+
+* Software development processes
+* Vulnerability disclosure and escalation procedures
+* Key personnel risk
+
+The information in this report was collected through a series of interviews with Obol’s project leads.
+
+### Contents:
+
+* Background Info
+* Analysis - Cluster Setup and DKG
+ * Key Risks
+ * Potential Attack Scenarios
+* Recommendations
+ * R1: Users should deploy cluster contracts through a known on-chain entry point
+ * R2: Users should deposit to the beacon chain through a pool contract
+ * R3: Raise the barrier to entry to push an update to the Launchpad
+* Additional Notes
+ * Vulnerability Disclosure
+ * Key Personnel Risk
+
+### Background Info
+
+**Each team lead was asked to describe Obol in terms of its goals, objectives, and key features.**
+
+#### What is Obol?
+
+Obol builds DVT (Distributed Validator Technology) for Ethereum.
+
+#### What is Obol’s goal?
+
+Obol’s goal is to solve a classic distributed systems problem: uptime.
+
+Rather than requiring Ethereum validators to stake on their own, Obol allows groups of operators to stake together. Using Obol, a single validator can be run cooperatively by multiple people across multiple machines.
+
+In theory, this architecture provides validators with some redundancy against common issues: server and power outages, client failures, and more.
+
+#### What are Obol’s objectives?
+
+Obol’s business objective is to provide base-layer infrastructure to support a distributed validator ecosystem. As Obol provides base layer technology, other companies and projects will build on top of Obol.
+
+Obol’s business model is to eventually capture a portion of the revenue generated by validators that use Obol infrastructure.
+
+#### What is Obol’s product?
+
+Obol’s product consists of three main components, each run by its own team: a webapp, a client, and smart contracts.
+
+* [DV Launchpad](../dvl/intro.md): A webapp to create and manage distributed validators.
+* [Charon](../charon/intro.md): A middleware client that enables operators to run distributed validators.
+* [Solidity](../sc/01_introducing-obol-managers.md): Withdrawal and fee recipient contracts for use with distributed validators.
+
+### Analysis - Cluster Setup and DKG
+
+The Launchpad guides users through the process of creating a cluster, which defines important parameters like the validator’s fee recipient and withdrawal addresses, as well as the identities of the operators in the cluster. In order to ensure their cluster configuration is correct, users need to rely on a few different factors.
+
+**First, users need to trust the Charon client** to perform the DKG correctly, and validate things like:
+
+* Config file is well-formed and is using the expected version
+* Signatures and ENRs from other operators are valid
+* Cluster config hash is correct
+* DKG succeeds in producing valid signatures
+* Deposit data is well-formed and is correctly generated from the cluster config and DKG.
+
+However, Charon’s validation is limited to the digital: signature checks, cluster file syntax, etc. It does NOT help would-be operators determine whether the other operators listed in their cluster definition are the real people with whom they intend to start a DVT cluster. So -
+
+**Second, users need to come to social consensus with fellow operators.** While the cluster is being set up, it’s important that each operator is an active participant. Each member of the group must validate and confirm that:
+
+* the cluster file correctly reflects their address and node identity, and reflects the information they received from fellow operators
+* the cluster parameters are expected – namely, the number of validators and signing threshold
+
+**Finally, users need to perform independent validation.** Each user should perform their own validation of the cluster definition:
+
+* Is my information correct? (address and ENR)
+* Does the information I received from the group match the cluster definition?
+* Is the ETH2 deposit data correct, and does it match the information in the cluster definition?
+* Are the withdrawal and fee recipient addresses correct?
+
+These final steps are potentially the most difficult, and may require significant technical knowledge.
+
+### Key Risks
+
+#### 1. Validation of Contract Deployment and Deposit Data Relies Heavily on Launchpad
+
+From my interviews, it seems that the user deploys both the withdrawal and fee recipient contracts through the Launchpad.
+
+What I’m picturing is that during the first parts of the cluster setup process, the user is prompted to sign one or more transactions deploying the withdrawal and fee recipient contracts to mainnet. The Launchpad apparently uses an npm package to deploy these contracts: `0xsplits/splits-sdk`, which I assume provides either JSON artifacts or a factory address on chain. The Launchpad then places the deployed contracts into the cluster config file, and the process moves on.
+
+If an attacker has published a malicious update to the Launchpad (or compromised an underlying dependency), the contracts deployed by the Launchpad may be malicious. The questions I’d like to pose are:
+
+* How does the group creator know the Launchpad deployed the correct contracts?
+* How does the rest of the group know the creator deployed the contracts through the Launchpad?
+
+My understanding is that this ultimately comes down to the independent verification that each of the group’s members performs during and after the cluster’s setup phase.
+
+At its worst, this verification might consist solely of the cluster creator confirming to the others that, yes, those addresses match the contracts I deployed through the Launchpad.
+
+A more sophisticated user might verify that not only do the addresses match, but the deployed source code looks roughly correct. However, this step is far out of the realm of many would-be validators. To be really certain that the source code is correct would require auditor-level knowledge.
+
+The risk is that:
+
+* the deployed contracts are NOT the correctly-configured 0xsplits waterfall/fee splitter contracts
+* most users are ill-equipped to make this determination themselves
+* we don’t want to trust the Launchpad as the single source of truth
+
+In the worst case, the cluster may end up depositing with malicious withdrawal or fee recipient credentials. If unnoticed, this may net an attacker the entire withdrawal amount, once the cluster exits.
+
+Note that the same (or similar) risks apply to validation of deposit data, which has the potential to be similarly difficult. I’m a little fuzzy on which part of the Obol stack actually generates the deposit data / deposit transaction, so I can’t speak to this as much. However, I think the mitigation for both of these is roughly the same - read on!
+
+**Mitigation:**
+
+It’s certainly a good idea to make it harder to deploy malicious updates to the Launchpad, but this may not be entirely possible. A higher-yield strategy may be to educate and empower users to perform independent validation of the DVT setup process - without relying on information fed to them by Charon and the Launchpad.
+
+I’ve outlined some ideas for this in #R1 and #R2.
+
+#### 2. Social Consensus, aka “Who sends the 32 ETH?”
+
+Depositing to the beacon chain requires a total of 32 ETH. Obol’s product allows multiple operators to act as a single validator together, which means would-be operators need to agree on how to fund the 32 ETH needed to initiate the deposit.
+
+It is my understanding that currently, this process comes down to trust and loose social consensus. Essentially, the group needs to decide who chips in what amount together, and then trust someone to take the 32 ETH and complete the deposit process correctly (without running away with the money).
+
+Granted, the initial launch of Obol will be open only to a small group of people as the kinks in the system get worked out - but in preparation for an eventual public release, the deposit process needs to be much simpler and far less reliant on trust.
+
+Mitigation: See #R2.
+
+**Potential Attack Scenarios**
+
+During the interview process, I learned that each of Obol’s core components has its own GitHub repo, and that each repo has roughly the same structure in terms of organization and security policies. For each repository:
+
+* There are two overall GitHub organization administrators, and a number of people have administrative control over individual repositories.
+* In order to merge PRs, the submitter needs:
+ * CI/CD checks to pass
+ * Review from one person (anyone at Obol)
+
+Of course, admin access also means the ability to change these settings - so repo admins could theoretically merge PRs without checks passing and without review/approval, and organization admins can control the full GitHub organization.
+
+The following scenarios describe the impact an attack may have.
+
+**1. Publishing a malicious version of the Launchpad, or compromising an underlying dependency**
+
+* Reward: High
+* Difficulty: Medium-Low
+
+As described in Key Risks, publishing a malicious version of the Launchpad has the potential to net the largest payout for an attacker. By tampering with the cluster’s deposit data or withdrawal/fee recipient contracts, an attacker stands to gain 32 ETH or more per compromised cluster.
+
+During the interviews, I learned that merging PRs to main in the Launchpad repo triggers an action that publishes to the site. Given that merges can be performed by an authorized Obol developer, this makes the developers prime targets for social engineering attacks.
+
+Additionally, the use of the `0xsplits/splits-sdk` NPM package to aid in contract deployment may represent a supply chain attack vector. It may be that this applies to other Launchpad dependencies as well.
+
+In any case, with a fairly large surface area and high potential reward, this scenario represents a credible risk to users during the cluster setup and DKG process.
+
+See #R1, #R2, and #R3 for some ideas to address this scenario.
+
+**2. Publishing a malicious version of Charon to new operators**
+
+* Reward: Medium
+* Difficulty: High
+
+During the cluster setup process, Charon is responsible both for validating the cluster configuration produced by the Launchpad, as well as performing a DKG ceremony between a group’s operators.
+
+If new operators use a malicious version of Charon to perform this process, it may be possible to tamper with both of these responsibilities, or even get access to part or all of the underlying validator private key created during DKG.
+
+However, the difficulty of this type of attack seems quite high. An attacker would first need to carry out the same type of social engineering attack described in scenario 1 to publish and tag a new version of Charon. Crucially, users would also need to install the malicious version - unlike the Launchpad, an update here is not pushed directly to users.
+
+As long as Obol is clear and consistent with communication around releases and versioning, it seems unlikely that a user would both install a brand-new, unannounced release, and finish the cluster setup process before being warned about the attack.
+
+**3. Publishing a malicious version of Charon to existing validators**
+
+* Reward: Low
+* Difficulty: High
+
+Once a distributed validator is up and running, much of the danger has passed. As a middleware client, Charon sits between a validator's consensus client and validator client. As such, it shouldn't have direct access to a validator's withdrawal keys or signing keys.
+
+If existing validators update to a malicious version of Charon, the worst thing an attacker could theoretically do is likely to get the validator slashed. However, assuming Charon has no access to any private keys, this would be predicated on one or more validator clients connected to Charon also failing to prevent the signing of a slashable message. In practice, a compromised Charon client is more likely to pose liveness risks than safety risks.
+
+This is not likely to be particularly motivating to potential attackers - and paired with the high difficulty described above, this scenario seems unlikely to cause significant issues.
+
+### Recommendations
+
+#### R1: Users should deploy cluster contracts through a known on-chain entry point
+
+During setup, users should only sign one transaction via the Launchpad - to a contract located at an Obol-held ENS (e.g. `launchpad.obol.eth`). This contract should deploy everything needed for the cluster to operate, like the withdrawal and fee recipient contracts. It should also initialize them with the provided reward split configuration (and any other config needed).
+
+Rather than using an NPM library to supply a factory address or JSON artifacts, this has the benefit of being both:
+
+* **Harder to compromise:** as long as the user knows launchpad.obol.eth, it’s pretty difficult to trick them into deploying the wrong contracts.
+* **Easier to validate** for non-technical users: the Obol contract can be queried for deployment information via etherscan. For example:
+
+
+
+Note that in order for this to be successful, Obol needs to provide detailed steps for users to perform manual validation of their cluster setups. Users should be able to treat this as a “checklist:”
+
+* Did I send a transaction to `launchpad.obol.eth`?
+* Can I use the ENS name to locate and query the deployment manager contract on etherscan?
+* If I input my address, does etherscan report the configuration I was expecting?
+ * withdrawal address matches
+ * fee recipient address matches
+ * reward split configuration matches
+
+As long as these steps are plastered all over the place (i.e. not just on the Launchpad) and Obol puts in effort to educate users about the process, this approach should allow users to validate cluster configurations themselves - regardless of Launchpad or NPM package compromise.
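+
+To make the verification checklist above concrete, here is a rough, purely illustrative sketch of what such an entry-point contract might look like. The factory interfaces, contract name, and function signatures are assumptions made for illustration only and do not correspond to Obol's actual contracts or the 0xSplits ABI:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+// Hypothetical factory interfaces: the real OWR and 0xSplits factory ABIs differ.
+interface IWithdrawalRecipientFactory {
+    function createRecipient(address principalRecipient, address rewardRecipient) external returns (address);
+}
+
+interface ISplitFactory {
+    function createSplit(address[] calldata accounts, uint32[] calldata percentAllocations) external returns (address);
+}
+
+/// Illustrative entry point that would sit behind launchpad.obol.eth.
+contract LaunchpadEntryPoint {
+    struct Deployment {
+        address withdrawalRecipient;
+        address rewardSplit;
+        bytes32 splitConfigHash; // hash of the agreed reward split configuration
+    }
+
+    IWithdrawalRecipientFactory public immutable owrFactory;
+    ISplitFactory public immutable splitFactory;
+
+    // creator address => recorded deployment, queryable on etherscan
+    mapping(address => Deployment) public deploymentOf;
+
+    event ClusterDeployed(address indexed creator, address withdrawalRecipient, address rewardSplit);
+
+    constructor(IWithdrawalRecipientFactory _owrFactory, ISplitFactory _splitFactory) {
+        owrFactory = _owrFactory;
+        splitFactory = _splitFactory;
+    }
+
+    /// One transaction from the cluster creator deploys and records everything.
+    function deployCluster(
+        address principalRecipient,
+        address[] calldata accounts,
+        uint32[] calldata percentAllocations
+    ) external returns (address owr, address rewardSplit) {
+        rewardSplit = splitFactory.createSplit(accounts, percentAllocations);
+        owr = owrFactory.createRecipient(principalRecipient, rewardSplit);
+        deploymentOf[msg.sender] = Deployment(owr, rewardSplit, keccak256(abi.encode(accounts, percentAllocations)));
+        emit ClusterDeployed(msg.sender, owr, rewardSplit);
+    }
+}
+```
+
+With something like this in place, the checklist item "does etherscan report the configuration I was expecting?" reduces to reading `deploymentOf(creatorAddress)` on the verified entry-point contract.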
+
+**Obol’s response:**
+
+Roadmapped: add the ability for the OWR factory to claim and transfer its reverse resolution ownership.
+
+#### R2: Users should deposit to the beacon chain through a pool contract
+
+Once cluster setup and DKG is complete, a group of operators should deposit to the beacon chain by way of a pool contract. The pool contract should:
+
+* Accept ETH from any of the group's operators
+* Stop accepting ETH when the contract's balance hits (32 ETH \* number of validators)
+* Make it easy to pull the trigger and deposit to the beacon chain once the critical balance has been reached
+* Offer all of the group’s operators a “bail” option at any point before the deposit is triggered
+
+Ideally, this contract is deployed during the setup process described in #R1, as another step toward allowing users to perform independent validation of the process.
+
+Rather than relying on social consensus, this should:
+
+* Allow operators to fund the validator without needing to trust any single party
+* Make it harder to mess up the deposit or send funds to some malicious actor, as the pool contract should know what the beacon deposit contract address is
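+
+As a rough illustration of the pool behaviour described above, a minimal pool contract might look like the sketch below. It assumes a single validator for brevity; the contract name, error strings, and overall structure are illustrative assumptions, with only the beacon chain deposit contract interface taken from the canonical deposit contract:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+// Canonical beacon chain deposit contract interface (one 32 ETH deposit per call).
+interface IDepositContract {
+    function deposit(
+        bytes calldata pubkey,
+        bytes calldata withdrawal_credentials,
+        bytes calldata signature,
+        bytes32 deposit_data_root
+    ) external payable;
+}
+
+/// Minimal, illustrative pool for a single validator.
+contract ClusterDepositPool {
+    uint256 public constant TARGET = 32 ether;
+    IDepositContract public immutable depositContract; // fixed at deployment, so funds cannot be redirected
+    mapping(address => uint256) public contributions;
+    bool public depositTriggered;
+
+    constructor(IDepositContract _depositContract) {
+        depositContract = _depositContract;
+    }
+
+    /// Accept ETH from any operator until the target balance is reached.
+    receive() external payable {
+        require(!depositTriggered, "deposit already triggered");
+        require(address(this).balance <= TARGET, "pool full");
+        contributions[msg.sender] += msg.value;
+    }
+
+    /// "Bail" option: withdraw a contribution at any point before the deposit is triggered.
+    function bail() external {
+        require(!depositTriggered, "deposit already triggered");
+        uint256 amount = contributions[msg.sender];
+        contributions[msg.sender] = 0;
+        payable(msg.sender).transfer(amount);
+    }
+
+    /// Once the target is reached, anyone can push the cluster's pre-agreed
+    /// deposit data to the beacon chain deposit contract.
+    function triggerDeposit(
+        bytes calldata pubkey,
+        bytes calldata withdrawalCredentials,
+        bytes calldata signature,
+        bytes32 depositDataRoot
+    ) external {
+        require(address(this).balance >= TARGET, "target not reached");
+        depositTriggered = true;
+        depositContract.deposit{value: TARGET}(pubkey, withdrawalCredentials, signature, depositDataRoot);
+    }
+}
+```
+
+A production version would also pin the expected deposit data (public key and withdrawal credentials) at deployment, so that the trigger cannot be called with substituted credentials.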
+
+**Obol’s response:**
+
+Roadmapped: give the operators a streamlined, secure way to deposit Ether (ETH) to the beacon chain collectively, satisfying specific conditions:
+
+* Pooling from multiple operators.
+* Ceasing to accept ETH once a critical balance is reached, defined by 32 ETH multiplied by the number of validators.
+* Facilitating an immediate deposit to the beacon chain once the target balance is reached.
+* Providing a 'bail-out' option for operators to withdraw their contribution at any point before the group's deposit to the beacon chain is initiated.
+
+#### R3: Raise the barrier to entry to push an update to the Launchpad
+
+Currently, any repo admin can publish an update to the Launchpad unchecked.
+
+Given the risks and scenarios outlined above, consider amending this process so that the sole compromise of either admin is not sufficient to publish to the Launchpad site. It may be worthwhile to require both admins to approve publishing to the site.
+
+Along with simply adding additional prerequisites to publish an update to the Launchpad, ensure that both admins have enabled some level of multi-factor authentication on their GitHub accounts.
+
+**Obol’s response:**
+
+We removed individuals' ability to merge changes without review, enforced MFA and signed commits, and employed the Bulldozer bot to merge PRs automatically once all checks pass.
+
+### Additional Notes
+
+#### Vulnerability Disclosure
+
+During the interviews, I got some conflicting information when asking about Obol’s vulnerability disclosure process.
+
+Some interviewees directed me towards Obol’s security repo, which details security contacts: [ObolNetwork/obol-security](https://github.com/ObolNetwork/obol-security), while some answered that disclosure should happen primarily through Immunefi. While these may both be part of the correct answer, it seems that Obol’s disclosure process may not be as well-defined as it could be. Here are some notes:
+
+* I wasn’t able to find information about Obol on Immunefi. I also didn’t find any reference to a security contact or disclosure policy in Obol’s docs.
+* When looking into the obol security repo, I noticed broken links in a few of the sections in README.md and SECURITY.md:
+ * Security policy
+ * More Information
+* Some of the text and links in the Bug Bounty Program don’t seem to apply to Obol (see text referring to Vaults and Strategies).
+* The Receiving Disclosures section does not include a public key with which submitters can encrypt vulnerability information.
+
+It’s my understanding that these items are probably lower priority due to Obol’s initial closed launch - but these should be squared away soon!
+
+**Obol’s response:**
+
+We addressed all of the concerns in the obol-security repository:
+
+1. The security policy link has been fixed
+2. The Bug Bounty program received an overhaul and clearly states rewards, eligibility, and scope
+3. We list two GPG public keys with which submitters can encrypt vulnerability reports.
+
+We are actively working towards integrating Immunefi in our security pipeline.
+
+#### Key Personnel Risk
+
+A final section on the specifics of key personnel risk faced by Obol has been redacted from the original report. Particular areas of control highlighted were github org ownership and domain name control.
+
+**Obol’s response:**
+
+These risks have been mitigated by adding an extra admin to the GitHub org and by setting up a second DNS stack in case the primary one fails, along with general OpSec improvements.
diff --git a/docs/versioned_docs/version-v0.17.1/sec/overview.md b/docs/versioned_docs/version-v0.17.1/sec/overview.md
new file mode 100644
index 0000000000..cca3e28f7b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sec/overview.md
@@ -0,0 +1,33 @@
+---
+sidebar_position: 1
+description: Security Overview
+---
+
+# Overview
+
+This page serves as an overview of the Obol Network from a security point of view.
+
+This page is updated quarterly. The last update was on 2023-10-01.
+
+## Table of Contents
+
+1. [List of Security Audits and Assessments](overview.md#list-of-security-audits-and-assessments)
+2. [Security Focused Documents](overview.md#security-focused-documents)
+3. [Bug Bounty Details](bug-bounty.md)
+
+## List of Security Audits and Assessments
+
+The completed audits reports are linked [here](https://github.com/ObolNetwork/obol-security/tree/main/audits).
+
+* A review of Obol Labs [development processes](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/sec/ev-assessment/README.md) by Ethereal Ventures
+* A [security assessment](https://github.com/ObolNetwork/obol-security/blob/f9d7b0ad0bb8897f74ccb34cd4bd83012ad1d2b5/audits/Sigma_Prime_Obol_Network_Charon_Security_Assessment_Report_v2_1.pdf) of Charon by [Sigma Prime](https://sigmaprime.io/).
+* A [solidity audit](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/sec/smart_contract_audit/README.md) of the Obol Manager Contracts by [Zach Obront](https://zachobront.com/).
+* A second audit of Charon is planned for Q4 2023.
+
+## Security Focused Documents
+
+* A [threat model](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.17.1/sec/threat_model/README.md) for a DV middleware client like charon.
+
+## Bug Bounty
+
+Information related to disclosing bugs and vulnerabilities to Obol can be found on [the next page](bug-bounty.md).
diff --git a/docs/versioned_docs/version-v0.17.1/sec/smart_contract_audit.md b/docs/versioned_docs/version-v0.17.1/sec/smart_contract_audit.md
new file mode 100644
index 0000000000..310f843be2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sec/smart_contract_audit.md
@@ -0,0 +1,477 @@
+---
+sidebar_position: 5
+description: Smart Contract Audit
+---
+
+# Smart Contract Audit
+
+|  |  |
+| --- | --- |
+|  | Obol Audit Report<br />Obol Manager Contracts<br />Prepared by: Zach Obront, Independent Security Researcher<br />Date: Sept 18 to 22, 2023 |
+
+## About **Obol**
+
+The Obol Network is an ecosystem for trust minimized staking that enables people to create, test, run & co-ordinate distributed validators.
+
+The Obol Manager contracts are responsible for distributing validator rewards and withdrawals among the validator and node operators involved in a distributed validator.
+
+## About **zachobront**
+
+Zach Obront is an independent smart contract security researcher. He serves as a Lead Senior Watson at Sherlock, a Security Researcher at Spearbit, and has identified multiple critical severity bugs in the wild, including in a Top 5 Protocol on Immunefi. You can say hi on Twitter at [@zachobront](http://twitter.com/zachobront).
+
+## Summary & Scope
+
+The [ObolNetwork/obol-manager-contracts](https://github.com/ObolNetwork/obol-manager-contracts/) repository was audited at commit [50ce277919723c80b96f6353fa8d1f8facda6e0e](https://github.com/ObolNetwork/obol-manager-contracts/tree/50ce277919723c80b96f6353fa8d1f8facda6e0e).
+
+The following contracts were in scope:
+
+* src/controllers/ImmutableSplitController.sol
+* src/controllers/ImmutableSplitControllerFactory.sol
+* src/lido/LidoSplit.sol
+* src/lido/LidoSplitFactory.sol
+* src/owr/OptimisticWithdrawalReceiver.sol
+* src/owr/OptimisticWithdrawalReceiverFactory.sol
+
+After completion of the fixes, the [2f4f059bfd145f5f05d794948c918d65d222c3a9](https://github.com/ObolNetwork/obol-manager-contracts/tree/2f4f059bfd145f5f05d794948c918d65d222c3a9) commit was reviewed. After this review, the updated Lido fee share system in [PR #96](https://github.com/ObolNetwork/obol-manager-contracts/pull/96/files) was reviewed.
+
+## Summary of Findings
+
+| Identifier | Title | Severity | Fixed |
+| :-----------------------------------------------------------------------------------------------------------------------: | -------------------------------------------------------------------------------------- | :-----------: | :---: |
+| [M-01](smart_contract_audit.md#m-01-future-fees-may-be-skirted-by-setting-a-non-eth-reward-token) | Future fees may be skirted by setting a non-ETH reward token | Medium | ✓ |
+| [M-02](smart_contract_audit.md#m-02-splits-with-256-or-more-node-operators-will-not-be-able-to-switch-on-fees) | Splits with 256 or more node operators will not be able to switch on fees | Medium | ✓ |
+| [M-03](smart_contract_audit.md#m-03-in-a-mass-slashing-event-node-operators-are-incentivized-to-get-slashed) | In a mass slashing event, node operators are incentivized to get slashed | Medium | |
+| [L-01](smart_contract_audit.md#l-01-obol-fees-will-be-applied-retroactively-to-all-non-distributed-funds-in-the-splitter) | Obol fees will be applied retroactively to all non-distributed funds in the Splitter | Low | ✓ |
+| [L-02](smart_contract_audit.md#l-02-if-owr-is-used-with-rebase-tokens-and-theres-a-negative-rebase-principal-can-be-lost) | If OWR is used with rebase tokens and there's a negative rebase, principal can be lost | Low | ✓ |
+| [L-03](smart_contract_audit.md#l-03-lidosplit-can-receive-eth-which-will-be-locked-in-contract) | LidoSplit can receive ETH, which will be locked in contract | Low | ✓ |
+| [L-04](smart_contract_audit.md#l-04-upgrade-to-latest-version-of-solady-to-fix-libclone-bug) | Upgrade to latest version of Solady to fix LibClone bug | Low | ✓ |
+| [G-01](smart_contract_audit.md#g-01-steth-and-wsteth-addresses-can-be-saved-on-implementation-to-save-gas) | stETH and wstETH addresses can be saved on implementation to save gas | Gas | ✓ |
+| [G-02](smart_contract_audit.md#g-02-owr-can-be-simplified-and-save-gas-by-not-tracking-distributedfunds) | OWR can be simplified and save gas by not tracking distributedFunds | Gas | ✓ |
+| [I-01](smart_contract_audit.md#i-01-strong-trust-assumptions-between-validators-and-node-operators) | Strong trust assumptions between validators and node operators | Informational | |
+| [I-02](smart_contract_audit.md#i-02-provide-node-operator-checklist-to-validate-setup) | Provide node operator checklist to validate setup | Informational | |
+
+## Detailed Findings
+
+### \[M-01] Future fees may be skirted by setting a non-ETH reward token
+
+Fees are planned to be implemented on the `rewardRecipient` splitter by updating to a new fee structure using the `ImmutableSplitController`.
+
+It is assumed that all rewards will flow through the splitter, because (a) all distributed rewards less than 16 ETH are sent to the `rewardRecipient`, and (b) even if a team waited for rewards to be greater than 16 ETH, rewards sent to the `principalRecipient` are capped at the `amountOfPrincipalStake`.
+
+This creates a fairly strong guarantee that reward funds will flow to the `rewardRecipient`. Even if a user were to set their `amountOfPrincipalStake` high enough that the `principalRecipient` could receive unlimited funds, the Obol team could call `distributeFunds()` when the balance got near 16 ETH to ensure fees were paid.
+
+However, if the user selects a non-ETH token, all ETH will be withdrawable only through the `recoverFunds()` function. If they set up a split with their node operators as their `recoveryAddress`, all funds will be withdrawable via `recoverFunds()` without ever touching the `rewardRecipient` or paying a fee.
+
+#### Recommendation
+
+I would recommend removing the ability to use a non-ETH token from the `OptimisticWithdrawalRecipient`. Alternatively, if it feels like it may be a use case that is needed, it may make sense to always include ETH as a valid token, in addition to any `OWRToken` set.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[M-02] Splits with 256 or more node operators will not be able to switch on fees
+
+0xSplits is used to distribute rewards across node operators. All Splits are deployed with an ImmutableSplitController, which is given permissions to update the split one time to add a fee for Obol at a future date.
+
+The Factory deploys these controllers as Clones with Immutable Args, hard coding the `owner`, `accounts`, `percentAllocations`, and `distributorFee` for the future update. This data is packed as follows:
+
+```solidity
+ function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+ ) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
+ uint256[] memory recipients = new uint[](recipientsSize);
+
+ uint256 i = 0;
+ for (; i < recipientsSize;) {
+ recipients[i] = (uint256(percentAllocations[i]) << ADDRESS_BITS) | uint256(uint160(accounts[i]));
+
+ unchecked {
+ i++;
+ }
+ }
+
+ data = abi.encodePacked(splitMain, distributorFee, owner, uint8(recipientsSize), recipients);
+ }
+```
+
+In the process, `recipientsSize` is unsafely downcast to a `uint8`, which has a maximum value of `255`. As a result, any value of 256 or greater will overflow, and a lower value of `recipients.length % 256` will be passed as `recipientsSize`.
+
+When the Controller is deployed, the full list of `percentAllocations` is passed to the `validSplit` check, which will pass as expected. However, later, when `updateSplit()` is called, the `getNewSplitConfiguration()` function will only return the first `recipientsSize` accounts, ignoring the rest.
+
+```solidity
+ function getNewSplitConfiguration()
+ public
+ pure
+ returns (address[] memory accounts, uint32[] memory percentAllocations)
+ {
+ // fetch the size first
+ // then parse the data gradually
+ uint256 size = _recipientsSize();
+ accounts = new address[](size);
+ percentAllocations = new uint32[](size);
+
+ uint256 i = 0;
+ for (; i < size;) {
+ uint256 recipient = _getRecipient(i);
+ accounts[i] = address(uint160(recipient));
+ percentAllocations[i] = uint32(recipient >> ADDRESS_BITS);
+ unchecked {
+ i++;
+ }
+ }
+ }
+```
+
+When `updateSplit()` is eventually called on `splitsMain` to turn on fees, the `validSplit()` check on that contract will revert because the percent allocations will no longer sum to `1e6`, and the update will not be possible.
+
+#### Proof of Concept
+
+The following test can be dropped into a file in `src/test` to demonstrate that passing 400 accounts will result in a `recipientSize` of `400 - 256 = 144`:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+import { Test } from "forge-std/Test.sol";
+import { console } from "forge-std/console.sol";
+import { ImmutableSplitControllerFactory } from "src/controllers/ImmutableSplitControllerFactory.sol";
+import { ImmutableSplitController } from "src/controllers/ImmutableSplitController.sol";
+
+interface ISplitsMain {
+ function createSplit(address[] calldata accounts, uint32[] calldata percentAllocations, uint32 distributorFee, address controller) external returns (address);
+}
+
+contract ZachTest is Test {
+ function testZach_RecipientSizeCappedAt256Accounts() public {
+ vm.createSelectFork("https://mainnet.infura.io/v3/fb419f740b7e401bad5bec77d0d285a5");
+
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](400);
+ uint32[] memory bigPercentAllocations = new uint32[](400);
+
+ for (uint i = 0; i < 400; i++) {
+ bigAccounts[i] = address(uint160(i));
+ bigPercentAllocations[i] = 2500;
+ }
+
+ // confirmation that 0xSplits will allow creating a split with this many accounts
+ // dummy acct passed as controller, but doesn't matter for these purposes
+ address split = ISplitsMain(0x2ed6c4B5dA6378c7897AC67Ba9e43102Feb694EE).createSplit(bigAccounts, bigPercentAllocations, 0, address(8888));
+
+ ImmutableSplitController controller = factory.createController(split, owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+
+ // added a public function to controller to read recipient size directly
+ uint savedRecipientSize = controller.ZachTest__recipientSize();
+ assert(savedRecipientSize < 400);
+ console.log(savedRecipientSize); // 144
+ }
+}
+```
+
+#### Recommendation
+
+When packing the data in `_packSplitControllerData()`, check `recipientsSize` before downcasting to a uint8:
+
+```diff
+function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
++ if (recipientsSize > 255) revert InvalidSplit__TooManyAccounts(recipientsSize);
+ ...
+}
+```
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[M-03] In a mass slashing event, node operators are incentivized to get slashed
+
+When the `OptimisticWithdrawalRecipient` receives funds from the beacon chain, it uses the following rule to determine the allocation:
+
+> If the amount of funds to be distributed is greater than or equal to 16 ether, it is assumed that it is a withdrawal (to be returned to the principal, with a cap on principal withdrawals of the total amount they deposited).
+
+> Otherwise, it is assumed that the funds are rewards.
+
+This value being as low as 16 ether protects against any predictable attack the node operator could perform. For example, due to the effect of hysteresis in updating effective balances, it does not seem to be possible for node operators to predictably bleed a withdrawal down to be below 16 ether (even if they timed a slashing perfectly).
+
+However, in the event of a mass slashing event, slashing punishments can be much more severe than they otherwise would be. To calculate the size of a slash, we:
+
+* take the total percentage of validator stake slashed in the 18 days preceding and following a user's slash
+* multiply this percentage by 3 (capped at 100%)
+* the full slashing penalty for a given validator equals 1/32 of their stake, plus the resulting percentage above applied to the remaining 31/32 of their stake
+
+In order for such penalties to bring the withdrawal balance below 16 ether (assuming a full 32 ether to start), we would need the percentage taken to be greater than `15 / 31 = 48.3%`, which implies that `48.3 / 3 = 16.1%` of validators would need to be slashed.
+
+Because the measurement is taken from the 18 days before and after the incident, node operators would have the opportunity to see a mass slashing event unfold, and later decide that they would like to be slashed along with it.
+
+In the event that they observed that greater than 16.1% of validators were slashed, Obol node operators would be able to get themselves slashed, be exited with a withdrawal of less than 16 ether, and claim that withdrawal as rewards, effectively stealing from the principal recipient.
+
+#### Recommendations
+
+Find a solution that provides a higher level of guarantee that the funds withdrawn are actually rewards, and not a withdrawal.
+
+#### Review
+
+Acknowledged. We believe this is a black swan event. It would require a major ETH client to be compromised, and would be a betrayal of trust, so likely not EV+ for doxxed operators. Users of this contract with unknown operators should be wary of such a risk.
+
+### \[L-01] Obol fees will be applied retroactively to all non-distributed funds in the Splitter
+
+When Obol decides to turn on fees, a call will be made to `ImmutableSplitController::updateSplit()`, which will take the predefined split parameters (the original user specified split with Obol's fees added in) and call `updateSplit()` to implement the change.
+
+```solidity
+function updateSplit() external payable {
+ if (msg.sender != owner()) revert Unauthorized();
+
+ (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+ ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```
+
+If we look at the code on `SplitsMain`, we can see that this `updateSplit()` function is applied retroactively to all funds that are already in the split, because it updates the parameters without performing a distribution first:
+
+```solidity
+function updateSplit(
+ address split,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+)
+ external
+ override
+ onlySplitController(split)
+ validSplit(accounts, percentAllocations, distributorFee)
+{
+ _updateSplit(split, accounts, percentAllocations, distributorFee);
+}
+```
+
+This means that any funds that have been sent to the split but have not yet been distributed will be subject to the Obol fee. Since these splitters will be accumulating all execution layer fees, it is possible that some of them may have received large MEV bribes, in which case this after-the-fact fee could be quite expensive.
+
+#### Recommendation
+
+The most strict solution would be for the `ImmutableSplitController` to store both the old split parameters and the new parameters. The old parameters could first be used to call `distributeETH()` on the split, and then `updateSplit()` could be called with the new parameters.
+
+If storing both sets of values seems too complex, the alternative would be to require that `split.balance <= 1` to update the split. Then the Obol team could simply store the old parameters off chain to call `distributeETH()` on each split to "unlock" it to update the fees.
+
+(Note that for the second solution, the ETH balance should be less than or equal to 1, not 0, because 0xSplits stores empty balances as `1` for gas savings.)
+
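+To illustrate the second alternative, the guard might look something like the sketch below. It is based on the `updateSplit()` excerpt shown above; the custom error name is hypothetical, and the fix actually merged in PR 86 may differ in detail:
+
+```solidity
+function updateSplit() external payable {
+    if (msg.sender != owner()) revert Unauthorized();
+
+    // Require the split to be effectively empty before changing the fee configuration.
+    // 0xSplits stores drained ETH balances as 1 wei for gas savings, hence the <= 1 check.
+    if (split.balance > 1) revert SplitBalanceNotDistributed(); // hypothetical error
+
+    (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+    ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```
+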
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[L-02] If OWR is used with rebase tokens and there's a negative rebase, principal can be lost
+
+The `OptimisticWithdrawalRecipient` is deployed with a specific token immutably set on the clone. It is presumed that that token will usually be ETH, but it can also be an ERC20 to account for future integrations with tokenized versions of ETH.
+
+In the event that one of these integrations used a rebasing version of ETH (like `stETH`), the architecture would need to be set up as follows:
+
+`OptimisticWithdrawalRecipient => rewards to something like LidoSplit.sol => Split Wallet`
+
+In this case, the OWR would need to be able to handle rebasing tokens.
+
+In the event that rebasing tokens are used, there is the risk that slashing or inactivity leads to a period with a negative rebase. In this case, the following chain of events could happen:
+
+* `distribute(PULL)` is called, setting `fundsPendingWithdrawal == balance`
+* rebasing causes the balance to decrease slightly
+* `distribute(PULL)` is called again, so when `fundsToBeDistributed = balance - fundsPendingWithdrawal` is calculated in an unchecked block, it ends up being near `type(uint256).max`
+* since this is more than `16 ether`, the first `amountOfPrincipalStake - _claimedPrincipalFunds` will be allocated to the principal recipient, and the rest to the reward recipient
+* we check that `endingDistributedFunds <= type(uint128).max`, but unfortunately this check misses the issue, because only `fundsToBeDistributed` underflows, not `endingDistributedFunds`
+* `_claimedPrincipalFunds` is set to `amountOfPrincipalStake`, so all future claims will go to the reward recipient
+* the `pullBalances` for both recipients will be set higher than the balance of the contract, and so will be unusable
+
+In this situation, the only way for the principal to get their funds back would be for the full `amountOfPrincipalStake` to hit the contract at once, and for them to call `withdraw()` before anyone called `distribute(PUSH)`. If anyone was to be able to call `distribute(PUSH)` before them, all principal would be sent to the reward recipient instead.
+
+#### Recommendation
+
+Similar to #74, I would recommend removing the ability for the `OptimisticWithdrawalRecipient` to accept non-ETH tokens.
+
+Otherwise, I would recommend two changes for redundant safety:
+
+1. Do not allow the OWR to be used with rebasing tokens.
+2. Move the `_fundsToBeDistributed = _endingDistributedFunds - _startingDistributedFunds;` out of the unchecked block. The case where `_endingDistributedFunds` underflows is already handled by a later check, so this one change should be sufficient to prevent any risk of this issue.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[L-03] LidoSplit can receive ETH, which will be locked in contract
+
+Each new `LidoSplit` is deployed as a clone, which comes with a `receive()` function for receiving ETH.
+
+However, the only function on `LidoSplit` is `distribute()`, which converts `stETH` to `wstETH` and transfers it to the `splitWallet`.
+
+While this contract should only be used for Lido to pay out rewards (which will come in `stETH`), it seems possible that users may accidentally use the same contract to receive other validator rewards (in ETH), or that Lido governance may introduce ETH payments in the future, which would cause the funds to be locked.
+
+#### Proof of Concept
+
+The following test can be dropped into `LidoSplit.t.sol` to confirm that the clones can currently receive ETH:
+
+```solidity
+function testZach_CanReceiveEth() public {
+ uint before = address(lidoSplit).balance;
+ payable(address(lidoSplit)).transfer(1 ether);
+ assertEq(address(lidoSplit).balance, before + 1 ether);
+}
+```
+
+#### Recommendation
+
+Introduce an additional function to `LidoSplit.sol` which wraps ETH into stETH before calling `distribute()`, in order to rescue any ETH accidentally sent to the contract.
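+
+For illustration only, a version of that recommendation might look like the sketch below. The contract and function names, the constructor wiring of the stETH address, and the `distribute()` signature are assumptions; the fix that was actually merged (see the review below) added a `rescueFunds()` function instead:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+// Lido's stETH exposes submit(address referral), which stakes ETH and mints stETH.
+interface IstETH {
+    function submit(address referral) external payable returns (uint256);
+}
+
+/// Illustrative sketch only; not the audited fix.
+abstract contract LidoSplitEthWrapSketch {
+    IstETH public immutable stEth;
+
+    constructor(IstETH _stEth) {
+        stEth = _stEth;
+    }
+
+    /// Existing LidoSplit flow: stETH -> wstETH -> splitWallet (signature assumed).
+    function distribute() public virtual returns (uint256);
+
+    /// Wrap any stray ETH into stETH, then run the normal distribution.
+    function wrapAndDistribute() external returns (uint256) {
+        uint256 bal = address(this).balance;
+        if (bal > 0) {
+            stEth.submit{value: bal}(address(0)); // stake ETH, minting stETH to this contract
+        }
+        return distribute();
+    }
+}
+```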
+
+#### Review
+
+Fixed in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87/files) by adding a `rescueFunds()` function that can send ETH or any ERC20 (except `stETH` or `wstETH`) to the `splitWallet`.
+
+### \[L-04] Upgrade to latest version of Solady to fix LibClone bug
+
+In the recent [Solady audit](https://github.com/Vectorized/solady/blob/main/audits/cantina-solady-report.pdf), an issue was found that affects LibClone.
+
+In short, LibClone assumes that the length of the immutable arguments on the clone will fit in 2 bytes. If it's larger, it overlaps other op codes and can lead to strange behaviors, including causing the deployment to fail or causing the deployment to succeed with no resulting bytecode.
+
+Because the `ImmutableSplitControllerFactory` allows the user to input arrays of any length that will be encoded as immutable arguments on the Clone, we can manipulate the length to accomplish these goals.
+
+Fortunately, failed deployments or empty bytecode (which causes a revert when `init()` is called) are not problems in this case, as the transactions will fail, and it can only happen with unrealistically long arrays that would only be used by malicious users.
+
+However, it is difficult to be sure how else this risk might be exploited by using the overflow to jump to later op codes, and it is recommended to update to a newer version of Solady where the issue has been resolved.
+
+#### Proof of Concept
+
+If we comment out the `init()` call in the `createController()` call, we can see that the following test "successfully" deploys the controller, but the result is that there is no bytecode:
+
+```solidity
+function testZach__CreateControllerSoladyBug() public {
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](28672);
+ uint32[] memory bigPercentAllocations = new uint32[](28672);
+
+ for (uint i = 0; i < 28672; i++) {
+ bigAccounts[i] = address(uint160(i));
+ if (i < 32) bigPercentAllocations[i] = 820;
+ else bigPercentAllocations[i] = 34;
+ }
+
+ ImmutableSplitController controller = factory.createController(address(8888), owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+ assert(address(controller) != address(0));
+ assert(address(controller).code.length == 0);
+}
+```
+
+#### Recommendation
+
+Delete Solady and clone it from the most recent commit, or any commit after the fixes from [PR #548](https://github.com/Vectorized/solady/pull/548/files#diff-27a3ba4730de4b778ecba4697ab7dfb9b4f30f9e3666d1e5665b194fe6c9ae45) were merged.
+
+#### Review
+
+Solady has been updated to v.0.0.123 in [PR 88](https://github.com/ObolNetwork/obol-manager-contracts/pull/88).
+
+### \[G-01] stETH and wstETH addresses can be saved on implementation to save gas
+
+The `LidoSplitFactory` contract holds two immutable values for the addresses of the `stETH` and `wstETH` tokens.
+
+When new clones are deployed, these values are encoded as immutable args. This adds the values to the contract code of the clone, so that each time a call is made, they are passed as calldata along to the implementation, which reads the values from the calldata for use.
+
+Since these values will be consistent across all clones on the same chain, it would be more gas efficient to store them in the implementation directly, which can be done with `immutable` storage values, set in the constructor.
+
+This would save 40 bytes of calldata on each call to the clone, which leads to a savings of approximately 640 gas on each call.
+
+#### Recommendation
+
+1. Add the following to `LidoSplit.sol`:
+
+```solidity
+address immutable public stETH;
+address immutable public wstETH;
+```
+
+2. Add a constructor to `LidoSplit.sol` which sets these immutable values. Solidity treats immutable values as constants and stores them directly in the contract bytecode, so they will be accessible from the clones.
+3. Remove `stETH` and `wstETH` from `LidoSplitFactory.sol` as storage values, constructor arguments, and arguments to `clone()`.
+4. Adjust the `distribute()` function in `LidoSplit.sol` to read the storage values for these two addresses, and remove the helper functions to read the clone's immutable arguments for these two values.
+
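+For reference, a condensed sketch of steps 1 and 2 is shown below. The contract and parameter names are illustrative only; the actual change is in PR 87:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+/// Sketch: token addresses become immutables on the implementation, so every
+/// clone reads them from the implementation's bytecode instead of calldata.
+contract LidoSplitSketch {
+    address public immutable stETH;
+    address public immutable wstETH;
+
+    constructor(address _stETH, address _wstETH) {
+        stETH = _stETH;
+        wstETH = _wstETH;
+    }
+}
+```
+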
+#### Review
+
+Fixed as recommended in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87).
+
+### \[G-02] OWR can be simplified and save gas by not tracking distributedFunds
+
+Currently, the `OptimisticWithdrawalRecipient` contract tracks four variables:
+
+* distributedFunds: total amount of the token distributed via push or pull
+* fundsPendingWithdrawal: total balance distributed via pull that haven't been claimed yet
+* claimedPrincipalFunds: total amount of funds claimed by the principal recipient
+* pullBalances: individual pull balances that haven't been claimed yet
+
+When `_distributeFunds()` is called, we perform the following math (simplified to only include relevant updates):
+
+```solidity
+endingDistributedFunds = distributedFunds - fundsPendingWithdrawal + currentBalance;
+fundsToBeDistributed = endingDistributedFunds - distributedFunds;
+distributedFunds = endingDistributedFunds;
+```
+
+As we can see, `distributedFunds` is added to the `endingDistributedFunds` variable and then removed when calculating `fundsToBeDistributed`, having no impact on the resulting `fundsToBeDistributed` value.
+
+The `distributedFunds` variable is not read or used anywhere else on the contract.
+
+#### Recommendation
+
+We can simplify the math and save substantial gas (a storage write plus additional operations) by not tracking this value at all.
+
+This would allow us to calculate `fundsToBeDistributed` directly, as follows:
+
+```solidity
+fundsToBeDistributed = currentBalance - fundsPendingWithdrawal;
+```
+
+#### Review
+
+Fixed as recommended in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85).
+
+### \[I-01] Strong trust assumptions between validators and node operators
+
+It is assumed that validators and node operators will always act in the best interest of the group, rather than in their selfish best interest.
+
+It is important to make clear to users that there are strong trust assumptions between the various parties involved in the DVT.
+
+Here are a select few examples of attacks that a malicious set of node operators could perform:
+
+1. Since there is currently no mechanism for withdrawals besides the consensus of the node operators, a minority of them sufficient to withhold consensus could blackmail the principal for a payment of up to 16 ether in order to allow them to withdraw. Otherwise, they could turn off their nodes and force the principal to bleed down to a final withdrawn balance of just over 16 ether.
+2. Node operators are all able to propose blocks within the P2P network, which are then propagated out to the rest of the network. Node software is accustomed to signing for blocks built by block builders based on metadata including the quantity of fees and the address they'll be sent to. This is enforced by social consensus, with block builders not wanting to harm validators in order to have their blocks accepted in the future. However, node operators in a DVT are not concerned with the social consensus of the network, and could therefore build blocks that include large MEV payments to their personal address (instead of the DVT's 0xSplit), add fictitious metadata to the block header, have their fellow node operators accept the block, and take the MEV for themselves.
+3. While the withdrawal address is immutably set on the beacon chain to the OWR, the fee address is added by the nodes to each block. Any majority of node operators sufficient to reach consensus could create a new 0xSplit with only themselves on it, and use that for all execution layer fees. The principal (and other node operators) would not be able to stop them or withdraw their principal, and would be stuck with staked funds paying fees to the malicious node operators.
+
+Note that there are likely many other possible attacks that malicious node operators could perform. This report is intended to demonstrate some examples of the trust level that is needed between validators and node operators, and to emphasize the importance of making these assumptions clear to users.
+
+#### Review
+
+Acknowledged. We believe EIP 7002 will reduce this trust assumption as it would enable the validator exit via the execution layer withdrawal key.
+
+### \[I-02] Provide node operator checklist to validate setup
+
+There are a number of ways that the user setting up the DVT could plant backdoors to harm the other users involved in the DVT.
+
+Each of these risks is possible to check before signing off on the setup, but some are rather hidden, so it would be useful for the protocol to provide a list of checks that node operators should do before signing off on the setup parameters (or, even better, provide these checks for them through the front end).
+
+1. Confirm that `SplitsMain.getHash(split)` matches the hash of the parameters that the user is expecting to be used.
+2. Confirm that the controller clone delegates to the correct implementation. If not, it could be pointed to delegate to `SplitMain` and then called to `transferControl()` to a user's own address, allowing them to update the split arbitrarily.
+3. `OptimisticWithdrawalRecipient.getTranches()` should be called to check that `amountOfPrincipalStake` is equal to the amount that they will actually be providing.
+4. The controller's `owner` and future split including Obol fees should be provided to the user. They should be able to check that `ImmutableSplitControllerFactory.predictSplitControllerAddress()`, with those parameters inputted, results in the controller that is actually listed on `SplitsMain.getController(split)`.
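+
+As a rough illustration of checks 1 and 4, the two 0xSplits view functions referenced above can be queried directly on chain. The sketch below wraps them in a small helper contract; the helper itself and the reduced interface are assumptions made for illustration (only `getHash` and `getController` come from the checklist above), and checks 2 and 3 would additionally need the controller implementation address and the `getTranches()` output:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+// Reduced 0xSplits interface: only the two view functions used by these checks.
+interface ISplitMain {
+    function getHash(address split) external view returns (bytes32);
+    function getController(address split) external view returns (address);
+}
+
+/// Helper views an operator could call (e.g. from a Foundry script) before signing off.
+contract ClusterSetupChecks {
+    // Mainnet SplitMain address, as used in the proofs of concept earlier in this report.
+    address public constant SPLIT_MAIN = 0x2ed6c4B5dA6378c7897AC67Ba9e43102Feb694EE;
+
+    /// Check 1: the on-chain split hash matches the hash of the parameters the operator expects.
+    function splitMatchesExpectedParams(address split, bytes32 expectedHash) external view returns (bool) {
+        return ISplitMain(SPLIT_MAIN).getHash(split) == expectedHash;
+    }
+
+    /// Check 4 (partial): the split's controller is the address the group agreed on.
+    function controllerMatches(address split, address expectedController) external view returns (bool) {
+        return ISplitMain(SPLIT_MAIN).getController(split) == expectedController;
+    }
+}
+```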
+
+#### Review
+
+Acknowledged. We do some of these already (will add the remainder) automatically in the launchpad UI during the cluster confirmation phase by the node operator. We will also add it in markdown to the repo.
diff --git a/docs/versioned_docs/version-v0.17.1/sec/threat_model.md b/docs/versioned_docs/version-v0.17.1/sec/threat_model.md
new file mode 100644
index 0000000000..fbca3c7ce8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/sec/threat_model.md
@@ -0,0 +1,155 @@
+---
+sidebar_position: 6
+description: Threat model for a Distributed Validator
+---
+
+# Charon threat model
+
+This page outlines a threat model for Charon, in the context of it being a Distributed Validator middleware for Ethereum validator clients.
+
+## Actors
+
+- Node owner (NO)
+- Cluster node operators (CNO)
+- Rogue node operator (RNO)
+- Outside attacker (OA)
+
+## General observations
+
+This page describes some considerations the Obol core team made about the security of a distributed validator in the context of its deployment and interaction with outside actors.
+
+The goal of this threat model is to provide transparency, but it is by no means a comprehensive audit or complete security reference. It’s a sharing of the experiences and thoughts we gained during the last few years building distributed validator technologies.
+
+To the Beacon Chain, a distributed validator looks much the same as a regular validator and thus retains some of the same security considerations, but Charon's threat model differs from a validator client's threat model because of its general design.
+
+While a validator client owns and operates on a set of validator private keys, the design of Charon allows its node operators to rarely (if ever) see the complete validator private keys, relying instead on modern cryptography to generate partial private key shares.
+
+An Ethereum distributed validator employs advanced signature primitives such that no operator ever handles the full validator private key in any standard lifecycle step: the [BLS digital signature scheme](https://en.wikipedia.org/wiki/BLS_digital_signature) employed by the Ethereum network allows distributed validators to individually sign a blob of data and then aggregate the resulting signatures in a transparent manner, never requiring any of the participating parties to know the full private key to do so.
+
+If the number of available Charon nodes falls below a given threshold, the cluster is not able to continue with its duties.
+
+Given the collaborative nature of a Distributed Validator cluster, every operator must prioritize the liveness and well-being of the cluster. At the time of writing, Charon cannot independently reward or penalize operators within a cluster.
+
+This implies that Charon’s threat model can’t quite be equated to that of a single validator client, since they work on a different - albeit similar - set of security concepts.
+
+## Identity private key
+
+A distributed validator cluster is made up of a number of nodes, often run by a number of independent operators. For each DV cluster there is a set of Ethereum validator private keys on whose behalf the operators want to validate.
+
+Alongside those, each node (henceforth 'operator') holds a SECP256K1 identity private key, whose public key is shared with peers as an ENR (Ethereum Node Record), identifying their node to the other cluster operators' nodes.
+
+Exfiltration of this private key could allow an outside attacker to impersonate the node, possibly leading to intra-cluster peering issues, eclipse attack risks, and degraded validator performance.
+
+Charon client communication is handled via BFT consensus, which is able to tolerate a given number of misbehaving nodes up to a certain threshold: once this threshold is reached, the cluster is not able to continue with its lifecycle and loses liveness guarantees (the validator goes offline). If more than two-thirds of nodes in a cluster are malicious, a cluster also loses safety guarantees (enough bad actors could collude to come to consensus on something slashable).
+
+Identity private key theft and the subsequent execution of a rogue cluster node is equivalent in the context of BFT consensus to a misbehaving node, hence the cluster can survive and continue with its duties up to what’s specified by the cluster’s BFT protocol’s parameters.
+
+The likelihood of this happening is low: an OA with enough knowledge of the topology of the operators' network must steal `fault tolerance of the cluster + 1` identity private keys and run rogue Charon nodes to subvert the distributed validator's BFT consensus and push the validator offline.
+
+## Ethereum validator private key access
+
+A distributed validator cluster executes Ethereum validator duties by acting as a middleman between the beacon chain and a validator client.
+
+To do so, the cluster must have knowledge of the Ethereum validator’s private key.
+
+The design and implementation of Charon minimizes the chances of this by splitting the Ethereum validator private keys into parts, which are then assigned to each node operator.
+A [distributed key generation](https://en.wikipedia.org/wiki/Distributed_key_generation) (DKG) process is used in order to evenly and safely create the private key shares without any central party having access to the full private key.
+
+The cryptographic primitives employed in Charon allow a threshold of the node operators' private key shares to be reconstructed into the whole validator private key if needed.
+
+While the facilities to do this are present in the form of CLI commands, as stated before, Charon never reconstructs the key in normal operations, since the BLS digital signature scheme allows for signature aggregation.
+
+A distributed validator cluster can be started in two ways:
+
+1. An existing Ethereum validator private key is split by the private key holder, and distributed in a trusted manner among the operators.
+2. The operators participate in a distributed key generation (DKG) process, to create private key shares that collectively can be used to sign validation duties as an Ethereum distributed validator. The full private key for the cluster never exists in one place during or after the DKG.
+
+In case 1, one of the node operators, K, has direct access to the Ethereum validator key and is tasked with generating the other operators’ identity keys and key shares.
+
+In this case, the entirety of the sensitive material is only as secure as K’s environment: if K is compromised or malicious, the distributed validator could be slashed.
+
+Case 2 is different, because there’s no pre-existing Ethereum validator key in a single operator's hands: it will be generated using the FROST DKG algorithm.
+
+Assuming a successful DKG process, each operator will only ever handle its own key shares instead of the full Ethereum validator private key.
+
+A set of rogue operators composed of enough members to reconstruct the original Ethereum private keys might pose the risk of slashing for a distributed validator by colluding to produce slashable messages together.
+
+We deem this scenario’s likelihood to be low, as it would mean that node operators decided to willfully slash the very stake they are being rewarded for staking.
+
+Still, in the context of an outside attack, purposefully slashing a validator would mean stealing multiple operator key shares, which in turn means violating many cluster operator’s security almost at the same time. This scenario may occur if there is a 0-day vulnerability in a piece of software they all run or in case of node misconfiguration.
+
+## Rogue node operator
+
+Nodes are connected by means of either relay nodes, or directly to one another.
+
+Each node operator is at risk of being impeded by other nodes or by the relay operator in the execution of their duties.
+
+Nodes need to expose a set of TCP ports to be able to work, and the mere fact of doing that opens up the opportunity for rogue parties to execute DDoS attacks.
+
+Another attack surface for the cluster exists in rogue nodes purposefully filling the various inter-state databases with meaningless data, or more generally submitting bogus information to the other parties to slow down the processing or, in the case of a sybil attack, bring the cluster to a halt.
+
+The likelihood of this scenario is medium, because no active intrusion is required: a rogue node operator does not need to penetrate and compromise other nodes in order to disturb the cluster’s lifecycle.
+
+## Outside attackers interfering with a cluster
+
+There are two levels of sophistication in an OA:
+
+1. No knowledge of the topology of the cluster: the attacker doesn’t know where each cluster node is located, and so can’t force `fault tolerance + 1` nodes offline because it can’t find them.
+2. Knowledge of the topology of the network (or part of it): the OA can mount DDoS attacks or try breaking into nodes’ servers; at that point, the “rogue node operator” scenario applies.
+
+The likelihood of this scenario is low: an OA needs extensive capabilities and sufficient incentive to be able to carry out an attack of this size.
+
+An outside attacker could also find and use vulnerabilities in the underlying cryptosystems and cryptography libraries used by Charon and other Ethereum clients. Forging signatures that fool Charon’s cryptographic library or other dependencies may be feasible, but forging signatures or otherwise finding a vulnerability in either the SECP256K1+ECDSA or BLS12-381+BLS cryptosystems we deem to be a low likelihood risk.
+
+## Malicious beacon nodes
+
+A malicious beacon node (BN) could prevent the distributed validator from operating its validation duties, and could plausibly increase the likelihood of slashing by serving charon illegitimate information.
+
+If the number of nodes configured with the malicious BN reaches the Byzantine threshold of the Charon BFT consensus protocol, the validation process can halt; worse, if most of the nodes are Byzantine, the cluster may reach consensus on a set of data that isn’t valid.
+
+We deem the likelihood of this scenario to be medium depending on the trust model associated with the BNs deployment (cloud, self-hosted, SaaS product): node operators should always host or at least trust their own beacon nodes.
+
+## Malicious charon relays
+
+A Charon relay is used as a communication bridge between nodes that aren’t directly exposed on the Internet. It also acts as the peer discovery mechanism for a cluster.
+
+Once a peer’s IP address has been discovered via the relay, a direct connection can be attempted. Nodes can either communicate by exchanging data through a relay, or by using the relay as a means to establish a direct TCP connection to one another.
+
+A malicious relay owned by an OA could lead to:
+
+- Network topology discovery, facilitating the “outside attackers interfering with a cluster” scenario
+- Impeding node communication, potentially impacting the BFT consensus protocol liveness (not security) and distributed validator duties
+- DKG process disruption, leading to frustration and potential abandonment by node operators, which could push them towards a standard Ethereum validator setup with weaker security overall
+
+We note that BFT consensus liveness disruption can only happen if the number of nodes relying on the malicious relay for communication reaches the Byzantine node count defined by the consensus parameters.
+
+This risk can be mitigated by configuring nodes with multiple relay URLs from only [trusted entities](../int/quickstart/advanced/self-relay.md).
+
+The likelihood of this scenario is medium: Charon nodes are configured with a default set of relay nodes, so if an OA were to compromise those, it would lead to many cluster topologies getting discovered and potentially attacked and disrupted.
+
+## Compromised runtime files
+
+Charon operates with two runtime files:
+
+- A lock file, used to address the operators’ nodes and to define the Ethereum validator public keys and the public key shares associated with them
+- A cluster definition file, used to define the operators’ addresses and identities during the DKG process
+
+The lock file is signed and validated by all the nodes participating in the cluster: assuming good security practices on the node operator side, and no bugs in Charon or its dependencies’ implementations, this scenario is unlikely.
+
+If one or more node operators follow less than ideal security practices, an OA could modify the Charon CLI invocation to include the `--no-verify` flag, which disables lock file signature and hash verification (usually intended only for development purposes).
+
+By doing that, the OA can edit the lock file as it sees fit, leading to the “rogue node operator” scenario. An OA or rogue node operator might also manage to social engineer other operators into running a malicious lock file with verification disabled.
+
+The likelihood of this scenario is low: an OA would need to compromise every node operator through social engineering to both use a different set of files, and to run its cluster with `--no-verify`.
+
+## Conclusions
+
+Distributed Validator Technology (DVT) helps maintain a high-assurance environment for Ethereum validators by leveraging modern cryptography to ensure no single point of failure is easily found in the system.
+
+As with any computing system, security considerations are to be expected in order to keep the environment safe.
+
+From the point of view of an Ethereum validator entity, running their services with a DV client can help greatly with availability, minimizing slashing risks, and maximizing participation in the network.
+
+On the other hand, one must take into consideration the risks involved with dishonest cluster operators, as well as rogue third-party beacon nodes or relay providers.
+
+In the end, we believe the benefits of DVT greatly outweigh the potential threats described in this overview.
diff --git a/docs/versioned_docs/version-v0.17.1/testnet.md b/docs/versioned_docs/version-v0.17.1/testnet.md
new file mode 100644
index 0000000000..f430d00a0b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.17.1/testnet.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 6
+description: Obol testnets roadmap
+---
+
+# Testnets
+
+Over the coming quarters, Obol Labs has coordinated, and will continue to coordinate and host, a number of progressively larger testnets to help harden the Charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features that are to be completed by each testnet, and their target start date and duration.
+
+- [x] [Dev Net 1](#devnet-1)
+- [x] [Dev Net 2](#devnet-2)
+- [x] [Athena Public Testnet 1](#athena-public-testnet-1)
+- [x] [Bia Public Testnet 2](#bia-public-testnet-2)
+
+## Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using `docker compose` to spin up 4 Charon clients and 4 different validator clients on a single machine, with a remote consensus client. The keys were created locally in Charon and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+- A single user completes a first tutorial alone, using `docker compose` to spin up 4 Charon clients on a single machine, with a remote consensus client. The keys are created locally in Charon and activated with the existing launchpad.
+- Prove that the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works.
+- Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to Charon running across a network.
+
+## Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows **together** for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+
+This devnet was the first time `charon dkg` was tested with users. A core focus of this devnet was to collect network performance data.
+
+This was also the first time Charon was run in variable, non-virtual networks (i.e. the real internet).
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+- Groups of 4 testers complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients, each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+- Operators avoid exposing Charon to the public internet on a static IP address through the use of Obol-hosted relay nodes.
+- Users test `charon dkg`. The launchpad is not used, and this dkg is triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+- Effective collection of network performance data, to enable gathering even higher signal performance data at scale during public testnets.
+- Block proposals are in place.
+
+## Athena Public Testnet 1
+
+With tutorials for solo and group flows developed and refined, the goal for public testnet 1 was to get distributed validators into the hands of the wider Obol Community for the first time. The core focus of this testnet was the onboarding experience.
+
+The core output from this testnet was a significant number of public clusters running and public feedback collected.
+
+This was an unincentivized testnet and formed the basis for us to figure out a Sybil resistance mechanism.
+
+**Participants:** Obol Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Completed date:** October 2022
+
+**Duration:** 2 weeks cluster setup, 8 weeks operation
+
+**Goals:**
+
+- Get distributed validators into the hands of the Obol Early Community for the first time.
+- Create the first public onboarding experience and gather feedback. This is the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) as possible.
+- Make deploying Ethereum validator nodes accessible using the CLI.
+- Generate a backlog of bugs, feature requests, platform requests and integration requests.
+
+## Bia Public Testnet 2
+
+This second public testnet intends to take the lessons learned from Athena and scale the network by engaging both the wider at-home validator community and professional operators. This is the first time users set up DVs using the DV Launchpad.
+
+This testnet is also important for learning the conditions Charon will be subjected to in production. A core output of this testnet is a large number of autonomous public DV clusters running and building up the Obol community with technical ambassadors.
+
+**Participants:** Obol Community, Ethereum staking community
+
+**State:** MVP
+
+**Network:** Görli
+
+**Target Completed date:** March 2023
+
+**Duration:** 2 weeks cluster setup, 4-8 weeks operation
+
+**Goals:**
+
+- Engage the wider Solo and Professional Ethereum Staking Community.
+- Get integration feedback.
+- Build confidence in Charon after running DVs on an Ethereum testnet.
+- Learn about the conditions Charon will be subjected to in production.
+- Distributed Validator returns are competitive versus single validator clients.
+- Make deploying Ethereum validator nodes accessible using the DV Launchpad.
+- Build comprehensive guides for various profiles to spin up DVs with minimal supervision from the core team.
diff --git a/docs/versioned_docs/version-v0.18.0/README.md b/docs/versioned_docs/version-v0.18.0/README.md
new file mode 100644
index 0000000000..09754d3390
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.18.0
+
diff --git a/docs/versioned_docs/version-v0.18.0/cg/README.md b/docs/versioned_docs/version-v0.18.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.18.0/cg/bug-report.md b/docs/versioned_docs/version-v0.18.0/cg/bug-report.md
new file mode 100644
index 0000000000..9a10b3b553
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid double issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualize the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behavior
+
+
+## Current Behavior
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+   2. Nicholas Nickleby.
+ 3. David Copperfield.
+2. J.R.R. Tolkien books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.18.0/cg/feedback.md b/docs/versioned_docs/version-v0.18.0/cg/feedback.md
new file mode 100644
index 0000000000..76042e28aa
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/cg/feedback.md
@@ -0,0 +1,5 @@
+# Feedback
+
+If you have followed our quickstart guides, whether or not you succeeded in running a distributed validator, we would like to hear your feedback on the process and where you encountered difficulties.
+- Please let us know by joining and posting on our [Discord](https://discord.gg/n6ebKsX46w).
+- Also, feel free to add issues to our [GitHub repos](https://github.com/ObolNetwork).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.18.0/charon/README.md b/docs/versioned_docs/version-v0.18.0/charon/README.md
new file mode 100644
index 0000000000..44b46f1797
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/charon/README.md
@@ -0,0 +1,2 @@
+# charon
+
diff --git a/docs/versioned_docs/version-v0.18.0/charon/charon-cli-reference.md b/docs/versioned_docs/version-v0.18.0/charon/charon-cli-reference.md
new file mode 100644
index 0000000000..a6850c3168
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/charon/charon-cli-reference.md
@@ -0,0 +1,382 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+sidebar_position: 5
+---
+
+# CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.18.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.18.0). Find the latest release on [our GitHub](https://github.com/ObolNetwork/charon/releases).
+
+The following are the top-level commands available to use.
+
+```markdown
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ alpha Alpha subcommands provide early access to in-development features
+ combine Combines the private key shares of a distributed validator cluster into a set of standard validator private keys.
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Prints a new ENR for this node
+ help Help about any command
+ relay Start a libp2p relay server
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+## The `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+```
+
+### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+```
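+
+As a rough illustration only (the image tag and mounted path are assumptions, not prescriptions), running this subcommand through the published Docker image could look like the sketch below, writing the new `charon-enr-private-key` into `./.charon` on the host:
+
+```shell
+# Illustrative sketch: create a new ENR private key inside ./.charon
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
+```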
+
+### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster-lock.json` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --cluster-dir string The target folder to create the cluster in. (default "./")
+ --definition-file string Optional path to a cluster definition file or an HTTP URL. This overrides all other configuration flags.
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for cluster
+ --insecure-keys Generates insecure keystore files. This should never be used. It is not supported on mainnet.
+ --keymanager-addresses strings Comma separated list of keymanager URLs to import validator key shares to. Note that multiple addresses are required, one for each node in the cluster, with node0's keyshares being imported to the first address, node1's keyshares to the second, and so on.
+ --keymanager-auth-tokens strings Authentication bearer tokens to interact with the keymanager URLs. Don't include the "Bearer" symbol, only include the api-token.
+ --name string The cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky.
+ --nodes int The number of charon nodes in the cluster. Minimum is 3.
+ --num-validators int The number of distributed validators needed in the cluster.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
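+
+For orientation, a hypothetical invocation for a small local test cluster might combine the flags above as follows (the name, output directory and zero address are placeholders for illustration only):
+
+```shell
+# Hypothetical example: 4 charon nodes managing 1 goerli validator, written to ./local-cluster
+charon create cluster \
+  --name="local-test" \
+  --nodes=4 \
+  --num-validators=1 \
+  --network=goerli \
+  --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
+  --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
+  --cluster-dir="./local-cluster"
+```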
+
+### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file that is used as the input to the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, frost (default "default")
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky. (default "mainnet")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings [REQUIRED] Comma-separated list of each operator's Charon ENR address.
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
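+
+As a sketch only (the ENRs and addresses below are truncated placeholders, not real values), a coordinator preparing the definition for a 4-operator cluster might run something like:
+
+```shell
+# Hypothetical example: definition for 4 operators and 1 goerli validator
+charon create dkg \
+  --name="quickstart-cluster" \
+  --num-validators=1 \
+  --network=goerli \
+  --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
+  --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
+  --operator-enrs="enr:-JG4...aa,enr:-JG4...bb,enr:-JG4...cc,enr:-JG4...dd"
+```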
+
+## The `dkg` subcommand
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit data for each new distributed validator. The command outputs the `cluster-lock.json` file and the key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file or an HTTP URL. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --keymanager-address string The keymanager URL to import validator keyshares.
+ --keymanager-auth-token string Authentication bearer token to interact with keymanager API. Don't include the "Bearer" symbol, only include the api-token.
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --shutdown-delay duration Graceful shutdown delay. (default 1s)
+```
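+
+Assuming the definition file sits in the default `.charon` directory (an assumption for this sketch, not a requirement), each operator would then run something like the following at the agreed time:
+
+```shell
+# Each operator runs this concurrently; DKG artifacts are written to --data-dir
+charon dkg \
+  --definition-file=".charon/cluster-definition.json" \
+  --data-dir=".charon"
+```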
+
+## The `run` subcommand
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster-lock.json` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs.
+ --builder-api Enables the builder api. Will only produce builder blocks. Builder API must also be enabled on the validator client. Beacon node must be connected to a builder-relay to access the builder network.
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing.
+ --jaeger-service string Service name used for jaeger tracing. (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --manifest-file string The path to the cluster manifest file. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-manifest.pb")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof). (default "127.0.0.1:3620")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --private-key-file-lock Enables private key locking to prevent multiple instances using the same key.
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-beacon-mock-fuzz Configures simnet beaconmock to return fuzzed responses.
+ --simnet-slot-duration duration Configures slot duration in simnet beacon mock. (default 1s)
+ --simnet-validator-keys-dir string The directory containing the simnet validator key shares. (default ".charon/validator_keys")
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --synthetic-block-proposals Enables additional synthetic block proposal duties. Used for testing of rare duties.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API. (default "127.0.0.1:3600")
+```
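+
+As a minimal sketch (the beacon node URL, listen address and paths are assumptions for illustration), a node operator might start the middleware like this:
+
+```shell
+# Point charon at a trusted beacon node and the cluster lock file
+charon run \
+  --beacon-node-endpoints="http://localhost:5052" \
+  --lock-file=".charon/cluster-lock.json" \
+  --validator-api-address="0.0.0.0:3600"
+```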
+
+## The `combine` subcommand
+
+### Combine distributed validator keyshares into a single Validator key
+
+The `combine` command combines many validator keyshares into a single Ethereum validator key.
+
+To run this command, one needs all the node operators’ `.charon` directories, which need to be organized in the following way:
+
+```shell
+validators-to-be-combined/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+That is, each operator’s `.charon` directory must be placed in a parent directory and renamed (for example to `node0`, `node1`, and so on).
+
+Note that all validator keys are required for the successful execution of this command.
+
+If, for example, the lock file defines 2 validators, each `validator_keys` directory must contain exactly 4 files: a JSON and a TXT file for each validator.
+
+Those files must be named with an increasing index associated with the validator in the lock file, starting from 0.
+
+The chosen directory names don’t matter, as long as they are different from `.charon`.
+
+At the end of the process `combine` will create a new set of directories containing one validator key each, named after its public key:
+
+```shell
+validators-to-be-combined/
+├── 0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd # contains private key
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── 0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106 # contains private key
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+By default, the `combine` command will refuse to overwrite any private key that is already present in the destination directory.
+
+To force the process, use the `--force` flag.
+
+```markdown
+charon combine --help
+Combines the private key shares from a threshold of operators in a distributed validator cluster into a set of validator private keys that can be imported into a standard Ethereum validator client.
+
+Warning: running the resulting private keys in a validator alongside the original distributed validator cluster *will* result in slashing.
+
+Usage:
+ charon combine [flags]
+
+Flags:
+ --cluster-dir string Parent directory containing a number of .charon subdirectories from the required threshold of nodes in the cluster. (default ".charon/cluster")
+ --force Overwrites private keys with the same name if present.
+ -h, --help Help for combine
+ --no-verify Disables cluster definition and lock file verification.
+ --output-dir string Directory to output the combined private keys to. (default "./validator_keys")
+```
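+
+Given the directory layout shown earlier, a combine invocation could look like the following sketch (the directory names are illustrative):
+
+```shell
+# Reconstruct full validator keys from the node0..node3 shares into ./combined-keys
+charon combine \
+  --cluster-dir="./validators-to-be-combined" \
+  --output-dir="./combined-keys"
+```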
+
+## Host a relay
+
+Relays run a libp2p [circuit relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) server that allows charon clusters to perform peer discovery and lets charon clients behind NAT gateways be reached. If you want to self-host a relay for your cluster(s), the following command will start one.
+
+```markdown
+charon relay --help
+Starts a libp2p relay that charon nodes can use to bootstrap their p2p cluster
+
+Usage:
+ charon relay [flags]
+
+Flags:
+ --auto-p2pkey Automatically create a p2pkey (secp256k1 private key used for p2p authentication and ENR) if none found in data directory. (default true)
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for relay
+ --http-address string Listening address (ip and port) for the relay http server serving runtime ENR. (default "127.0.0.1:3640")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --monitoring-address string Listening address (ip and port) for the prometheus and pprof monitoring http server. (default "127.0.0.1:3620")
+ --p2p-advertise-private-addresses Enable advertising of libp2p auto-detected private addresses. This doesn't affect manually provided p2p-external-ip/hostname.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-max-connections int Libp2p maximum number of peers that can connect to this relay. (default 16384)
+ --p2p-max-reservations int Updates max circuit reservations per peer (each valid for 30min) (default 512)
+ --p2p-relay-loglevel string Libp2p circuit relay log level. E.g., debug, info, warn, error.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+```
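+
+If you do choose to self-host, a sketch of a relay invocation might look like the following (the hostname and ports are placeholders, not defaults to copy):
+
+```shell
+# Serve the relay ENR over HTTP on 3640 and accept libp2p TCP traffic on 3610
+charon relay \
+  --http-address="0.0.0.0:3640" \
+  --p2p-tcp-address="0.0.0.0:3610" \
+  --p2p-external-hostname="relay.example.org"
+```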
diff --git a/docs/versioned_docs/version-v0.18.0/charon/cluster-configuration.md b/docs/versioned_docs/version-v0.18.0/charon/cluster-configuration.md
new file mode 100644
index 0000000000..d05f53dc3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/charon/cluster-configuration.md
@@ -0,0 +1,161 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+sidebar_position: 3
+---
+
+# Cluster configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client or cluster.
+
+A charon cluster is configured in two steps:
+
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+In the case of a solo operator running a cluster, the [`charon create cluster`](./charon-cli-reference.md#create-a-full-cluster-locally) command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
+## Cluster Definition File
+
+The `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+### Using the CLI
+
+The [`charon create dkg`](./charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) command is used to create the `cluster-definition.json` file which is used as input to `charon dkg`.
+
+The schema of the `cluster-definition.json` is defined as:
+
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "creator": {
+ "address": "0x123..abfc", //ETH1 address of the creator
+ "config_signature": "0x123654...abcedf" // EIP712 Signature of config_hash using creator privkey
+ },
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "config_signature": "0x123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "0x123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.2.0", // Schema version
+ "timestamp": "2022-01-01T12:00:00+00:00", // Creation timestamp
+ "num_validators": 2, // Number of distributed validators to be created in cluster-lock.json
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "validators": [
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ },
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ }
+ ],
+ "dkg_algorithm": "foo_dkg_v1", // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "0xabcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "0xabcdef...abcedef" // Final hash of all fields
+}
+```
+
+### Using the DV Launchpad
+
+- A [`leader/creator`](../int/quickstart/group/index.md), that wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster"
+- The `leader/creator` uses the user interface to configure all of the important details about the cluster including:
+ - The `Withdrawal Address` for the created validators
+ - The `Fee Recipient Address` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like DKG algorithm to use) are serialized and merklized to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the `leader/creator` is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralized backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralization of the launchpad.)
+
+## Cluster Lock File
+
+The `cluster-lock.json` has the following schema:
+
+```json
+{
+  "cluster_definition": {...}, // Cluster definition json, identical schema to above,
+ "distributed_validators": [ // Length equal to cluster_definition.num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "abc...fed", "cfd...bfe"], // Length equal to cluster_definition.operators
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
+
+## Cluster Size and Resilience
+
+The cluster size (the number of nodes/operators in the cluster) determines the resilience of the cluster: its ability to remain operational under diverse failure scenarios.
+Larger clusters can tolerate more faulty nodes.
+However, a larger cluster implies higher operational costs and potential network latency, which may negatively affect performance.
+
+Optimal cluster size is therefore a trade-off between resilience (larger is better) and cost-efficiency and performance (smaller is better).
+
+Cluster resilience can be broadly classified into two categories:
+ - **[Byzantine Fault Tolerance (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault)** - the ability to tolerate nodes that are actively trying to disrupt the cluster.
+ - **[Crash Fault Tolerance (CFT)](https://en.wikipedia.org/wiki/Fault_tolerance)** - the ability to tolerate nodes that have crashed or are otherwise unavailable.
+
+Different cluster sizes tolerate different counts of byzantine vs crash nodes.
+In practice, hardware and software crash relatively frequently, while byzantine behaviour is relatively uncommon.
+However, Byzantine Fault Tolerance is crucial for trust minimised systems like distributed validators.
+Thus, cluster size can be chosen to optimise for either BFT or CFT.
+
+The table below lists different cluster sizes and their characteristics:
+ - `Cluster Size` - the number of nodes in the cluster.
+ - `Threshold` - the minimum number of nodes that must collaborate to reach consensus quorum and to create signatures.
+ - `BFT #` - the maximum number of byzantine nodes that can be tolerated.
+ - `CFT #` - the maximum number of crashed nodes that can be tolerated.
+
+| Cluster Size | Threshold | BFT # | CFT # | Note |
+|--------------|-----------|-------|-------|------------------------------------|
+| 1 | 1 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 2 | 2 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 3 | 2 | 0 | 1 | ⚠️ Warning: CFT but not BFT! |
+| 4 | 3 | 1 | 1 | ✅ CFT and BFT optimal for 1 faulty |
+| 5 | 4 | 1 | 1 | |
+| 6 | 4 | 1 | 2 | ✅ CFT optimal for 2 crashed |
+| 7 | 5 | 2 | 2 | ✅ BFT optimal for 2 byzantine |
+| 8 | 6 | 2 | 2 | |
+| 9 | 6 | 2 | 3 | ✅ CFT optimal for 3 crashed |
+| 10 | 7 | 3 | 3 | ✅ BFT optimal for 3 byzantine |
+| 11 | 8 | 3 | 3 | |
+| 12 | 8 | 3 | 4 | ✅ CFT optimal for 4 crashed |
+| 13 | 9 | 4 | 4 | ✅ BFT optimal for 4 byzantine |
+| 14 | 10 | 4 | 4 | |
+| 15 | 10 | 4 | 5 | ✅ CFT optimal for 5 crashed |
+| 16 | 11 | 5 | 5 | ✅ BFT optimal for 5 byzantine |
+| 17 | 12 | 5 | 5 | |
+| 18 | 12 | 5 | 6 | ✅ CFT optimal for 6 crashed |
+| 19 | 13 | 6 | 6 | ✅ BFT optimal for 6 byzantine |
+| 20 | 14 | 6 | 6 | |
+| 21 | 14 | 6 | 7 | ✅ CFT optimal for 7 crashed |
+| 22 | 15 | 7 | 7 | ✅ BFT optimal for 7 byzantine |
+
+The table above is determined by the QBFT consensus algorithm with the
+following formulas from [this](https://arxiv.org/pdf/1909.10194.pdf) paper:
+
+```
+n = cluster size
+
+Threshold: min number of honest nodes required to reach quorum given size n
+Quorum(n) = ceiling(2n/3)
+
+BFT #: max number of faulty (byzantine) nodes given size n
+f(n) = floor((n-1)/3)
+
+CFT #: max number of unavailable (crashed) nodes given size n
+crashed(n) = n - Quorum(n)
+```
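+
+As a quick sanity check of the table (a throwaway sketch, not part of charon), the same formulas can be evaluated with shell integer arithmetic, shown here for a 4-node cluster:
+
+```shell
+n=4
+quorum=$(( (2*n + 2) / 3 ))   # ceiling(2n/3) -> 3 (threshold)
+bft=$(( (n - 1) / 3 ))        # floor((n-1)/3) -> 1 byzantine node tolerated
+cft=$(( n - quorum ))         # n - Quorum(n) -> 1 crashed node tolerated
+echo "n=$n threshold=$quorum BFT=$bft CFT=$cft"
+```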
diff --git a/docs/versioned_docs/version-v0.18.0/charon/dkg.md b/docs/versioned_docs/version-v0.18.0/charon/dkg.md
new file mode 100644
index 0000000000..4c44582cc4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/charon/dkg.md
@@ -0,0 +1,74 @@
+---
+sidebar_position: 2
+description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key
+ Generation (DKG) Ceremony.
+---
+
+# Distributed Key Generation
+
+## Overview
+
+A [**distributed validator key**](../int/key-concepts.md#distributed-validator-key) is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Thanks to the BLS signature scheme used by Proof of Stake Ethereum, a distributed validator with no fault tolerance (i.e. one where all nodes need to be online to sign every message) could be built from key shares chosen by each operator independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together (4 randomly chosen points on a graph don't all necessarily sit on the same order three curve). To do this in a secure manner, with no one party being trusted to distribute the keys, requires what is known as a [**distributed key generation ceremony**](../int/key-concepts.md#distributed-validator-key-generation-ceremony).
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/charon/cluster-configuration/README.md).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+* An `Operator` is identified by their Ethereum address. They will sign a message with this address to authorize their charon client to take part in the DKG ceremony.
+* A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p/tree/master/p2p/security/noise). These keys need to be created (and backed up) by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This cluster definition specifies the intended cluster configuration before keys have been created in a distributed key generation ceremony. The `cluster-definition.json` file can be created with the help of the [Distributed Validator Launchpad](cluster-configuration.md#using-the-dv-launchpad) or via the [CLI](cluster-configuration.md#using-the-cli).
+
+## Carrying out the DKG ceremony
+
+Once all participants have signed the cluster definition, they can load the `cluster-definition` file into their charon client, and the client will attempt to complete the DKG.
+
+Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to the deployed relays to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which updates the IP address, increments the ENR's nonce, and re-signs it with the client's private key. If a charon client sees an ENR with a higher nonce, it updates the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required, charon does the work and outputs the following files to each machine and then exits.
+
+## Backing up the ceremony artifacts
+
+At the end of a DKG ceremony, each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+* **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+* **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+* **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to their key shares, it will be possible to use a key re-sharing protocol to swap the participant's old keys out of a distributed validator in favor of new keys, allowing the rest of a cluster to recover from a set of lost key shares. For now, however, without a backup the safest thing to do is to exit the validator.
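+
+As an illustration, after a ceremony for a single distributed validator the data directory might look roughly like this (an assumed layout; exact file names depend on the charon version and the number of validators):
+
+```shell
+.charon/
+├── charon-enr-private-key     # this node's identity key, created before the ceremony
+├── cluster-definition.json    # the ceremony configuration
+├── cluster-lock.json          # the cluster lock file described above
+├── deposit-data.json          # deposit data for activating the validator(s)
+└── validator_keys/
+    ├── keystore-0.json        # one key share of the distributed validator
+    └── keystore-0.txt         # password for the corresponding keystore
+```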
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+* Do the public key shares combine together to form the group public key?
+ * This can be checked on chain as it does not require a pairing operation
+ * This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+* Do the created BLS public keys attest to their `cluster_definition_hash`?
+ * This is to create a backwards link between newly created BLS public keys and the operators' eth1 addresses that took part in their creation.
+ * If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ * As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+* Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ * VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ * PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ * A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ * Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/charon/cluster-configuration/README.md).
diff --git a/docs/versioned_docs/version-v0.18.0/charon/intro.md b/docs/versioned_docs/version-v0.18.0/charon/intro.md
new file mode 100644
index 0000000000..7e4f170f88
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/charon/intro.md
@@ -0,0 +1,69 @@
+---
+sidebar_position: 1
+description: Charon - The Distributed Validator Client
+---
+
+# Introduction
+
+This section introduces and outlines the Charon _\[kharon]_ middleware, Obol's implementation of DVT. Please see the [key concepts](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/key-concepts/README.md) section as background and context.
+
+## What is Charon?
+
+Charon is a GoLang-based, HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator together. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+
+
+## Charon Architecture
+
+Charon is an Ethereum proof of stake distributed validator (DV) client. Like any validator client, its main purpose is to perform validation duties for the Beacon Chain, primarily attestations and block proposals. The beacon client handles a lot of the heavy lifting, leaving the validator client to focus on fetching duty data, signing that data, and submitting it back to the beacon client.
+
+Charon is designed as a generic event-driven workflow with different components coordinating to perform validation duties. All duties follow the same flow, the only difference being the signed data. The workflow can be divided into phases consisting of one or more components:
+
+
+
+### Determine **when** duties need to be performed
+
+The beacon chain is divided into [slots](https://eth2book.info/bellatrix/part3/config/types/#slot) and [epochs](https://eth2book.info/bellatrix/part3/config/types/#epoch), deterministic fixed-size chunks of time. The first step is to determine when (i.e. in which slot and epoch) duties need to be performed. This is done by the `scheduler` component. It queries the beacon node to detect which validators defined in the cluster lock are active and what duties they need to perform for the upcoming epoch and slots. When such a slot starts, the `scheduler` emits an event indicating which validator needs to perform what duty.
+
+### Fetch and come to consensus on **what** data to sign
+
+A DV cluster consists of multiple operators each provided with one of the M-of-N threshold BLS private key shares per validator. The key shares are imported into the validator clients which produce partial signatures. Charon threshold aggregates these partial signatures before broadcasting them to the Beacon Chain. _But to threshold aggregate partial signatures, each validator must sign the same data._ The cluster must therefore coordinate and come to a consensus on what data to sign.
+
+`Fetcher` fetches the unsigned duty data from the beacon node upon receiving an event from `Scheduler`.\
+For attestations, this is the unsigned attestation; for block proposals, it is the unsigned block.
+
+The `Consensus` component listens to events from Fetcher and starts a [QBFT](https://docs.goquorum.consensys.net/configure-and-manage/configure/consensus-protocols/qbft/) consensus game with the other Charon nodes in the cluster for that specific duty and slot. When consensus is reached, the resulting unsigned duty data is stored in the `DutyDB`.
+
+### **Wait** for the VC to sign
+
+Charon is a **middleware** distributed validator client. That means Charon doesn’t have access to the validator private key shares and cannot sign anything on demand. Instead, operators import the key shares into industry-standard validator clients (VC) that are configured to connect to their local Charon client instead of their local Beacon node directly.
+
+Charon, therefore, serves the [Ethereum Beacon Node API](https://ethereum.github.io/beacon-APIs/#/) from the `ValidatorAPI` component and intercepts some endpoints while proxying other endpoints directly to the upstream Beacon node.
+
+The VC queries the `ValidatorAPI` for unsigned data which is retrieved from the `DutyDB`. It then signs it and submits it back to the `ValidatorAPI` which stores it in the `PartialSignatureDB`.
+
+### **Share** partial signatures
+
+The `PartialSignatureDB` stores the partially signed data submitted by the local Charon client’s VC. But it also stores all the partial signatures submitted by the VCs of other peers in the cluster. This is achieved by the `PartialSignatureExchange` component that exchanges partial signatures between all peers in the cluster. All charon clients, therefore, store all partial signatures the cluster generates.
+
+### **Threshold Aggregate** partial signatures
+
+The `SignatureAggregator` is invoked as soon as sufficient (any M of N) partial signatures are stored in the `PartialSignatureDB`. It performs BLS threshold aggregation of the partial signatures resulting in a final signature that is valid for the beacon chain.
+
+### **Broadcast** final signature
+
+Finally, the `Broadcaster` component broadcasts the final threshold aggregated signature to the Beacon client, thereby completing the duty.
+
+### Ports
+
+The following is an outline of the services that can be exposed by charon.
+
+* **:3600** - The validator REST API. This is the port that serves the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/). This is the port validator clients should talk to instead of their standard consensus client REST API port. Charon subsequently proxies these requests to the upstream consensus client specified by `--beacon-node-endpoints`.
+* **:3610** - Charon P2P port. This is the port that charon clients use to communicate with one another via TCP. This endpoint should be port-forwarded on your router and exposed publicly, preferably on a static IP address. This IP address should then be set on the charon run command with `--p2p-external-ip` or `CHARON_P2P_EXTERNAL_IP`.
+* **:3620** - Monitoring port. This port hosts a webserver that serves prometheus metrics on `/metrics`, a readiness endpoint on `/readyz` and a liveness endpoint on `/livez`, and a pprof server on `/debug/pprof`. This port should not be exposed publicly.
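+
+As an example, a `charon run` invocation exposing these services might look like the following (a sketch only; `--monitoring-address` and all endpoint values are assumptions, so check the flags of your charon version against the CLI reference):
+
+```shell
+# A sketch of a charon run invocation exposing the three service ports above.
+# NOTE: --monitoring-address is an assumed flag name; the URLs and IP are placeholders.
+charon run \
+  --beacon-node-endpoints="http://my-beacon-node:5052" \
+  --validator-api-address="0.0.0.0:3600" \
+  --p2p-tcp-addresses="0.0.0.0:3610" \
+  --p2p-external-ip="203.0.113.10" \
+  --monitoring-address="127.0.0.1:3620"
+```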
+
+## Getting started
+
+For more information on running charon, take a look at our [Quickstart Guides](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.18.0/charon/networking.md b/docs/versioned_docs/version-v0.18.0/charon/networking.md
new file mode 100644
index 0000000000..076981a5c4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/charon/networking.md
@@ -0,0 +1,84 @@
+---
+sidebar_position: 4
+description: Networking
+---
+
+# Charon networking
+
+## Overview
+
+This document describes Charon's networking model which can be divided into two parts: the [_internal validator stack_](networking.md#internal-validator-stack) and the [_external p2p network_](networking.md#external-p2p-network).
+
+## Internal Validator Stack
+
+Charon is a middleware DVT client: it connects to an upstream beacon node, and a downstream validator client connects to it. Each operator should run the whole validator stack (all 4 client software types), either on the same machine or on different machines. The networking between these clients should be private and not exposed to the public internet.
+
+Related Charon configuration flags:
+
+* `--beacon-node-endpoints`: Connects Charon to one or more beacon nodes.
+* `--validator-api-address`: Address for Charon to listen on and serve requests from the validator client.
+
+## External P2P Network
+
+ The Charon clients in a DV cluster are connected to each other via a small p2p network consisting of only the clients in the cluster. Peer IP addresses are discovered via an external "relay" server. The p2p connections are over the public internet so the charon p2p port must be publicly accessible. Charon leverages the popular [libp2p](https://libp2p.io/) protocol.
+
+Related [Charon configuration flags](charon-cli-reference.md):
+
+* `--p2p-tcp-addresses`: Addresses for Charon to listen on and serve p2p requests.
+* `--p2p-relays`: Connect charon to one or more relay servers.
+* `--private-key-file`: Private key identifying the charon client.
+
+### LibP2P Authentication and Security
+
+Each charon client has a secp256k1 private key. The associated public key is encoded into the [cluster lock file](cluster-configuration.md#Cluster-Lock-File) to identify the nodes in the cluster. For ease of use and to align with the Ethereum ecosystem, Charon encodes these public keys in the [ENR format](https://eips.ethereum.org/EIPS/eip-778), not in [libp2p’s Peer ID format](https://docs.libp2p.io/concepts/fundamentals/peers/).
+
+:::warning
+Each Charon node's secp256k1 private key is critical for authentication and must be kept secure to prevent cluster compromise.
+
+Do not use the same key across multiple clusters, as this can lead to security issues.
+
+For more on p2p security, refer to [libp2p's article](https://docs.libp2p.io/concepts/security/security-considerations).
+:::
+
+Charon currently only supports libp2p tcp connections with [noise](https://noiseprotocol.org/) security and only accepts incoming libp2p connections from peers defined in the cluster lock.
+
+### LibP2P Relays and Peer Discovery
+
+Relays are simple libp2p servers that are publicly accessible supporting the [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) protocol. Circuit-relay is a libp2p transport protocol that routes traffic between two peers over a third-party “relay” peer.
+
+Obol hosts a publicly accessible relay at https://0.relay.obol.tech and will work with other organisations in the community to host alternatives. Anyone can host their own relay server for their DV cluster.
+
+Each charon node knows which peers are in the cluster from the ENRs in the cluster lock file, but their IP addresses are unknown. By connecting to the same relay, nodes establish “relay connections” to each other. Once connected via relay they exchange their known public addresses via libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol. The relay connection is then upgraded to a direct connection. If a node’s public IP changes, nodes once again connect via relay, exchange the new IP, and then connect directly once again.
+
+Note that in order for two peers to discover each other, they must connect to the same relay. Cluster operators should therefore coordinate which relays to use.
+
+Libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol attempts to automatically detect the public IP address of a charon client without the need to explicitly configure it. If this however fails, the following two configuration flags can be used to explicitly set the publicly advertised address:
+
+* `--p2p-external-ip`: Explicitly sets the external IP address.
+* `--p2p-external-hostname`: Explicitly sets the external DNS host name.
+
+:::warning
+If a pair of charon clients are not publicly accessible, for example because they are behind a NAT, they will not be able to upgrade their relay connections to direct connections. Even though this is supported, it isn’t recommended: relay connections introduce additional latency and reduced throughput, which results in decreased validator effectiveness and possibly missed block proposals and attestations.
+:::
+
+Libp2p’s circuit-relay connections are end-to-end encrypted. Even though relay servers accept connections from nodes in multiple different clusters, relays merely route opaque connections. And since Charon only accepts incoming connections from other peers in its cluster, the use of a relay doesn’t allow connections between clusters.
+
+Only the following three libp2p protocols are established between a charon node and a relay itself:
+
+* [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/): To establish relayed e2e encrypted connections between two peers in a cluster.
+* [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify): Auto-detection of public IP addresses to share with other peers in the cluster.
+* [peerinfo](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfo.go): Exchanges basic application [metadata](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfopb/v1/peerinfo.proto) for improved operational metrics and observability.
+
+All other charon protocols are only established between nodes in the same cluster.
+
+### Scalable Relay Clusters
+
+In order for a charon client to connect to a relay, it needs the relay's [multiaddr](https://docs.libp2p.io/concepts/fundamentals/addressing/) (containing its public key and IP address). But a single multiaddr can only point to a single relay server which can easily be overloaded if too many clusters connect to it. Charon therefore supports resolving a relay’s multiaddr via HTTP GET request. Since charon also includes the unique `cluster-hash` header in this request, the relay provider can use [consistent header-based load-balancing](https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_steering_header-based_routing) to map clusters to one of many relays using a single HTTP address.
+
+The relay supports serving its runtime public multiaddrs via its `--http-address` flag.
+
+For example, https://0.relay.obol.tech is actually a load balancer that routes HTTP requests to one of many relays based on the `cluster-hash` header, returning the target relay’s multiaddr, which the charon client then uses to connect to that relay.
+
+The charon `--p2p-relays` flag therefore supports both multiaddrs and HTTP URLs.
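+
+To illustrate, both of the following forms could be used (a sketch; the multiaddr, IP and peer ID shown are purely illustrative, and the remaining `charon run` flags are omitted):
+
+```shell
+# Resolve the relay's multiaddr via HTTP (recommended for load-balanced relay clusters)
+charon run --p2p-relays="https://0.relay.obol.tech"
+
+# Or point directly at a relay multiaddr (the IP, port and peer ID below are illustrative)
+charon run --p2p-relays="/ip4/203.0.113.10/tcp/3640/p2p/16Uiu2HAmExamplePeerID"
+```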
diff --git a/docs/versioned_docs/version-v0.18.0/dvl/README.md b/docs/versioned_docs/version-v0.18.0/dvl/README.md
new file mode 100644
index 0000000000..1b694a8473
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/dvl/README.md
@@ -0,0 +1,2 @@
+# dvl
+
diff --git a/docs/versioned_docs/version-v0.18.0/dvl/intro.md b/docs/versioned_docs/version-v0.18.0/dvl/intro.md
new file mode 100644
index 0000000000..9eb7883d60
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/dvl/intro.md
@@ -0,0 +1,18 @@
+---
+sidebar_position: 1
+description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Introduction
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: [**The DV Launchpad**](https://goerli.launchpad.obol.tech/).
+
+## Getting started
+
+For more information on running charon in a UI friendly way through the DV Launchpad, take a look at our [Quickstart Guides](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.18.0/fr/README.md b/docs/versioned_docs/version-v0.18.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.18.0/fr/eth.md b/docs/versioned_docs/version-v0.18.0/fr/eth.md
new file mode 100644
index 0000000000..5d0e258f40
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/fr/eth.md
@@ -0,0 +1,49 @@
+# Ethereum and its Relationship with DVT
+
+Our goal for this page is to equip you with the foundational knowledge needed to actively contribute to the advancement of Obol while also directing you to valuable Ethereum and DVT related resources. Additionally, we will shed light on the intersection of DVT and Ethereum, offering curated articles and blog posts to enhance your understanding.
+
+## **Understanding Ethereum**
+
+To grasp the current landscape of Ethereum's PoS development, we encourage you to delve into the wealth of information available on the [Official Ethereum Website.](https://ethereum.org/en/learn/) The Ethereum website serves as a hub for all things Ethereum, catering to individuals at various levels of expertise, whether you're just starting your journey or are an Ethereum veteran. Here, you'll find a trove of resources that cater to diverse learning needs and preferences, ensuring that there's something valuable for everyone in the Ethereum community to discover.
+
+## **DVT & Ethereum**
+
+### Distributed Validator Technology
+
+> "Distributed validator technology (DVT) is an approach to validator security that spreads out key management and signing responsibilities across multiple parties, to reduce single points of failure, and increase validator resiliency.
+>
+> It does this by splitting the private key used to secure a validator across many computers organized into a "cluster". The benefit of this is that it makes it very difficult for attackers to gain access to the key, because it is not stored in full on any single machine. It also allows for some nodes to go offline, as the necessary signing can be done by a subset of the machines in each cluster. This reduces single points of failure from the network and makes the whole validator set more robust." _(ethereum.org, 2023)_
+
+#### Learn More About Distributed Validator technology from [The Official Ethereum Website](https://ethereum.org/en/staking/dvt/)
+
+### How Does DVT Improve Staking on Ethereum?
+
+If you haven’t yet heard, Distributed Validator Technology, or DVT, is the next big thing on The Merge section of the Ethereum roadmap. Learn more about this in our blog post: [What is DVT and How Does It Improve Staking on Ethereum?](https://blog.obol.tech/what-is-dvt-and-how-does-it-improve-staking-on-ethereum/)
+
+
+
+_**Vitalik's Ethereum Roadmap**_
+
+### Deep Dive Into DVT and Charon’s Architecture
+
+Minimizing correlation is vital when designing DVT as Ethereum Proof of Stake is designed to heavily punish correlated behavior. In designing Obol, we’ve made careful choices to create a trust-minimized and non-correlated architecture.
+
+[**Read more about Designing Non-Correlation Here**](https://blog.obol.tech/deep-dive-into-dvt-and-charons-architecture/)
+
+### Performance Testing Distributed Validators
+
+In our mission to help make Ethereum consensus more resilient and decentralised with distributed validators (DVs), it’s critical that we do not compromise on the performance and effectiveness of validators. Earlier this year, we worked with MigaLabs, the blockchain ecosystem observatory located in Barcelona, to perform an independent test to validate the performance of Obol DVs under different configurations and conditions. After taking a few weeks to fully analyse the results together with MigaLabs, we’re happy to share the results of these performance tests.
+
+[**Read More About The Performance Test Results Here**](https://blog.obol.tech/performance-testing-distributed-validators/)
+
+
+
+### More Resources
+
+* [Sorting out Distributed Validator Technology](https://medium.com/nethermind-eth/sorting-out-distributed-validator-technology-a6f8ca1bbce3)
+* [A tour of Verifiable Secret Sharing schemes and Distributed Key Generation protocols](https://medium.com/nethermind-eth/a-tour-of-verifiable-secret-sharing-schemes-and-distributed-key-generation-protocols-3c814e0d47e1)
+* [Threshold Signature Schemes](https://medium.com/nethermind-eth/threshold-signature-schemes-36f40bc42aca)
+
+#### References
+
+* ethereum.org. (2023). Distributed Validator Technology. \[online] Available at: https://ethereum.org/en/staking/dvt/ \[Accessed 25 Sep. 2023].
diff --git a/docs/versioned_docs/version-v0.18.0/fr/golang.md b/docs/versioned_docs/version-v0.18.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.18.0/int/README.md b/docs/versioned_docs/version-v0.18.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.18.0/int/faq/README.md b/docs/versioned_docs/version-v0.18.0/int/faq/README.md
new file mode 100644
index 0000000000..456ad9139a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/faq/README.md
@@ -0,0 +1,2 @@
+# faq
+
diff --git a/docs/versioned_docs/version-v0.18.0/int/faq/dkg_failure.md b/docs/versioned_docs/version-v0.18.0/int/faq/dkg_failure.md
new file mode 100644
index 0000000000..33ffe9c496
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/faq/dkg_failure.md
@@ -0,0 +1,82 @@
+---
+sidebar_position: 4
+description: Handling DKG failure
+---
+
+# Handling DKG failure
+
+While the DKG process has been tested and validated against many different configuration instances, it can still encounter issues which might result in failure.
+
+Our DKG is designed in a way that doesn't allow for inconsistent results: either it finishes correctly for every peer, or it fails.
+
+This is a **safety** feature: you don't want to deposit an Ethereum distributed validator that not every operator is able to participate in, or that cannot even reach its signing threshold.
+
+The most common source of issues lies in the network stack: if any peer's Internet connection glitches substantially, the DKG will fail.
+
+Charon's DKG doesn't allow peer reconnection once the process is started, but it does allow for re-connections before that.
+
+When you see the following message:
+
+```
+14:08:34.505 INFO dkg Waiting to connect to all peers...
+```
+
+this means your Charon instance is waiting for all the other cluster peers to start their DKG process: at this stage, peers can disconnect and reconnect at will; the DKG process will still continue.
+
+A log line will confirm the connection of a new peer:
+
+```
+14:08:34.523 INFO dkg Connected to peer 1 of 3 {"peer": "fantastic-adult"}
+14:08:34.529 INFO dkg Connected to peer 2 of 3 {"peer": "crazy-bunch"}
+14:08:34.673 INFO dkg Connected to peer 3 of 3 {"peer": "considerate-park"}
+```
+
+As soon as all the peers are connected, this message will be shown:
+
+```
+14:08:34.924 INFO dkg All peers connected, starting DKG ceremony
+```
+
+Past this stage **no disconnections are allowed**, and _all peers must leave their terminals open_ in order for the DKG process to complete: this is a synchronous phase, and every peer is required in order to reach completion.
+
+If for some reason the DKG process fails, you would see error logs that resemble this:
+
+```
+14:28:46.691 ERRO cmd Fatal error: sync step: p2p connection failed, please retry DKG: context canceled
+```
+
+As the error message suggests, the DKG process needs to be retried.
+
+## Cleaning up the `.charon` directory
+
+One cannot simply retry the DKG process: Charon refuses to overwrite any runtime file in order to avoid inconsistencies and private key loss.
+
+When attempting to re-run a DKG with an unclean data directory (either `.charon` or whatever was specified with the `--data-dir` CLI parameter), this is the error that will be shown:
+
+```
+14:44:13.448 ERRO cmd Fatal error: data directory not clean, cannot continue {"disallowed_entity": "cluster-lock.json", "data-dir": "/compose/node0"}
+```
+
+The `disallowed_entity` field lists all the files that Charon refuses to overwrite, while `data-dir` is the full path of the runtime directory the DKG process is using.
+
+In order to retry the DKG process, one must delete the following entities, if present (see the sketch below):
+
+ - `validator_keys` directory
+ - `cluster-lock.json` file
+ - `deposit-data.json` file
+
+:::warning
+The `charon-enr-private-key` file **must be preserved**; losing it requires the DKG process to be restarted from the beginning by creating a new cluster definition.
+:::
+
+If you're doing a DKG with a custom cluster definition - for example, one created with `charon create dkg` rather than the Obol Launchpad - you can re-use the same file.
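+
+For reference, a minimal cleanup might look like the following (a sketch assuming the default `.charon` data directory; double-check paths before deleting anything):
+
+```shell
+# Remove only the DKG outputs; never delete charon-enr-private-key
+# (or cluster-definition.json, if you intend to re-use it)
+rm -rf .charon/validator_keys
+rm -f .charon/cluster-lock.json .charon/deposit-data.json
+```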
+
+Once this process has been completed, the cluster operators can retry a DKG.
+
+## Further debugging
+
+If for some reason the DKG process fails again, node operators are advised to reach out to the Obol team by opening an [issue](https://github.com/ObolNetwork/charon/issues), detailing what troubleshooting steps were taken and providing **debug logs**.
+
+To enable debug logs first clean up the Charon data directory as explained in [the previous paragraph](#cleaning-up-the-charon-directory), then run your DKG command by appending `--log-level=debug` at the end.
+
+In order for the Obol team to debug your issue as quickly and precisely as possible, please provide full logs in textual form, not as screenshots or photos of a display.
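+
+For example, you could re-run the ceremony with debug logging enabled and capture the complete output to a file to attach to the issue (a sketch; the definition file path assumes the default `.charon` layout and the log file name is just an example):
+
+```shell
+# Re-run the DKG with debug logs, saving everything (stdout and stderr) to dkg-debug.log
+charon dkg --definition-file=".charon/cluster-definition.json" --log-level=debug 2>&1 | tee dkg-debug.log
+```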
+
+Providing complete logs is particularly important, since it allows the team to reconstruct precisely what happened.
diff --git a/docs/versioned_docs/version-v0.18.0/int/faq/general.md b/docs/versioned_docs/version-v0.18.0/int/faq/general.md
new file mode 100644
index 0000000000..b0a679eaf9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/faq/general.md
@@ -0,0 +1,68 @@
+---
+sidebar_position: 1
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+## General
+
+### Does Obol have a token?
+
+No. Distributed validators use only Ether.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/n6ebKsX46w) too.
+
+### Where does the name Charon come from?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) \[kharon] is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
+
+### What are the hardware requirements for running a Charon node?
+
+Charon alone uses negligible disk space of not more than a few MBs. However, if you are running your consensus client and execution client on the same server as charon, then you will typically need the same hardware as running a full Ethereum node:
+
+At minimum:
+
+* A CPU with 2+ physical cores (or 4 vCPUs)
+* 8GB RAM
+* 1.5TB+ free SSD disk space (for mainnet)
+* 10Mb/s internet bandwidth
+
+Recommended specifications:
+
+* A CPU with 4+ physical cores
+* 16GB+ RAM
+* 2TB+ free disk on a high performance SSD (e.g. NVMe)
+* 25Mb/s internet bandwidth
+
+For more hardware considerations, check out the [ethereum.org guides](https://ethereum.org/en/developers/docs/nodes-and-clients/run-a-node/#environment-and-hardware) which explores various setups and trade-offs, such as running the node locally or in the cloud.
+
+For now, Geth, Teku & Lighthouse clients are packaged within the docker compose file provided in the [quickstart guides](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/quickstart/group/README.md), so you don't have to install anything else to run a cluster. Just make sure you give them some time to sync once you start running your node.
+
+### What is the difference between a node, a validator and a cluster?
+
+A node is a single instance of Ethereum EL+CL clients that can communicate with other nodes to maintain the Ethereum blockchain.
+
+A validator is a node that participates in the consensus process by verifying transactions and creating new blocks. Multiple validators can run from the same node.
+
+A cluster is a group of nodes that act together as one or several validators, which allows for a more efficient use of resources, reduces operational costs, and provides better reliability and fault tolerance.
+
+### Can I migrate an existing Charon node to a new machine?
+
+It is possible to migrate your Charon node to another machine running the same config by moving the `.charon` folder and its contents to the new machine. Make sure the EL and CL clients on the new machine are synced before the move to minimize downtime.
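+
+For example, a simple way to copy the folder over (a sketch; the host name and the `charon-distributed-validator-node` paths are assumptions based on the default setup):
+
+```shell
+# Stop the old node before copying, so the same node never runs twice
+docker compose down
+rsync -av ~/charon-distributed-validator-node/.charon/ \
+  user@new-machine:~/charon-distributed-validator-node/.charon/
+```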
+
+## Distributed Key Generation
+
+### What are the min and max numbers of operators for a Distributed Validator?
+
+Currently, the minimum is 4 operators with a threshold of 3.
+
+The threshold (aka quorum) corresponds to the minimum numbers of operators that need to be active for the validator(s) to be able to perform its duties. It is defined by the following formula `n-(ceil(n/3)-1)`. We strongly recommend using this default threshold in your DKG as it maximises liveness while maintaining BFT safety. Setting a 4 out of 4 cluster for example, would make your validator more vulnerable to going offline instead of less vulnerable. You can check the recommended threshold values for a cluster [here](../key-concepts.md#distributed-validator-threshold).
+
+## Debugging Errors in Logs
+
+You can check if the containers on your node are outputting errors by running `docker compose logs` on a machine with a running cluster.
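+
+For example (a sketch assuming the default `charon-distributed-validator-node` compose file, where the charon service is named `charon`):
+
+```shell
+# Show the latest log lines from all containers, then follow the charon container only
+docker compose logs --tail=100
+docker compose logs -f charon
+```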
+
+Diagnose some common errors and view their resolutions [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/faq/errors.mdx).
diff --git a/docs/versioned_docs/version-v0.18.0/int/faq/risks.md b/docs/versioned_docs/version-v0.18.0/int/faq/risks.md
new file mode 100644
index 0000000000..eccd9af3bb
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/faq/risks.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 3
+description: Centralization Risks and mitigation
+---
+
+# Centralization risks and mitigation
+
+## Risk: Obol hosting the relay infrastructure
+**Mitigation**: Self-host a relay
+
+One of the risks associated with Obol hosting the [LibP2P relays](../../charon/networking.md) infrastructure that allows peer discovery is that, if Obol-hosted relays go down, peers won't be able to discover each other and perform the DKG. To mitigate this risk, external organizations and node operators can consider self-hosting a relay. This way, if Obol's relays go down, the clusters can still operate through other relays in the network. Ensure that all nodes in the cluster use the same relays; nodes connected to different relays will not be able to find each other.
+
+The following non-Obol entities run relays that you can consider adding to your cluster (you can have more than one per cluster, see the `--p2p-relays` flag of [`charon run`](../../charon/charon-cli-reference.md#the-run-command)):
+
+| Entity | Relay URL |
+|-----------|---------------------------------------|
+| [DSRV](https://www.dsrvlabs.com/) | https://charon-relay.dsrvlabs.dev |
+| [Infstones](https://infstones.com/) | https://obol-relay.infstones.com:3640/ |
+| [Hashquark](https://www.hashquark.io/) | https://relay-2.prod-relay.721.land/ |
+| [Figment](https://figment.io/) | https://relay-1.obol.figment.io/ |
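+
+As noted above, every node in the cluster must be configured with the same set of relays. For example (a sketch only; the flag is assumed to accept a comma-separated list, and the remaining `charon run` flags are omitted):
+
+```shell
+# Connect to the Obol relay plus one community-hosted relay from the table above
+charon run --p2p-relays="https://0.relay.obol.tech,https://charon-relay.dsrvlabs.dev"
+```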
+
+## Risk: Obol being able to update Charon code
+**Mitigation**: Pin specific docker versions or compile from source on a trusted commit
+
+Another risk associated with Obol is having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) running on the network which could introduce vulnerabilities or malicious code. To mitigate this risk, operators can consider pinning specific versions of the code that have been thoroughly tested and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community.
+
+## Risk: Obol hosting the DV Launchpad
+**Mitigation**: Use [`create cluster`](../../charon/charon-cli-reference.md) or [`create dkg`](../../charon/charon-cli-reference.md) locally and distribute the files manually
+
+Hosting the first Charon frontend, the [DV Launchpad](../../dvl/intro.md), on a centralized server could create a single point of failure, as users would have to rely on Obol's server to access the protocol. This could limit the decentralization of the protocol and could make it vulnerable to attacks or downtime. Obol hosting the launchpad on a decentralized network, such as IPFS, is a first step but not enough. This is why the Charon code is open-source and contains a CLI interface to interact with the protocol locally.
+
+To mitigate the risk of launchpad failure, consider using the `create cluster` or `create dkg` commands locally and distributing the key shares files manually.
+
+
+## Risk: Obol going bust/rogue
+**Mitigation**: Use key recovery
+
+The final centralization risk associated with Obol is the possibility of the company going bankrupt or acting maliciously, which would lead to a loss of control over the network and potentially cause damage to the ecosystem. To mitigate this risk, Obol has implemented a key recovery mechanism. This would allow the clusters to continue operating and to retrieve full private keys even if Obol is no longer able to provide support.
+
+A guide to recombine key shares into a single private key can be accessed [here](../quickstart/advanced/quickstart-combine.md).
diff --git a/docs/versioned_docs/version-v0.18.0/int/key-concepts.md b/docs/versioned_docs/version-v0.18.0/int/key-concepts.md
new file mode 100644
index 0000000000..4747da0118
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/key-concepts.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is possible with the use of **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes some of the single points of failure in validation. Should <33% of the participating nodes in a DV cluster go offline, the remaining active nodes can still come to consensus on what to sign and can produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes Geth, Lighthouse, Charon and Teku.
+
+### Execution Client
+
+
+
+An execution client (formerly known as an Eth1 client) specializes in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/charon/intro/README.md).
+
+### Validator Client
+
+
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof of stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Threshold
+
+The number of nodes in a cluster that need to be online and honest for its distributed validators to remain online is outlined in the following table.
+
+| Cluster Size | Threshold | Note |
+| :----------: | :-------: | --------------------------------------------- |
+| 4 | 3/4 | Minimum threshold |
+| 5 | 4/5 | |
+| 6 | 4/6 | Minimum to tolerate two offline nodes |
+| 7 | 5/7 | Minimum to tolerate two **malicious** nodes |
+| 8 | 6/8 | |
+| 9 | 6/9 | Minimum to tolerate three offline nodes |
+| 10 | 7/10 | Minimum to tolerate three **malicious** nodes |
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata. Read more about these ceremonies [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/charon/dkg/README.md).
diff --git a/docs/versioned_docs/version-v0.18.0/int/overview.md b/docs/versioned_docs/version-v0.18.0/int/overview.md
new file mode 100644
index 0000000000..4a72c17f27
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that distributed validators will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that preserve validators' current client and remote signing infrastructure.
+
+Similar to how roll-up technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/dvl/intro/README.md), a [User Interface](https://goerli.launchpad.obol.tech/) for bootstrapping Distributed Validators
+* [Charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/charon/intro/README.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Splits](../sc/introducing-obol-splits.md), a set of solidity smart contracts for the distribution of rewards from Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivized testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+### The Vision
+
+The road to decentralizing stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivization is still extremely valuable. Individuals and staking-as-a-service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group of operators that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators whom they trust sufficiently to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivization scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivization alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+To add an un-gameable incentivization layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of the consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/README.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/README.md
new file mode 100644
index 0000000000..bd2483c7cf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/README.md
@@ -0,0 +1,2 @@
+# quickstart
+
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/activate-dv.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/activate-dv.md
new file mode 100644
index 0000000000..44fad69b6b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/activate-dv.md
@@ -0,0 +1,54 @@
+---
+sidebar_position: 4
+description: Activate the Distributed Validator using the deposit contract
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Activate a DV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+If you have successfully created a distributed validator and you are ready to activate it, congratulations! 🎉
+
+Once you have connected all of your charon clients together, synced all of your ethereum nodes such that the monitoring indicates that they are all healthy and ready to operate, **ONE operator** may proceed to deposit and activate the validator(s).
+
+The `deposit-data.json` to be used to deposit will be located in each operator's `.charon` folder. The copies across every node should be identical and any of them can be uploaded.
+
+:::warning
+If you are being given a `deposit-data.json` file that you didn't generate yourself, please take extreme care to ensure this operator has not given you a malicious `deposit-data.json` file that is not the one you expect. Cross reference the files from multiple operators if there is any doubt. Activating the wrong validator or an invalid deposit could result in complete theft or loss of funds.
+:::
+
+Use any of the following tools to deposit. Please use the third-party tools at your own risk and always double check the staking contract address.
+
+ - [Obol Distributed Validator Launchpad](https://goerli.launchpad.obol.tech/)
+ - [ethereum.org Staking Launchpad](https://launchpad.ethereum.org/)
+ - From a SAFE Multisig (repeat these steps for every validator to deposit in your cluster):
+   - From the SAFE UI, click on `New Transaction`, then `Transaction Builder`, to create a new custom transaction
+   - Enter the address of the beacon chain deposit contract on mainnet
+   - Fill in the transaction information
+   - Set the amount to `32` in ETH
+   - Use your `deposit-data.json` to fill in the required data: `pubkey`, `withdrawal_credentials`, `signature` and `deposit_data_root`. Make sure to prefix the inputs with `0x` to format them as bytes
+   - Click on `Add transaction`
+   - Click on `Create Batch`
+   - Click on `Send Batch`; you can click on `Simulate` to check whether the transaction will execute successfully
+   - Get the minimum threshold of signatures from the other addresses and execute the custom transaction
+
+The activation process can take a minimum of 16 hours, with the maximum time to activation being dictated by the length of the activation queue, which can be weeks.
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/README.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/README.md
new file mode 100644
index 0000000000..965416d689
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/README.md
@@ -0,0 +1,2 @@
+# advanced
+
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/adv-docker-configs.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/adv-docker-configs.md
new file mode 100644
index 0000000000..d14de53e8b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/adv-docker-configs.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 8
+description: Use advanced docker-compose features to have more flexibility and power to change the default configuration.
+---
+
+# Advanced Docker Configs
+
+:::info
+This section is intended for *docker power users*, i.e., for those who are familiar with working with `docker-compose` and want to have more flexibility and power to change the default configuration.
+:::
+
+We use the "Multiple Compose File" feature which provides a very powerful way to override any configuration in `docker-compose.yml` without needing to modify git-checked-in files since that results in conflicts when upgrading this repo.
+See [this](https://docs.docker.com/compose/extends/#multiple-compose-files) for more details.
+
+There are some additional compose files in [this repository](https://github.com/ObolNetwork/charon-distributed-validator-node/), `compose-debug.yml` and `docker-compose.override.yml.sample`, along with the default `docker-compose.yml` file, that you can use for this purpose.
+
+- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To use it alongside the default compose file, run:
+
+```
+docker compose -f docker-compose.yml -f compose-debug.yml up
+```
+
+- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
+
+- To use it, just copy the sample file to `docker-compose.override.yml` and customise it to your liking. Please create this file ONLY when you want to tweak something. This is because the default override file is empty and docker errors if you provide an empty compose file.
+
+```
+cp docker-compose.override.yml.sample docker-compose.override.yml
+
+# Tweak docker-compose.override.yml and then run docker compose up
+docker compose up
+```
+
+- You can also run all these compose files together. This is desirable when you want to use both features, for example, when you want some debugging containers AND also want to override some defaults. To achieve this, you can run:
+
+```
+docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
+```
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/monitoring.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/monitoring.md
new file mode 100644
index 0000000000..fdbec169b9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/monitoring.md
@@ -0,0 +1,100 @@
+---
+sidebar_position: 4
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+# Getting Started Monitoring your Node
+
+Welcome to this comprehensive guide, designed to assist you in effectively monitoring your Charon cluster and nodes, and setting up alerts based on specified parameters.
+
+## Pre-requisites
+
+Ensure the following software is installed:
+
+- Docker: Find the installation guide for Ubuntu **[here](https://docs.docker.com/engine/install/ubuntu/)**
+- Prometheus: You can install it using the guide available **[here](https://prometheus.io/docs/prometheus/latest/installation/)**
+- Grafana: Follow this **[link](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)** to install Grafana
+
+## Import Pre-Configured Charon Dashboards
+
+- Navigate to the **[repository](https://github.com/ObolNetwork/monitoring/tree/main/dashboards)** that contains a variety of Grafana dashboards. For this demonstration, we will utilize the Charon Dashboard json.
+
+- In your Grafana interface, create a new dashboard and select the import option.
+
+- Copy the content of the Charon Dashboard json from the repository and paste it into the import box in Grafana. Click "Load" to proceed.
+
+- Finalize the import by clicking on the "Import" button. At this point, your dashboard should begin displaying metrics. Ensure your Charon client and Prometheus are operational for this to occur.
+
+## Example Alerting Rules
+
+To create alerts for Node-Exporter, follow these steps based on the sample rules provided on the "Awesome Prometheus alerts" page:
+
+1. Visit the **[Awesome Prometheus alerts](https://samber.github.io/awesome-prometheus-alerts/rules.html#host-and-hardware)** page. Here, you will find lists of Prometheus alerting rules categorized by hardware, system, and services.
+
+2. Depending on your need, select the category of alerts. For example, if you want to set up alerts for your system's CPU usage, click on the 'CPU' under the 'Host & Hardware' category.
+
+3. On the selected page, you'll find specific alert rules like 'High CPU Usage'. Each rule will provide the PromQL expression, alert name, and a brief description of what the alert does. You can copy these rules.
+
+4. Paste the copied rules into a rules file referenced by the `rule_files` section of your Prometheus configuration (a minimal sketch follows this list). Make sure you understand each rule before adding it, to avoid unnecessary alerts.
+
+5. Finally, save and apply the configuration file. Prometheus should now trigger alerts based on these rules.
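+
+For instance, a minimal sketch of such a rule file and how to validate it with `promtool` (the file path and the simplified rule below are illustrative, adapted from the Awesome Prometheus alerts collection):
+
+```sh
+# Illustrative only: adjust the paths to your Prometheus installation.
+cat > /etc/prometheus/rules/node-exporter.yml <<'EOF'
+groups:
+  - name: node-exporter
+    rules:
+      - alert: HostOutOfDiskSpace
+        expr: (node_filesystem_avail_bytes * 100) / node_filesystem_size_bytes < 10
+        for: 2m
+        labels:
+          severity: warning
+        annotations:
+          summary: "Host out of disk space (instance {{ $labels.instance }})"
+EOF
+
+# Reference the rule file from prometheus.yml, e.g.:
+#   rule_files:
+#     - "rules/*.yml"
+
+# Validate the rules before reloading Prometheus
+promtool check rules /etc/prometheus/rules/node-exporter.yml
+```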
+
+
+For alerts specific to Charon/Alpha, refer to the alerting rules available in the [ObolNetwork/monitoring](https://github.com/ObolNetwork/monitoring/tree/main/alerting-rules) repository.
+
+## Understanding Alert Rules
+
+1. `ClusterBeaconNodeDown`: This alert is activated when the beacon node in a specified Alpha cluster is offline. The beacon node is crucial for validating transactions and producing new blocks. Its unavailability could disrupt the overall functionality of the cluster.
+2. `ClusterBeaconNodeSyncing`: This alert indicates that the beacon node in a specified Alpha cluster is synchronizing, i.e., catching up with the latest blocks in the cluster.
+3. `ClusterNodeDown`: This alert is activated when a node in a specified Alpha cluster is offline.
+4. `ClusterMissedAttestations`: This alert indicates that there have been missed attestations in a specified Alpha cluster. Missed attestations may suggest that validators are not operating correctly, compromising the security and efficiency of the cluster.
+5. `ClusterInUnknownStatus`: This alert is designed to activate when a node within the cluster is detected to be in an unknown state. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` metric is 0.
+6. `ClusterInsufficientPeers`: This alert is set to activate when the number of peers for a node in the Alpha M1 Cluster #1 is insufficient. The condition is evaluated by checking whether the maximum of `app_monitoring_readyz` equals 4.
+7. `ClusterFailureRate`: This alert is activated when the failure rate of the Alpha M1 Cluster #1 exceeds a certain threshold.
+8. `ClusterVCMissingValidators`: This alert is activated if any validators in the Alpha M1 Cluster #1 are missing.
+9. `ClusterHighPctFailedSyncMsgDuty`: This alert is activated if a high percentage of sync message duties failed in the cluster. The alert is activated if the sum of the increase in failed duties tagged with "sync_message" in the last hour, divided by the sum of the increase in total duties tagged with "sync_message" in the last hour, is greater than 0.1.
+10. `ClusterNumConnectedRelays`: This alert is activated if the number of connected relays in the cluster falls to 0.
+11. `PeerPingLatency`: This alert is activated if the 90th percentile of the ping latency to the peers in a cluster exceeds 500ms within 2 minutes.
+
+## Best Practices for Monitoring Charon Nodes & Cluster
+
+- **Establish Baselines**: Familiarize yourself with the normal operation metrics like CPU, memory, and network usage. This will help you detect anomalies.
+- **Define Key Metrics**: Set up alerts for essential metrics, encompassing both system-level and Charon-specific ones.
+- **Configure Alerts**: Based on these metrics, set up actionable alerts.
+- **Monitor Network**: Regularly assess the connectivity between nodes and the network.
+- **Perform Regular Health Checks**: Consistently evaluate the status of your nodes and clusters.
+- **Monitor System Logs**: Keep an eye on logs for error messages or unusual activities.
+- **Assess Resource Usage**: Ensure your nodes are neither over- nor under-utilized.
+- **Automate Monitoring**: Use automation to ensure no issues go undetected.
+- **Conduct Drills**: Regularly simulate failure scenarios to fine-tune your setup.
+- **Update Regularly**: Keep your nodes and clusters updated with the latest software versions.
+
+## Third-Party Services for Uptime Testing
+
+- [updown.io](https://updown.io/)
+- [Grafana synthetic Monitoring](https://grafana.com/grafana/plugins/grafana-synthetic-monitoring-app/)
+
+## Key metrics to watch to verify node health based on jobs
+
+- Node Exporter:
+
+**CPU Usage**: High or spiking CPU usage can be a sign of a process demanding more resources than it should.
+
+**Memory Usage**: If a node is consistently running out of memory, it could be due to a memory leak or simply under-provisioning.
+
+**Disk I/O**: Slow disk operations can cause applications to hang or delay responses. High disk I/O can indicate storage performance issues or a sign of high load on the system.
+
+**Network Usage**: High network traffic or packet loss can signal network configuration issues, or that a service is being overwhelmed by requests.
+
+**Disk Space**: Running out of disk space can lead to application errors and data loss.
+
+**Uptime**: The amount of time a system has been up without any restarts. Frequent restarts can indicate instability in the system.
+
+**Error Rates**: The number of errors encountered by your application. This could be 4xx/5xx HTTP errors, exceptions, or any other kind of error your application may log.
+
+**Latency**: The delay before a transfer of data begins following an instruction for its transfer.
+
+It is also important to check:
+
+- NTP clock skew
+- Process restarts and failures (e.g. through `node_systemd`)
+- High error and panic log counts (worth alerting on)
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/obol-monitoring.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/obol-monitoring.md
new file mode 100644
index 0000000000..8d9e0ceca1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/obol-monitoring.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 5
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+
+# Push Metrics to Obol Monitoring
+
+:::info
+This is **optional** and does not confer any special privileges within the Obol Network.
+:::
+
+You may have been provided with **Monitoring Credentials** used to push distributed validator metrics to Obol's central Prometheus cluster, to help monitor, analyze, and improve your Distributed Validator Cluster's performance.
+
+The provided credentials need to be added to `prometheus/prometheus.yml`, replacing `$PROM_REMOTE_WRITE_TOKEN`. They will look something like this:
+```
+obol20!tnt8U!C...
+```
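+
+If you prefer to make the substitution from the command line, a sketch using `sed` (GNU sed syntax; on macOS use `sed -i ''`), with the token value below being illustrative:
+
+```
+cp prometheus/prometheus.yml prometheus/prometheus.yml.bak
+sed -i 's/\$PROM_REMOTE_WRITE_TOKEN/obol20!tnt8U!C.../' prometheus/prometheus.yml
+```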
+
+The updated `prometheus/prometheus.yml` file should look like:
+```
+global:
+ scrape_interval: 30s # Set the scrape interval to every 30 seconds.
+ evaluation_interval: 30s # Evaluate rules every 30 seconds.
+
+remote_write:
+ - url: https://vm.monitoring.gcp.obol.tech/write
+ authorization:
+ credentials: obol20!tnt8U!C...
+
+scrape_configs:
+ - job_name: 'charon'
+ static_configs:
+ - targets: ['charon:3620']
+ - job_name: "lodestar"
+ static_configs:
+ - targets: [ "lodestar:5064" ]
+ - job_name: 'node-exporter'
+ static_configs:
+ - targets: ['node-exporter:9100']
+```
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-builder-api.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-builder-api.md
new file mode 100644
index 0000000000..b6af1be01f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-builder-api.md
@@ -0,0 +1,163 @@
+---
+sidebar_position: 2
+description: Run a distributed validator cluster with the builder API (MEV-Boost)
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Run a cluster with MEV enabled
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+This quickstart guide focuses on configuring the builder API for Charon and supported validator and consensus clients.
+
+## Getting started with Charon & the Builder API
+
+Running a distributed validator cluster with the builder API enabled will give the validators in the cluster access to the builder network. This builder network is a network of "Block Builders"
+who work with MEV searchers to produce the most valuable blocks a validator can propose.
+
+[MEV-Boost](https://boost.flashbots.net/) is one such product from Flashbots that enables you to ask multiple
+block relays (who communicate with the "Block Builders") for blocks to propose. The block that pays the largest reward to the validator will be signed and returned to the relay for broadcasting to the wider
+network. The end result for the validator is generally an increased APR, as they receive some share of the MEV.
+
+:::info
+Before completing this guide, please check your cluster version, which can be found inside the `cluster-lock.json` file. If you are using cluster-lock version `1.7.0` or a later release, Obol seamlessly accommodates all validator client implementations within a MEV-enabled distributed validator cluster.
+
+For clusters with a cluster-lock version of `1.6.0` or below, charon is compatible only with [Teku](https://github.com/ConsenSys/teku). Use the version history feature of this documentation to see the instructions for configuring a cluster in that manner (`v0.16.0`).
+:::
+
+## Client configuration
+
+:::note
+You need to add CLI flags to your consensus client, charon client, and validator client, to enable the builder API.
+
+You need all operators in the cluster to have their nodes properly configured to use the builder API, or you risk missing a proposal.
+:::
+
+### Charon
+
+Charon supports the builder API via the `--builder-api` flag. To enable it, simply add this flag to the `charon run` command:
+
+```
+charon run --builder-api
+```
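+
+If you run charon via a docker compose setup like the one in this repo, the same option can typically be set through an environment variable instead, since charon maps each CLI flag to a `CHARON_`-prefixed variable (the `.env` file location is an assumption based on the repo's sample file):
+
+```
+# .env
+CHARON_BUILDER_API=true
+```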
+
+### Consensus Clients
+
+The following flags need to be configured on your chosen consensus client. A Flashbots relay URL is provided for example purposes; you should choose a relay that suits your preferences from [this list](https://github.com/eth-educators/ethstaker-guides/blob/main/MEV-relay-list.md#mev-relay-list-for-mainnet).
+
+
+
+ Teku can communicate with a single relay directly:
+
+
+ {String.raw`--builder-endpoint="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`--builder-endpoint=http://mev-boost:18550`}
+
+
+
+
+ Lighthouse can communicate with a single relay directly:
+
+
+ {String.raw`lighthouse bn --builder "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`lighthouse bn --builder "http://mev-boost:18550"`}
+
+
+
+
+
+
+ {String.raw`prysm beacon-chain --http-mev-relay "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
+
+
+ {String.raw`--payload-builder=true --payload-builder-url="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+      You should also consider adding <code>--local-block-value-boost 3</code> as a flag, to favour locally built blocks if they are within 3% of the value of the relay block, to improve the chances of a successful proposal.
+
+
+
+
+ {String.raw`--builder --builder.urls "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
+
+### Validator Clients
+
+The following flags need to be configured on your chosen validator client:
+
+
+
+
+
+ {String.raw`teku validator-client --validators-builder-registration-default-enabled=true`}
+
+
+
+
+
+
+
+ {String.raw`lighthouse vc --builder-proposals`}
+
+
+
+
+
+
+ {String.raw`prysm validator --enable-builder`}
+
+
+
+
+
+
+ {String.raw`--payload-builder=true`}
+
+
+
+
+
+
+ {String.raw`--builder="true" --builder.selection="builderonly"`}
+
+
+
+
+
+## Verify your cluster is correctly configured
+
+It can be difficult to confirm everything is configured correctly with your cluster until a proposal opportunity arrives, but here are some things you can check.
+
+When your cluster is running, check that charon logs something like the following each epoch:
+```
+13:10:47.094 INFO bcast Successfully submitted validator registration to beacon node {"delay": "24913h10m12.094667699s", "pubkey": "84b_713", "duty": "1/builder_registration"}
+```
+
+This indicates that your charon node is successfully registering with the relay for a blinded block when the time comes.
+
+If you are using the [ultrasound relay](https://relay.ultrasound.money), you can enter your cluster's distributed validator public key(s) into their website, to confirm they also see the validator as correctly registered.
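+
+Relays that implement the standard relay data API also let you query a registration directly; as a sketch (the endpoint path follows the Flashbots relay API specification, and the public key placeholder is yours to replace):
+
+```
+curl "https://boost-relay.flashbots.net/relay/v1/data/validator_registration?pubkey=<YOUR_DV_PUBKEY>"
+```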
+
+You should check that your validator client's logs look healthy, and ensure that you haven't configured a `fee-recipient` address that conflicts with the one selected by your cluster in your cluster-lock file, as that may prevent your validator from producing a signature for the block when the opportunity arises. You should also confirm the same for all of the other peers in your cluster.
+
+Once a proposal has been made, you should look at the `Block Extra Data` field under `Execution Payload` for the block on [Beaconcha.in](https://beaconcha.in/block/18450364), and confirm there is text present; this generally suggests the block came from a builder, and was not a locally constructed block.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-combine.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-combine.md
new file mode 100644
index 0000000000..9d43c90e5c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-combine.md
@@ -0,0 +1,112 @@
+---
+sidebar_position: 9
+description: Combine distributed validator private key shares to recover the validator private key.
+---
+
+# Combine DV private key shares
+
+:::warning
+Reconstituting Distributed Validator private key shares into a standard validator private key is a security risk, and can potentially cause your validator to be slashed.
+
+Only combine private keys as a last resort and do so with extreme caution.
+:::
+
+Combine distributed validator private key shares into an Ethereum validator private key.
+
+## Pre-requisites
+
+- Ensure you have the `.charon` directories of at least a threshold of the cluster's node operators.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Set up the key combination directory tree
+
+Rename each cluster node operator's `.charon` directory to a unique name, to avoid folder name conflicts (a command sketch follows the tree below).
+
+We suggest naming them clearly and distinctly, to avoid confusion.
+
+At the end of this process, you should have a tree like this:
+
+```shell
+$ tree ./cluster
+
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+...
+└── node*
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
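+
+For instance, a sketch of assembling this tree (the source paths are assumptions; use wherever each operator's `.charon` directory was copied to):
+
+```sh
+mkdir -p cluster
+cp -r /path/to/operator0-backup/.charon cluster/node0
+cp -r /path/to/operator1-backup/.charon cluster/node1
+cp -r /path/to/operator2-backup/.charon cluster/node2
+```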
+
+:::warning
+Make sure to never mix the various `.charon` directories with one another.
+
+Doing so can potentially cause the combination process to fail.
+:::
+
+## Step 2. Combine the key shares
+
+Run the following command:
+
+```sh
+# Combine a cluster's private keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 combine --cluster-dir /opt/charon/cluster --output-dir /opt/charon/combined
+```
+
+This command will create one subdirectory for each validator private key that has been combined, named after its public key.
+
+```shell
+$ tree combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+
+We can verify that the directory names are correct by looking at the lock file:
+
+```shell
+$ jq .distributed_validators[].distributed_public_key cluster/node0/cluster-lock.json
+"0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd"
+"0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106"
+```
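+
+You can also compare these with the public keys embedded in the combined keystores themselves (EIP-2335 keystores carry a `pubkey` field, typically shown without the `0x` prefix):
+
+```shell
+jq .pubkey combined/*.json
+```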
+
+:::info
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+Ensure your distributed validator cluster is completely shut down before starting a replacement validator or you are likely to be slashed.
+:::
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-sdk.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-sdk.md
new file mode 100644
index 0000000000..6573d3ecd1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-sdk.md
@@ -0,0 +1,133 @@
+---
+sidebar_position: 1
+description: Create a DV cluster using the Obol Typescript SDK
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Create a DV using the SDK
+
+:::warning
+The Obol-SDK is in a beta state and should be used with caution on testnets only.
+:::
+
+This is a walkthrough of using the [Obol-SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk) to propose a four-node distributed validator cluster for creation using the [DV Launchpad](../../../dvl/intro.md).
+
+## Pre-requisites
+
+- You have [node.js](https://nodejs.org/en) installed.
+
+## Install the package
+
+Install the Obol-SDK package into your development environment
+
+
+
+
+ npm install --save @obolnetwork/obol-sdk
+
+
+
+
+ yarn add @obolnetwork/obol-sdk
+
+
+
+
+## Instantiate the client
+
+The first thing you need to do is create an instance of the Obol SDK client. The client takes two constructor parameters:
+
+- The `chainID` for the chain you intend to use.
+- An ethers.js [signer](https://docs.ethers.org/v6/api/providers/#Signer-signTypedData) object.
+
+```ts
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// Create a dummy ethers signer object with a throwaway private key
+const mnemonic = ethers.Wallet.createRandom().mnemonic?.phrase || "";
+const privateKey = ethers.Wallet.fromPhrase(mnemonic).privateKey;
+const wallet = new ethers.Wallet(privateKey);
+const signer = wallet.connect(null);
+
+// Instantiate the Obol Client for goerli
+const obol = new Client({ chainId: 5 }, signer);
+```
+
+## Propose the cluster
+
+List the Ethereum addresses of participating operators, along with withdrawal and fee recipient address data for each validator you intend for the operators to create.
+
+```ts
+// A config hash is a deterministic hash of the proposed DV cluster configuration
+const configHash = await obol.createClusterDefinition({
+ name: "SDK Demo Cluster",
+ operators: [
+ { address: "0xC35CfCd67b9C27345a54EDEcC1033F2284148c81" },
+ { address: "0x33807D6F1DCe44b9C599fFE03640762A6F08C496" },
+ { address: "0xc6e76F72Ea672FAe05C357157CfC37720F0aF26f" },
+ { address: "0x86B8145c98e5BD25BA722645b15eD65f024a87EC" },
+ ],
+ validators: [
+ {
+ fee_recipient_address: "0x3CD4958e76C317abcEA19faDd076348808424F99",
+ withdrawal_address: "0xE0C5ceA4D3869F156717C66E188Ae81C80914a6e",
+ },
+ ],
+});
+
+console.log(
+ `Direct the operators to https://goerli.launchpad.obol.tech/dv?configHash=${configHash} to complete the key generation process`
+);
+```
+
+## Invite the Operators to complete the DKG
+
+Once the Obol-API returns a `configHash` string from the `createClusterDefinition` method, you can use this identifier to invite the operators to the [Launchpad](../../../dvl/intro.md) to complete the process.
+
+1. Operators navigate to `https://.launchpad.obol.tech/dv?configHash=` and complete the [run a DV with others](../group/quickstart-group-operator.md) flow.
+1. Once the DKG is complete, and operators are using the `--publish` flag, the created cluster details will be posted to the Obol API
+1. The creator will be able to retrieve this data with `obol.getClusterLock(configHash)`, to use for activating the newly created validator.
+
+## Retrieve the created Distributed Validators using the SDK
+
+Once the DKG is complete, the proposer of the cluster can retrieve key data such as the validator public keys and their associated deposit data messages.
+
+```js
+const clusterLock = await obol.getClusterLock(configHash);
+```
+
+Reference lock files can be found [here](https://github.com/ObolNetwork/charon/tree/main/cluster/testdata).
+
+## Activate the DVs using the deposit contract
+
+In order to activate the distributed validators, the cluster operator can retrieve the validators' associated deposit data from the lock file and use it to craft transactions to the `deposit()` method on the deposit contract.
+
+```js
+const validatorDepositData =
+ clusterLock.distributed_validators[validatorIndex].deposit_data;
+
+const depositContract = new ethers.Contract(
+ DEPOSIT_CONTRACT_ADDRESS, // 0x00000000219ab540356cBB839Cbe05303d7705Fa for Mainnet, 0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b for Goerli
+ depositContractABI, // https://etherscan.io/address/0x00000000219ab540356cBB839Cbe05303d7705Fa#code for Mainnet, and replace the address for Goerli
+ signer
+);
+
+const TX_VALUE = ethers.parseEther("32");
+
+const tx = await depositContract.deposit(
+ validatorDepositData.pubkey,
+ validatorDepositData.withdrawal_credentials,
+ validatorDepositData.signature,
+ validatorDepositData.deposit_data_root,
+ { value: TX_VALUE }
+);
+
+const txResult = await tx.wait();
+```
+
+## Usage Examples
+
+Examples of how our SDK can be used are found [here](https://github.com/ObolNetwork/obol-sdk-examples).
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-split.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-split.md
new file mode 100644
index 0000000000..5f9f3dd7b2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/quickstart-split.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 3
+description: Split existing validator keys
+---
+
+# Split existing validator private keys
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+
+This process should only be used if you want to split an _existing validator private key_ into multiple private key shares for use in a Distributed Validator Cluster. If your existing validator is not properly shut down before the Distributed Validator starts, your validator may be slashed.
+
+If you are starting a new validator, you should follow a [quickstart guide](../index.md) instead.
+:::
+
+Split an existing Ethereum validator key into multiple key shares for use in an [Obol Distributed Validator Cluster](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/key-concepts/README.md#distributed-validator-cluster).
+
+## Pre-requisites
+
+* Ensure you have the existing validator keystores (the ones to split) and passwords.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+## Step 1. Clone the charon repo and copy existing keystore files
+
+Clone the [charon](https://github.com/ObolNetwork/charon) repo.
+
+```sh
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon.git
+
+# Change directory
+cd charon/
+
+# Create a folder within this checked out repo
+mkdir split_keys
+```
+
+Copy the existing validator `keystore.json` files into this new folder. Alongside each keystore, add a file with a matching filename but a `.txt` extension containing the keystore's password, e.g., `keystore-0.json` and `keystore-0.txt`.
+
+At the end of this process, you should have a tree like this:
+
+```shell
+├── split_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ ├── keystore-1.txt
+│ ...
+│ ├── keystore-*.json
+│ ├── keystore-*.txt
+```
+
+## Step 2. Split the keys using the charon docker command
+
+Run the following docker command to split the keys:
+
+```shell
+CHARON_VERSION= # E.g. v0.18.0
+CLUSTER_NAME= # The name of the cluster you want to create.
+WITHDRAWAL_ADDRESS= # The address you want to use for withdrawals.
+FEE_RECIPIENT_ADDRESS= # The address you want to use for fee payments.
+NODES= # The number of nodes in the cluster.
+
+docker run --rm -v $(pwd):/opt/charon obolnetwork/charon:${CHARON_VERSION} create cluster --name="${CLUSTER_NAME}" --withdrawal-addresses="${WITHDRAWAL_ADDRESS}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDRESS}" --split-existing-keys --split-keys-dir=/opt/charon/split_keys --nodes ${NODES} --network goerli
+```
+
+The above command will create a `validator_keys` folder along with a `cluster-lock.json` file in `./.charon/cluster` for each node.
+
+Command output:
+
+```shell
+***************** WARNING: Splitting keys **********************
+ Please make sure any existing validator has been shut down for
+ at least 2 finalised epochs before starting the charon cluster,
+ otherwise slashing could occur.
+****************************************************************
+
+Created charon cluster:
+ --split-existing-keys=true
+
+.charon/cluster/
+├─ node[0-*]/ Directory for each node
+│ ├─ charon-enr-private-key Charon networking private key for node authentication
+│ ├─ cluster-lock.json Cluster lock defines the cluster lock file which is signed by all nodes
+│ ├─ validator_keys Validator keystores and password
+│ │ ├─ keystore-*.json Validator private share key for duty signing
+│ │ ├─ keystore-*.txt Keystore password files for keystore-*.json
+```
+
+These split keys can now be used to start a charon cluster.
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/self-relay.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/self-relay.md
new file mode 100644
index 0000000000..ae157214b7
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/advanced/self-relay.md
@@ -0,0 +1,36 @@
+---
+sidebar_position: 7
+description: Self-host a relay
+---
+
+# Self-Host a Relay
+
+If you are experiencing connectivity issues with the Obol hosted relays, or you want to improve your cluster's latency and decentralization, you can opt to host your own relay on a separate open and static internet port.
+
+```
+# Figure out your public IP
+curl v4.ident.me
+
+# Clone the repo and cd into it.
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+cd charon-distributed-validator-node
+
+# Replace 'replace.with.public.ip.or.hostname' in relay/docker-compose.yml with your public IPv4 or DNS hostname
+
+nano relay/docker-compose.yml
+
+docker compose -f relay/docker-compose.yml up
+```
+
+Test whether the relay is publicly accessible. This should return an ENR:
+`curl http://replace.with.public.ip.or.hostname:3640/enr`
+
+Ensure the ENR returned by the relay contains the correct public IP and port by decoding it with https://enr-viewer.com/.
+
+Configure **ALL** charon nodes in your cluster to use this relay:
+
+- Either by adding a flag: `--p2p-relays=http://replace.with.public.ip.or.hostname:3640/enr`
+- Or by setting the environment variable: `CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr`
+
+Note that a local `relay/.charon/charon-enr-private-key` file will be created next to `relay/docker-compose.yml` to ensure a persisted relay ENR across restarts.
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/README.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/README.md
new file mode 100644
index 0000000000..f7eb065fd3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/README.md
@@ -0,0 +1,2 @@
+# alone
+
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/create-keys.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/create-keys.md
new file mode 100644
index 0000000000..883293d4bc
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/create-keys.md
@@ -0,0 +1,58 @@
+---
+sidebar_position: 2
+description: Run all nodes in a distributed validator cluster
+---
+
+# Create the private key shares
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+:::info
+Running a Distributed Validator alone means that a single operator manages all of the nodes of the DV. Depending on the operator's security preferences, the private key shares can be created centrally and distributed securely to each node. This is the focus of the guide below.
+
+Alternatively, the private key shares can be created in a lower-trust manner with a [Distributed Key Generation](../../key-concepts.md#distributed-validator-key-generation-ceremony) process, which avoids the validator private key being stored in full anywhere, at any point in its lifecycle. Follow the [group quickstart](../group/index.md) instead for this latter case.
+:::
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+## Create the key shares locally
+
+Create the artifacts needed to run a DV cluster. First, set the inputs for the DV; check the [Charon CLI reference](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/charon/charon-cli-reference/README.md) for additional optional flags to set.
+
+```
+WITHDRAWAL_ADDR=[ENTER YOUR WITHDRAWAL ADDRESS HERE]
+FEE_RECIPIENT_ADDR=[ENTER YOUR FEE RECIPIENT ADDRESS HERE]
+NB_NODES=[ENTER AMOUNT OF DESIRED NODES]
+NETWORK="goerli"
+```
+
+Then, run this command to create all the key shares and cluster artifacts locally:
+
+```
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create cluster --name="Quickstart Cluster" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes="${NB_NODES}" --network="${NETWORK}" --num-validators=1 --cluster-dir="cluster"
+```
+
+Go to the [Obol Goerli DV Launchpad](https://goerli.launchpad.obol.tech) and select `Create a distributed validator alone`. Follow the steps to configure your DV cluster.
+
+After successful completion, a `cluster/` subdirectory should be created, containing one folder per node of the cluster. Each folder contains the charon artifacts and the partial private keys needed by that node.
+
+Once you have made a backup of the `cluster/` folder, you can move to [deploying this cluster physically](deploy.md).
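+
+For example, a simple local backup could be made with `tar` (the archive name and destination are up to you):
+
+```
+tar -czf cluster-backup-$(date +%F).tar.gz cluster/
+```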
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/deploy.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/deploy.md
new file mode 100644
index 0000000000..f53a530e4f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/deploy.md
@@ -0,0 +1,59 @@
+---
+sidebar_position: 3
+description: Move the private key shares to the nodes and run the cluster
+---
+
+# Deploy the cluster
+
+To distribute your cluster physically and start the DV, each node in the cluster needs one of the folders called `node*/` from the output of the `create cluster` command. Each folder should be copied into a [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) (CDVN) checkout on its target machine and renamed from `node*/` to `.charon/`. (Or you can override `charon run`'s default file locations.) A command sketch follows the directory listings below.
+
+```log
+
+cluster
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ └── keystore-0.txt
+
+```
+
+Once copied to its target machine and renamed to `.charon/`, each node's folder should look like this:
+
+```log
+└── .charon
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── ...
+ ├── keystore-N.json
+ └── keystore-N.txt
+```
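+
+As a sketch of the copy-and-rename step (host names and paths are assumptions; adapt them to where the node repo is checked out on each machine):
+
+```sh
+# On the machine that ran `create cluster`: copy one node folder to each operator machine
+scp -r cluster/node0 user@node0-host:~/charon-distributed-validator-node/
+
+# On the target machine: rename the folder so charon's defaults find it
+cd ~/charon-distributed-validator-node
+mv node0 .charon
+```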
+
+👉 Use the single node [docker compose](https://github.com/ObolNetwork/charon-distributed-validator-node), the kubernetes [manifests](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node), or the [helm chart](https://github.com/ObolNetwork/helm-charts) example repos to get your nodes up and connected after loading the `.charon` folder artifacts into them appropriately.
+
+:::warning
+Right now, the `charon create cluster` command [used earlier to create the private keys](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/quickstart/alone/create-keys/README.md) outputs a folder structure like `cluster/node*/`. Make sure to grab the `./node*/` folders, _rename_ them to `.charon` and then move them to one of the single node repos above. Once all nodes are online, synced, and connected, you will be ready to activate your validator.
+:::
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/test-locally.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/test-locally.md
new file mode 100644
index 0000000000..17e43c4e8d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/alone/test-locally.md
@@ -0,0 +1,81 @@
+---
+sidebar_position: 1
+description: Test the solo cluster locally
+---
+
+# Run a test cluster locally
+:::warning
+This is a demo repo to understand how Distributed Validators work and **is not suitable for a mainnet deployment**.
+
+This guide only runs one Execution Client, one Consensus Client, and six Distributed Validator Charon Client + Validator Client pairs on a single docker instance. As a consequence, if this machine fails, there is no fault tolerance.
+
+Follow these two guides sequentially instead for production deployment: [create keys centrally](./create-keys.md) and [how to deploy them](./deploy.md).
+:::
+
+The [`charon-distributed-validator-cluster`](https://github.com/ObolNetwork/charon-distributed-validator-cluster) repo contains six charon clients in separate docker containers along with an execution client and consensus client, simulating a Distributed Validator cluster running.
+
+The default cluster consists of:
+- [Nethermind](https://github.com/NethermindEth/nethermind), an execution layer client
+- [Lighthouse](https://github.com/sigp/lighthouse), a consensus layer client
+- Six [charon](https://github.com/ObolNetwork/charon) nodes
+- A mixture of validator clients:
+  - VC0: [Lighthouse](https://github.com/sigp/lighthouse)
+  - VC1: [Teku](https://github.com/ConsenSys/teku)
+  - VC2: [Nimbus](https://github.com/status-im/nimbus-eth2)
+  - VC3: [Lighthouse](https://github.com/sigp/lighthouse)
+  - VC4: [Teku](https://github.com/ConsenSys/teku)
+  - VC5: [Nimbus](https://github.com/status-im/nimbus-eth2)
+
+## Pre-requisites
+
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Ensure you have [git](https://git-scm.com/downloads) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Create the key shares locally
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+ `.env.sample` is a sample environment file that allows overriding default configuration defined in `docker-compose.yml`. Uncomment and set any variable to override its value.
+
+3. Create the artifacts needed to run a DV cluster by running the following command:
+
+ ```sh
+ # Enter required validator addresses
+ WITHDRAWAL_ADDR=
+ FEE_RECIPIENT_ADDR=
+
+ # Create a distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create cluster --name="mycluster" --cluster-dir=".charon/cluster/" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes 6 --network goerli --num-validators=1
+ ```
+
+These commands will create six folders within `.charon/cluster`, one for each node created. Each `node*` folder needs to be renamed to `.charon` for the default `charon run` command to find it, or you can pass the paths explicitly, e.g. `charon run --private-key-file=".charon/cluster/node0/charon-enr-private-key" --lock-file=".charon/cluster/node0/cluster-lock.json"` for each instance of charon you start.
+
+## Start the cluster
+
+Run this command to start your cluster containers
+
+```sh
+# Start the distributed validator cluster
+docker compose up --build
+```
+Check the monitoring dashboard and see if things look all right:
+
+```sh
+# Open Grafana
+open http://localhost:3000/d/laEp8vupp
+```
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/group/README.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/README.md
new file mode 100644
index 0000000000..56f83ad21c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/README.md
@@ -0,0 +1,2 @@
+# group
+
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/group/index.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/index.md
new file mode 100644
index 0000000000..aad97705e5
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/index.md
@@ -0,0 +1,12 @@
+# Run a cluster as a group
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+:::info
+Running a Distributed Validator with others typically means that several operators run the various nodes of the cluster. In such a case, the key shares should be created with a [distributed key generation process](../../key-concepts.md#distributed-validator-key-generation-ceremony), avoiding the private key being stored in full, anywhere.
+:::
+
+There are two sequential user journeys when setting up a DV cluster with others. Each comes with its own quickstart:
+
+1. The [**Creator** (**Leader**) Journey](quickstart-group-leader-creator.md), which outlines the steps to propose a Distributed Validator Cluster.
+   * In the **Creator** case, the person creating the cluster _will NOT_ be a node operator in the cluster.
+   * In the **Leader** case, the person creating the cluster _will_ be a node operator in the cluster.
+2. The [**Operator** Journey](quickstart-group-operator.md), which outlines the steps to create a Distributed Validator Cluster proposed by a leader or creator using the above process.
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-cli.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-cli.md
new file mode 100644
index 0000000000..033d5243ce
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-cli.md
@@ -0,0 +1,124 @@
+---
+sidebar_position: 3
+description: Run one node in a multi-operator distributed validator cluster using the CLI
+---
+
+# Using the CLI
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster via the CLI.
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+* Decide who the Leader or Creator of your cluster will be. Only they have to perform [step 2](quickstart-cli.md#step-2-leader-creates-the-dkg-configuration-file-and-distributes-it-to-everyone-else) and [step 5](quickstart-cli.md#step-5-activate-the-deposit-data) in this quickstart. They do not get any special privilege.
+ * In the **Leader** case, the operator creating the cluster will also operate a node in the cluster.
+ * In the **Creator** case, the cluster is created by an external party to the cluster.
+
+## Step 1. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, all operators (including the leader but NOT a creator) need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/faq/errors.mdx) for their charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+:::warning
+Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.**
+:::
+
+Finally, share your ENR with the leader or creator so that they can proceed to Step 2.
+
+## Step 2. Leader or Creator creates the DKG configuration file and distributes it to cluster operators
+
+1. The leader or creator of the cluster will prepare the `cluster-definition.json` file for the Distributed Key Generation ceremony using the `charon create dkg` command.
+
+```
+# Prepare an environment variable file
+cp .env.create_dkg.sample .env.create_dkg
+```
+
+2. Populate the newly created `.env.create_dkg` file with the cluster name, the fee recipient and withdrawal Ethereum addresses, and the ENRs of all the operators participating in the cluster.
+   * The generated file is hidden by default. To view it, run `ls -al` in your terminal. On macOS, press `Cmd + Shift + .` in Finder to show hidden files.
+3. Run the `charon create dkg` command to generate the DKG `cluster-definition.json` file.
+
+```
+docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.18.0 create dkg
+```
+
+This command should output a file at `.charon/cluster-definition.json`. This file needs to be shared with the other operators in the cluster.
+
+## Step 3. Run the DKG
+
+After receiving the `cluster-definition.json` file created by the leader, cluster operators should ideally save it in the `.charon/` folder that was created during step 1; alternatively, the `--definition-file` flag can override the default expected location for this file.
+
+Every cluster member then participates in the DKG ceremony. For Charon v1, this needs to happen relatively synchronously between participants at an agreed time.
+
+```
+# Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 dkg
+```
+
+> This is a helpful [video walkthrough](https://www.youtube.com/watch?v=94Pkovp5zoQ&ab_channel=ObolNetwork).
+
+Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
+
+* A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+* A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+* A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
+
+:::warning
+Please make sure to create a backup of `.charon/validator_keys`. **If you lose your keys you won't be able to start the DV cluster successfully.**
+:::
+
+:::info
+The `cluster-lock` and `deposit-data` files are identical for each operator and can be copied if lost.
+:::
+
+## Step 4. Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
+
+Before completing these instructions, you should assign a static local IP address to your device (extending the DHCP reservation indefinitely, or removing the device from the DHCP pool entirely if you prefer), and port forward TCP on the public port `:3610` on your router to your device's local IP address on the same port. This step is different for every home internet setup and can be complicated by the presence of dynamic public IP addresses. We are currently working on making this as easy as possible, but for the time being, a distributed validator cluster will not work very resiliently if the charon nodes cannot talk directly to one another and instead need an intermediary node forwarding traffic to them.
+
+**Caution**: If you manually update `docker-compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It's best not to do this: `lighthouse` checkpoint-syncs, so syncing does not take much time anyway.
+
+**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
+
+```
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up
+
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+You should use the Grafana dashboard to infer whether your cluster is healthy. In particular, you should check:
+
+* That your charon client can connect to the configured beacon client.
+* That your charon client can connect to all peers.
+
+Most components in the dashboard have help text to assist you in understanding your cluster's performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually ~16 hours after the deposit is made).
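+
+You can also tail charon's logs directly to look for these messages (the service name `charon` is assumed from this repo's default `docker-compose.yml`):
+
+```
+docker compose logs -f charon
+```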
+
+If at any point you need to turn off your node, you can run:
+
+```
+# Shut down the currently running distributed validator node
+docker compose down
+```
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-leader-creator.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-leader-creator.md
new file mode 100644
index 0000000000..a658ba536d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-leader-creator.md
@@ -0,0 +1,125 @@
+---
+sidebar_position: 1
+description: A leader/creator creates a cluster configuration to be shared with operators
+---
+
+# Creator & Leader Journey
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+The following instructions aim to assist with the preparation of a distributed validator key generation ceremony. Select the _Leader_ tab if you **will** be an operator participating in the cluster, and select the _Creator_ tab if you **will NOT** be an operator in the cluster.
+
+These roles hold no position of privilege in the cluster; they only set the initial terms of the cluster that the other operators agree to.
+
+**Leader**: The person creating the cluster will be a node operator in the cluster.
+
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+
+**Creator**: The person creating the cluster will not be a node operator in the cluster.
+
+### Overview Video
+
+### Step 1. Collect Ethereum addresses of the cluster operators
+
+Before starting the cluster creation, you will need to collect one Ethereum address per operator in the cluster. They will need to be able to sign messages with this address through MetaMask. Broader wallet support will be added in the future.
+
+### Step 2. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, you need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/faq/errors.mdx#enrs-keys) for your charon client. Operators in your cluster will also need to do this step, as per their [quickstart](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-operator/README.md#step-2-create-and-back-up-a-private-key-for-charon). This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update docker permissions](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/faq/errors/README.md#docker-permission-denied-error) to allow the command to run successfully.
+
+:::warning
+Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.**
+:::
+
+If you are a **Creator** (you will not operate a node yourself), this step is not needed and you can move on to [Step 3](quickstart-group-leader-creator.md#step-3-create-the-dkg-configuration-file-and-distribute-it-to-cluster-operators).
+
+### Step 3. Create the DKG configuration file and distribute it to cluster operators
+
+You will prepare the configuration file for the distributed key generation ceremony using the launchpad.
+
+1. Go to the [DV Launchpad](https://goerli.launchpad.obol.tech)
+2. Connect your wallet
+
+
+
+3. Select `Create a Cluster with a group` then `Get Started`.
+
+
+
+4. Follow the flow and accept the advisories.
+5. Configure the Cluster
+
+    * Input the `Cluster Name` & `Cluster Size` (i.e. the number of operators in the cluster). The threshold for the cluster to operate successfully will update automatically.
+    * ⚠️ If you are a **Leader** (you will also operate a node), leave the `Non-Operator` toggle OFF.
+    * ⚠️ If you are a **Creator** (you will not operate a node), turn the `Non-Operator` toggle ON.
+ * Input the Ethereum addresses for each operator collected during [step 1](quickstart-group-leader-creator.md#step-1-collect-ethereum-addresses-of-the-cluster-operators).
+ * Select the desired amount of validators (32 ETH each) the cluster will run.
+ * Paste your `ENR` generated at [Step 2](quickstart-group-leader-creator.md#step-2-create-and-back-up-a-private-key-for-charon).
+ * Select the `Withdrawal Addresses` method. Use `Single address` to receive the principal and fees to a single address or `Splitter Contracts` to share them among operators.
+ * Enter the `Withdrawal Address` that will receive the validator effective balance at exit and when balance skimming occurs.
+ * Enter the `Fee Recipient Address` to receive MEV rewards (if enabled), and block proposal priority fees.
+    * You can set them to be the same as your connected wallet address in one click.
+
+ * Enter the Ethereum address to claim the validator principal (32 ether) at exit.
+ * Enter the Ethereum addresses and their percentage split of the validator's rewards. Validator rewards include consensus rewards, MEV rewards and proposal priority fees.
+ * Click `Create Cluster Configuration`
+
+6. Review the cluster configuration. If you selected `Splitter Contracts`, deploy the Obol Splits contracts by signing the transaction with your wallet.
+7. You will be asked to confirm your configuration and to sign:
+   * The `config_hash`. This is a hashed representation of the details of this cluster, to ensure everyone is agreeing to an identical setup.
+   * If you are a **Leader** (also operating a node): the `operator_config_hash`. This is your acceptance of the terms as a participating node operator.
+   * If you are a **Leader**: your `ENR`. Signing your ENR authorises the corresponding private key to act on your behalf in the cluster.
+
+8. Share your cluster invite link with the operators. Following the link will show you a screen waiting for other operators to accept the configuration you created.
+
+
+
+👉 If you are a **Leader**: once every participating operator has signed their approval to the terms, you will continue the [**Operator** journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-operator/README.md#step-3-run-the-dkg) by completing the distributed key generation step.
+
+If you are a **Creator**: your journey ends here, and you can use the invite link to monitor whether the operators confirm their agreement to the cluster by signing their approval. Future versions of the launchpad will allow a creator to track a distributed validator's lifecycle in its entirety.
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-operator.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-operator.md
new file mode 100644
index 0000000000..7e932abe1f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-operator.md
@@ -0,0 +1,133 @@
+---
+sidebar_position: 1
+description: A node operator joins a DV cluster
+---
+
+# Operator Journey
+
+:::warning Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster after receiving a cluster invite link from a leader or creator.
+
+## Overview Video
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+
+## Step 1. Share an Ethereum address with your Leader or Creator
+
+Before starting the cluster creation, make sure you have shared an Ethereum address with your cluster **Leader** or **Creator**. If you haven't chosen someone as a Leader or Creator yet, please go back to the [Quickstart intro](index.md) and define one person to go through the [Leader & Creator Journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/quickstart/group/quickstart-group-leader-creator/README.md) before moving forward.
+
+## Step 2. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, you need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/faq/errors.mdx#enrs-keys) for your charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.18.0 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update docker permissions](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/faq/errors/README.md#docker-permission-denied-error) to allow the command to run successfully.
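+
+On Linux, a common fix (as described in Docker's post-installation documentation) is to add your user to the `docker` group and then start a new shell session, for example:
+
+```sh
+# Add the current user to the docker group (requires sudo), then log out and back in
+sudo usermod -aG docker "$USER"
+```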
+
+:::warning Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.** :::
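+
+For example, a minimal way to copy the key to a separate backup location (the backup path is illustrative, use whatever secure location suits you):
+
+```sh
+# Copy the ENR private key to a backup directory outside the repo (illustrative path)
+mkdir -p ~/charon-backups
+cp .charon/charon-enr-private-key ~/charon-backups/
+chmod 600 ~/charon-backups/charon-enr-private-key
+```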
+
+## Step 3. Join and sign the cluster configuration
+
+After receiving the invite link created by the **Leader** or **Creator**, you will be able to join and sign the cluster configuration created.
+
+1. Go to the DV launchpad link provided by the leader or creator.
+2. Connect your wallet using the Ethereum address provided to the leader in [step 1](quickstart-group-operator.md#step-1-share-an-ethereum-address-with-your-leader-or-creator).
+
+
+
+3. Review the operators' addresses submitted and click `Get Started` to continue.
+
+
+
+4. Review and accept the advisories.
+5. Review the configuration created by the leader or creator and add your `ENR` generated in [step 2](quickstart-group-operator.md#step-2-create-and-back-up-a-private-key-for-charon).
+
+
+
+6. Sign the following with your wallet
+ * The config hash. This is a hashed representation of all of the details for this cluster.
+ * Your own `ENR`. This signature authorises the key represented by this ENR to act on your behalf in the cluster.
+7. Wait for all the other operators in your cluster to do the same.
+
+## Step 4. Run the DKG
+
+:::info For the [DKG](../../../charon/dkg.md) to complete, all operators need to be running the command simultaneously. It helps to coordinate an agreed upon time amongst operators at which to run the command. :::
+
+### Overview
+
+1. Once all operators have successfully signed, your screen will automatically advance to the next step and look like this. Click `Continue`. If you closed the tab, just go back to the invite link shared by the leader and connect your wallet.
+
+
+
+2. You have two options to perform the DKG.
+
+ 1. **Option 1** and default is to copy and run the `docker` command on the screen into your terminal. It will retrieve the remote cluster details and begin the DKG process.
+ 2. **Option 2** (Manual DKG) is to download the `cluster-definition` file manually and move it to the hidden `.charon` folder. Then, every cluster member participates in the DKG ceremony by running the command displayed.
+
+ 
+3. Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
+ * A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+ * A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+ * A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
+
+:::warning Please make sure to create a backup of `.charon/validator_keys`. **If you lose your keys you won't be able to start the DV cluster successfully.** :::
+
+:::info The `cluster-lock` and `deposit-data` files are identical for each operator and can be copied if lost. :::
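+
+Assuming the default output directory described above, a quick way to confirm the artefacts exist is simply to list them:
+
+```sh
+# List the DKG outputs and the generated key shares
+ls .charon
+ls .charon/validator_keys
+```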
+
+## Step 5. Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
+
+Before completing these instructions, you should assign a static local IP address to your device (extending the DHCP reservation indefinitely, or removing the device from the DHCP pool entirely if you prefer), and port forward the TCP protocol on the public port `:3610` on your router to your device's local IP address on the same port. This step is different for every home internet setup and can be complicated by the presence of dynamic public IP addresses. We are currently working on making this as easy as possible, but for the time being, a distributed validator cluster won't work very resiliently if the charon nodes cannot talk directly to one another and instead need an intermediary node forwarding traffic to them.
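+
+One rough way to check that the port forward works, assuming `netcat` is installed on a machine outside your network, is to test the TCP port directly (replace the address with your public IP or DNS name):
+
+```sh
+# From a machine outside your network: verify TCP :3610 is reachable
+nc -vz your-public-ip-or-dns 3610
+```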
+
+**Caution**: If you manually update `docker compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It is best not to do this, as `lighthouse` checkpoint-syncs and the sync does not take much time.
+
+**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
+
+```
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up
+
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+You should use the grafana dashboard to infer whether your cluster is healthy. In particular you should check:
+
+* That your charon client can connect to the configured beacon client.
+* That your charon client can connect to all peers.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
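+
+In addition to the dashboard, you can check connectivity from the logs. A minimal sketch, assuming the compose service is named `charon` as in this repo:
+
+```sh
+# Follow charon's logs and look for beacon node and peer connection messages
+docker compose logs -f charon
+```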
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually \~16 hours after the deposit is made).
+
+If at any point you need to turn off your node, you can run:
+
+```
+# Shut down the currently running distributed validator node
+docker compose down
+```
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/index.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/index.md
new file mode 100644
index 0000000000..0b3f009212
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/index.md
@@ -0,0 +1,8 @@
+# Quickstart Guides
+
+:::warning Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+There are two ways to set up a distributed validator and each comes with its own quickstart
+
+1. [Run a DV cluster as a **group**](group/index.md), where several operators run the nodes that make up the cluster. In this setup, the key shares are created using a distributed key generation process, avoiding the full private keys being stored in full in any one place. This approach can also be used by single operators looking to manage all nodes of a cluster but wanting to create the key shares in a trust-minimised fashion.
+2. [Run a DV cluster **alone**](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/quickstart/quickstart/alone/create-keys/README.md), where a single operator runs all the nodes of the DV. Depending on trust assumptions, there is not necessarily the need to create the key shares via a DKG process. Instead the key shares can be created in a centralised manner, and distributed securely to the nodes.
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/quickstart-exit.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/quickstart-exit.md
new file mode 100644
index 0000000000..86b507ffd3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/quickstart-exit.md
@@ -0,0 +1,87 @@
+---
+sidebar_position: 6
+description: Exit a validator
+---
+
+# quickstart-exit
+
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+## Exit a DV
+
+:::warning Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+Users looking to exit staking entirely and withdraw their full balance must sign and broadcast a "voluntary exit" message with their validator keys, which starts the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas. In the case of a DV, each charon node needs to broadcast a partial exit to the other nodes of the cluster. Once a threshold of partial exits has been received by any node, the full voluntary exit will be sent to the beacon chain.
+
+This process will take 27 hours or longer depending on the current length of the exit queue.
+
+:::info
+
+* A threshold of operators needs to run the exit command for the exit to succeed.
+* If a charon client restarts after the exit command is run but before the threshold is reached, it will lose the partial exits it has received from the other nodes. If all charon clients restart and thus all partial exits are lost before the required threshold of exit messages is reached, operators will have to rebroadcast their partial exit messages. :::
+
+### Run the `voluntary-exit` command on your validator client
+
+Run the appropriate command on your validator client to broadcast an exit message from your validator client to its upstream charon client.
+
+It needs to be the validator client that is connected to your charon client taking part in the DV, as you are only signing a partial exit message, with a partial private key share, which your charon client will combine with the other partial exit messages from the other operators.
+
+:::info
+
+* All operators need to use the same `EXIT_EPOCH` for the exit to be successful. Assuming you want to exit as soon as possible, the default epoch of `162304` included in the below commands should be sufficient.
+* Partial exits can be broadcast from any validator client, as long as the total number of partial exits reaches the threshold for the cluster. :::
+
+```
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=162304`}
+
+
+```
+
+The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the Keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+For each file in the `/home/user/data/wd/secrets` directory, it:
+
+* Extracts the filename without the extension, as the file name is the public key.
+* Appends `--validator=$filename` to the `command` variable.
+
+It then executes `nimbus_beacon_node` with the following arguments:
+
+* `deposits exit`: Exits validators.
+* `$command`: The generated command string from the loop.
+* `--epoch=162304`: The epoch upon which to submit the voluntary exit.
+* `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+* `--data-dir=/home/user/data/wd/`: Specifies the `Keystore path` which has all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+/home/user/nimbus_beacon_node deposits exit $command --epoch=162304 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+```
+
+Run `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+* `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+* `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+* `--exitEpoch=162304`: The epoch upon which to submit the voluntary exit.
+* `--network=goerli`: Specifies the network.
+* `--yes`: Skips the confirmation prompt.
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=162304 --network=goerli --yes'`}
+
+```
+
+Once a threshold of exit signatures has been received by any single charon client, it will craft a valid voluntary exit message and will submit it to the beacon chain for inclusion. You can monitor partial exits stored by each node in the [Grafana Dashboard](https://github.com/ObolNetwork/charon-distributed-validator-node).
+
+### Exit epoch and withdrawable epoch
+
+The process of a validator exiting from staking takes variable amounts of time, depending on how many others are exiting at the same time.
+
+Immediately upon broadcasting a signed voluntary exit message, the exit epoch and withdrawable epoch values are calculated based off the current epoch number. These values determine exactly when the validator will no longer be required to be online performing validation, and when the validator is eligible for a full withdrawal respectively.
+
+1. Exit epoch - epoch at which your validator is no longer active, no longer earning rewards, and is no longer subject to slashing rules. :::warning Up until this epoch (while "in the queue") your validator is expected to be online and is held to the same slashing rules as always. Do not turn your DV node off until this epoch is reached. :::
+2. Withdrawable epoch - epoch at which your validator funds are eligible for a full withdrawal during the next validator sweep. This occurs 256 epochs after the exit epoch, which takes \~27.3 hours.
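+
+If you have exposed your beacon node's REST API (for example lighthouse's HTTP port `5052`), you can query these values for your validator. A sketch, assuming the standard beacon node API and a locally exposed port:
+
+```sh
+# Query a validator's status, exit_epoch and withdrawable_epoch (replace the pubkey with your own)
+curl -s http://localhost:5052/eth/v1/beacon/states/head/validators/0xYOUR_VALIDATOR_PUBKEY
+```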
+
+### How to verify a validator exit
+
+Consult the examples below and compare them to your validator's monitoring to verify that exits from each operator in the cluster are being received. This example is a cluster of 4 nodes with 2 validators, where a threshold of 3 nodes broadcasting exits is required.
+
+1. Operator 1 broadcasts an exit on validator client 1.  
+2. Operator 2 broadcasts an exit on validator client 2.  
+3. Operator 3 broadcasts an exit on validator client 3.  
+
+At this point, the threshold of 3 has been reached and the validator exit process will start. You can confirm this in each node's charon logs.
+
+:::tip Once a validator has broadcast an exit message, it must continue to validate for at least 27 hours, and possibly longer. Do not shut off your distributed validator nodes until your validator has fully exited. :::
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/quickstart-mainnet.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/quickstart-mainnet.md
new file mode 100644
index 0000000000..f054fbab48
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/quickstart-mainnet.md
@@ -0,0 +1,103 @@
+---
+sidebar_position: 7
+description: Run a cluster on mainnet
+---
+
+# Run a DV on mainnet
+
+:::warning Charon is in a beta state, and **you should proceed only if you accept the risk, the** [**terms of use**](https://obol.tech/terms.pdf)**, and have tested running a Distributed Validator on a testnet first**.
+
+Distributed Validators created for goerli cannot be used on mainnet and vice versa. Please take caution when creating, backing up, and activating mainnet validators. Incorrect usage may result in a loss of funds. :::
+
+This section is intended for users who wish to run their Distributed Validator on Ethereum mainnet.
+
+## Pre-requisites
+
+* You have [enough up-to-spec nodes](../key-concepts.md#distributed-validator-threshold) for your mainnet deployment.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed on each node.
+* Ensure you have [git](https://git-scm.com/downloads) installed on each node.
+* Make sure `docker` is running before executing the commands below.
+
+## Steps
+
+### Using charon-distributed-validator-node in full
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) repo and `cd` into the directory.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+```
+
+2. If you have already cloned the repo previously, make sure that it is [up-to-date](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/int/quickstart/update/README.md).
+3. Copy the `.env.sample.mainnet` file to `.env`
+
+```
+cp -n .env.sample.mainnet .env
+```
+
+4. Run the docker compose file
+
+```
+docker compose up -d
+```
+
+Once your clients can connect and sync appropriately, your DV stack is now mainnet ready 🎉
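+
+A couple of rough ways to check that everything is coming up, assuming the default service names in this repo's compose file:
+
+```sh
+# Confirm all services are running
+docker compose ps
+
+# Follow the logs of the execution and consensus clients while they sync
+docker compose logs -f geth lighthouse
+```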
+
+### Using a remote mainnet beacon node
+
+:::warning Using a remote beacon node will impact the performance of your Distributed Validator and should be used sparingly. :::
+
+If you already have a mainnet beacon node running somewhere and you want to use that instead of running EL (`geth`) & CL (`lighthouse`) as part of the repo, you can disable these images. To do so, follow these steps:
+
+1. Copy the `docker-compose.override.yml.sample` file
+
+```
+cp -n docker-compose.override.yml.sample docker-compose.override.yml
+```
+
+2. Uncomment the `profiles: [disable]` section for both `geth` and `lighthouse`. The override file should now look like this
+
+```
+services:
+ geth:
+ # Disable geth
+ profiles: [disable]
+ # Bind geth internal ports to host ports
+ #ports:
+ #- 8545:8545 # JSON-RPC
+ #- 8551:8551 # AUTH-RPC
+ #- 6060:6060 # Metrics
+
+ lighthouse:
+ # Disable lighthouse
+ profiles: [disable]
+ # Bind lighthouse internal ports to host ports
+ #ports:
+ #- 5052:5052 # HTTP
+ #- 5054:5054 # Metrics
+...
+```
+
+3. Then, uncomment and set the `CHARON_BEACON_NODE_ENDPOINTS` variable in the `.env` file to your mainnet beacon node's URL
+
+```
+...
+# Connect to one or more external beacon nodes. Use a comma separated list excluding spaces.
+CHARON_BEACON_NODE_ENDPOINTS=
+...
+```
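+
+For illustration only, with a hypothetical remote beacon node URL (replace with your own endpoint, or a comma separated list of endpoints):
+
+```sh
+# Hypothetical example value - not a real endpoint
+CHARON_BEACON_NODE_ENDPOINTS=http://my-beacon-node.example.com:5052
+```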
+
+4. Restart your docker compose
+
+```
+docker compose down
+docker compose up -d
+```
+
+### Exit a mainnet Distributed Validator
+
+If you want to exit your mainnet validator, refer to our [exit guide](quickstart-exit.md).
diff --git a/docs/versioned_docs/version-v0.18.0/int/quickstart/update.md b/docs/versioned_docs/version-v0.18.0/int/quickstart/update.md
new file mode 100644
index 0000000000..e6ca215bec
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/int/quickstart/update.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 5
+description: Update your DV cluster with the latest Charon release
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Update a DV
+
+It is highly recommended to upgrade your DV stack from time to time. This ensures that your node is secure, performant, and up-to-date, and that you don't miss important hard forks.
+
+To do this, follow these steps:
+
+### Navigate to the node directory
+
+Depending on which repo you used to set up your node, run one of the following:
+
+```
+cd charon-distributed-validator-node
+```
+
+or
+
+```
+cd charon-distributed-validator-cluster
+```
+
+### Pull latest changes to the repo
+```
+git pull
+```
+
+### Create (or recreate) your DV stack
+```
+docker compose up -d --build
+```
+:::warning
+If you run more than one node in a DV cluster, take caution when upgrading them simultaneously, particularly if you are updating or changing the validator client used, or recreating disks. It is recommended to update nodes sequentially to minimise liveness and safety risks.
+:::
+
+### Conflicts
+
+:::info
+You may get a `git conflict` error similar to this:
+:::
+```
+error: Your local changes to the following files would be overwritten by merge:
+prometheus/prometheus.yml
+...
+Please commit your changes or stash them before you merge.
+```
+This is probably because you have made some changes to some of the files, for example to the `prometheus/prometheus.yml` file.
+
+To resolve this error, you can either:
+
+- Stash and reapply changes if you want to keep your custom changes:
+ ```
+ git stash # Stash your local changes
+ git pull # Pull the latest changes
+ git stash apply # Reapply your changes from the stash
+ ```
+ After reapplying your changes, manually resolve any conflicts that may arise between your changes and the pulled changes using a text editor or Git's conflict resolution tools.
+
+- Override changes and recreate configuration if you don't need to preserve your local changes and want to discard them entirely:
+ ```
+ git reset --hard # Discard all local changes and override with the pulled changes
+  docker compose up -d --build # Recreate your DV stack
+ ```
+ After overriding the changes, you will need to recreate your DV stack using the updated files.
+ By following one of these approaches, you should be able to handle Git conflicts when pulling the latest changes to your repository, either preserving your changes or overriding them as per your requirements.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.18.0/intro.md b/docs/versioned_docs/version-v0.18.0/intro.md
new file mode 100644
index 0000000000..10a81b9143
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 20 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.18.0/sc/README.md b/docs/versioned_docs/version-v0.18.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.18.0/sc/introducing-obol-splits.md b/docs/versioned_docs/version-v0.18.0/sc/introducing-obol-splits.md
new file mode 100644
index 0000000000..fb642befa5
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sc/introducing-obol-splits.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 1
+description: Smart contracts for managing Distributed Validators
+---
+
+# Obol Splits
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators. These contracts include:
+
+- Withdrawal Recipients: Contracts used for a validator's withdrawal address.
+- Split contracts: Contracts to split ether across multiple entities, developed by [Splits.org](https://splits.org).
+- Split controllers: Contracts that can mutate a splitter's configuration.
+
+Two key goals of validator reward management are:
+
+1. To be able to differentiate reward ether from principal ether, such that node operators can be paid a percentage of the _reward_ they accrue for the principal provider, rather than a percentage of _principal+reward_.
+2. To be able to withdraw the rewards in an ongoing manner without exiting the validator.
+
+Without access to the consensus layer state in the EVM to check a validator's status or balance, and because the incoming ether arrives via an irregular state transition, neither of these requirements is easily satisfiable.
+
+The following sections outline different contracts that can be composed to form a solution for one or both goals.
+
+## Withdrawal Recipients
+
+Validators have two streams of revenue, the consensus layer rewards and the execution layer rewards. Withdrawal Recipients focus on the former, receiving the balance skimming from a validator with >32 ether in an ongoing manner, and receiving the principal of the validator upon exit.
+
+### Optimistic Withdrawal Recipient
+
+This is the primary withdrawal recipient Obol uses, as it allows for the separation of reward from principal, as well as permitting the ongoing withdrawal of accruing rewards.
+
+An Optimistic Withdrawal Recipient [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipient.sol) takes three inputs when deployed:
+
+- A _principal_ address: The address that controls where the principal ether will be transferred post-exit.
+- A _reward_ address: The address where the accruing reward ether is transferred to.
+- The amount of ether that makes up the principal.
+
+This contract **assumes that any ether that has appeared in its address since it was last able to do balance accounting is skimmed reward from an ongoing validator** (or number of validators), unless the change is greater than 16 ether. This means balance skimming is immediately claimable as reward, while an inflow of e.g. 31 ether is tracked as a return of principal (despite the validator having been slashed in this example).
+
+:::warning
+
+Worst-case mass slashings can theoretically exceed 16 ether. If this were to occur, the returned principal would be misclassified as reward and distributed to the wrong address. This risk is the drawback that makes this contract variant 'optimistic'. If you intend to use this contract type, **it is important you understand and accept this risk**, however minute.
+
+The alternative is to use a splits.org [waterfall contract](https://docs.splits.org/core/waterfall), which won't allow the claiming of rewards until all principal ether has been returned, meaning validators need to be exited before operators can claim their CL rewards.
+
+:::
+
+This contract fits both design goals and can be used with thousands of validators. If you deploy an Optimistic Withdrawal Recipient with a principal higher than you actually end up using, nothing goes wrong. If you activate more validators than you specified in your contract deployment, you will record too much ether as reward and will overpay your reward address with ether that was principal ether, not earned ether. Current iterations of this contract are not designed for editing the amount of principal set.
+
+#### OWR Factory Deployment
+
+The OptimisticWithdrawalRecipient contract is deployed via a [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipientFactory.sol). The factory is deployed at the following addresses on the following chains.
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | [0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522](https://etherscan.io/address/0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522) |
+| Goerli | [0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26](https://goerli.etherscan.io/address/0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26) |
+| Holesky | |
+| Sepolia | [0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a](https://sepolia.etherscan.io/address/0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a) |
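+
+If you want to sanity-check one of these factory addresses yourself, one option (assuming you have Foundry's `cast` installed and an RPC endpoint available) is to confirm that the address actually contains contract code:
+
+```sh
+# Print the deployed bytecode at the mainnet factory address (a non-empty result means a contract exists there)
+cast code 0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522 --rpc-url "$ETH_RPC_URL"
+```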
+
+### Exitable Withdrawal Recipient
+
+A much awaited feature for proof of stake Ethereum is the ability to trigger the exit of a validator with only the withdrawal address. This is tracked in [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002). Support for this feature will be inheritable in all other withdrawal recipient contracts. This will mitigate the risk to a principal provider of funds being stuck, or a validator being irrecoverably offline.
+
+## Split Contracts
+
+A split, or splitter, is a set of contracts that can divide ether or an ERC20 across a number of addresses. Splits are often used in conjunction with withdrawal recipients. Execution Layer rewards for a DV are directed to a split address through the use of a `fee recipient` address. Splits can be either immutable, or mutable by way of an admin address capable of updating them.
+
+Further information about splits can be found on the splits.org team's [docs site](https://docs.splits.org/). The addresses of their deployments can be found [here](https://docs.splits.org/core/split#addresses).
+
+## Split Controllers
+
+Splits can be completely edited through the use of the `controller` address, however, total editability of a split is not always wanted. A permissive controller and a restrictive controller are given as examples below.
+
+### (Gnosis) SAFE wallet
+
+A [SAFE](https://safe.global/) is a common method to administrate a mutable split. The most well-known deployment of this pattern is the [protocol guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html). The SAFE can arbitrarily update the split to any set of addresses with any valid set of percentages.
+
+### Immutable Split Controller
+
+This is a [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitController.sol) that updates one split configuration with another, exactly once. Only a permissioned address can trigger the change. This contract is suitable for changing a split at an unknown point in future to a configuration pre-defined at deployment.
+
+The Immutable Split Controller [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitControllerFactory.sol) can be found at the following addresses:
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | |
+| Goerli | [0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f](https://goerli.etherscan.io/address/0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f) |
+| Holesky | |
+| Sepolia | |
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.18.0/sec/README.md b/docs/versioned_docs/version-v0.18.0/sec/README.md
new file mode 100644
index 0000000000..aeb3b02cce
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sec/README.md
@@ -0,0 +1,2 @@
+# sec
+
diff --git a/docs/versioned_docs/version-v0.18.0/sec/bug-bounty.md b/docs/versioned_docs/version-v0.18.0/sec/bug-bounty.md
new file mode 100644
index 0000000000..48c52d89b4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sec/bug-bounty.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 2
+description: Bug Bounty Policy
+---
+
+# Obol Bug Bounty
+
+## Overview
+
+Obol Labs is committed to ensuring the security of our distributed validator software and services. As part of our commitment to security, we have established a bug bounty program to encourage security researchers to report vulnerabilities in our software and services to us so that we can quickly address them.
+
+## Eligibility
+
+To participate in the Bug Bounty Program you must:
+
+- Not be a resident of any country that does not allow participation in these types of programs
+- Be at least 14 years old and have legal capacity to agree to these terms and participate in the Bug Bounty Program
+- Have permission from your employer to participate
+- Not be (for the previous 12 months) an Obol Labs employee, immediate family member of an Obol employee, Obol contractor, or Obol service provider.
+
+## Scope
+
+The bug bounty program applies to software and services that are built by Obol. Only submissions under the following domains are eligible for rewards:
+
+- Charon DVT Middleware
+- DV Launchpad
+- Obol’s Public API
+- Obol’s Smart Contracts and the contracts they depend on.
+- Obol’s Public Relay
+
+Additionally, all vulnerabilities that require or are related to the following are out of scope:
+
+- Social engineering
+- Rate Limiting (Non-critical issues)
+- Physical security
+- Non-security-impacting UX issues
+- Vulnerabilities or weaknesses in third party applications that integrate with Obol
+- The Obol website or the Obol infrastructure in general is NOT part of this bug bounty program.
+
+## Rules
+
+- Bug has not been publicly disclosed
+- Vulnerabilities that have been previously submitted by another contributor or already known by the Obol development team are not eligible for rewards
+- The size of the bounty payout depends on the assessment of the severity of the exploit. Please refer to the rewards section below for additional details
+- Bugs must be reproducible in order for us to verify the vulnerability. A working proof of concept is required as part of the submission
+- Rewards and the validity of bugs are determined by the Obol security team and any payouts are made at their sole discretion
+- Terms and conditions of the Bug Bounty program can be changed at any time at the discretion of Obol
+- Details of any valid bugs may be shared with complementary protocols utilised in the Obol ecosystem in order to promote ecosystem cohesion and safety.
+
+## Rewards
+
+The rewards for participating in our bug bounty program will be based on the severity and impact of the vulnerability discovered. We will evaluate each submission on a case-by-case basis, and the rewards will be at Obol’s sole discretion.
+
+### Low: up to $500
+
+A Low-level vulnerability is one that has a limited impact and can be easily fixed. Unlikely to have a meaningful impact on availability, integrity, and/or loss of funds.
+
+- Low impact, medium likelihood
+- Medium impact, low likelihood
+
+Examples:
+
+- Attacker can sometimes put a charon node in a state that causes it to drop one out of every one hundred attestations made by a validator
+
+### Medium: up to $1,000
+
+A Medium-level vulnerability is one that has a moderate impact and requires a more significant effort to fix. Possible to have an impact on validator availability, integrity, and/or loss of funds.
+
+- High impact, low likelihood
+- Medium impact, medium likelihood
+- Low impact, high likelihood
+
+Examples:
+
+- Attacker can successfully conduct eclipse attacks on the cluster nodes with peer-ids with 4 leading zero bytes.
+
+### High: up to $4,000
+
+A High-level vulnerability is one that has a significant impact on the security of the system and requires a significant effort to fix. Likely to have impact on availability, integrity, and/or loss of funds.
+
+- High impact, medium likelihood
+- Medium impact, high likelihood
+
+Examples:
+
+- Attacker can successfully partition the cluster and keep the cluster offline.
+
+### Critical: up to $10,000
+
+A Critical-level vulnerability is one that has a severe impact on the security of the in-production system and requires immediate attention to fix. Highly likely to have a material impact on availability, integrity, and/or loss of funds.
+
+- High impact, high likelihood
+
+Examples:
+
+- Attacker can successfully conduct remote code execution in the charon client to exfiltrate BLS private key material.
+
+We may offer rewards in the form of cash, merchandise, or recognition. We will only award one reward per vulnerability discovered, and we reserve the right to deny a reward if we determine that the researcher has violated the terms and conditions of this policy.
+
+## Submission process
+
+Please email security@obol.tech
+
+Your report should include the following information:
+
+- Description of the vulnerability and its potential impact
+- Steps to reproduce the vulnerability
+- Proof of concept code, screenshots, or other supporting documentation
+- Your name, email address, and any contact information you would like to provide.
+
+Reports that do not include sufficient detail will not be eligible for rewards.
+
+## Disclosure Policy
+
+Obol Labs will disclose the details of the vulnerability and the researcher’s identity (with their consent) only after we have remediated the vulnerability and issued a fix. Researchers must keep the details of the vulnerability confidential until Obol Labs has acknowledged and remediated the issue.
+
+## Legal Compliance
+
+All participants in the bug bounty program must comply with all applicable laws, regulations, and policy terms and conditions. Obol will not be held liable for any unlawful or unauthorised activities performed by participants in the bug bounty program.
+
+We will not take any legal action against security researchers who discover and report security vulnerabilities in accordance with this bug bounty policy. We do, however, reserve the right to take legal action against anyone who violates the terms and conditions of this policy.
+
+## Non-Disclosure Agreement
+
+All participants in the bug bounty program will be required to sign a non-disclosure agreement (NDA) before they are given access to closed source software and services for testing purposes.
diff --git a/docs/versioned_docs/version-v0.18.0/sec/contact.md b/docs/versioned_docs/version-v0.18.0/sec/contact.md
new file mode 100644
index 0000000000..e66e1663e2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sec/contact.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 3
+description: Security details for the Obol Network
+---
+
+# Contacts
+
+Please email security@obol.tech to report a security incident, vulnerability, bug or inquire about Obol's security.
+
+Also, visit the [obol security repo](https://github.com/ObolNetwork/obol-security) for more details.
diff --git a/docs/versioned_docs/version-v0.18.0/sec/ev-assessment.md b/docs/versioned_docs/version-v0.18.0/sec/ev-assessment.md
new file mode 100644
index 0000000000..a8ce756359
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sec/ev-assessment.md
@@ -0,0 +1,295 @@
+---
+sidebar_position: 4
+description: Software Development Security Assessment
+---
+
+# ev-assessment
+
+## Software Development at Obol
+
+When hardening a project's technical security, team members' operational security and the security of the team's software development practices are some of the most critical areas to secure. Many hacks and compromises in the space to date have been a result of these attack vectors rather than exploits of the software itself.
+
+With this in mind, in January 2023 the Obol team retained the expertise of Ethereal Ventures' security researcher Alex Wade to interview key stakeholders and produce a report on the team's Software Development Lifecycle.
+
+The page below is the result of that report. Some sensitive information has been redacted, and responses to the recommendations have been added, detailing the actions the Obol team has taken to mitigate what was highlighted.
+
+## Obol Report
+
+**Prepared by: Alex Wade (Ethereal Ventures)** **Date: Jan 2023**
+
+Over the past month, I worked with Obol to review their software development practices in preparation for their upcoming security audits. My goals were to review and analyze:
+
+* Software development processes
+* Vulnerability disclosure and escalation procedures
+* Key personnel risk
+
+The information in this report was collected through a series of interviews with Obol’s project leads.
+
+### Contents:
+
+* Background Info
+* Analysis - Cluster Setup and DKG
+ * Key Risks
+ * Potential Attack Scenarios
+* Recommendations
+ * R1: Users should deploy cluster contracts through a known on-chain entry point
+ * R2: Users should deposit to the beacon chain through a pool contract
+ * R3: Raise the barrier to entry to push an update to the Launchpad
+* Additional Notes
+ * Vulnerability Disclosure
+ * Key Personnel Risk
+
+### Background Info
+
+**Each team lead was asked to describe Obol in terms of its goals, objectives, and key features.**
+
+#### What is Obol?
+
+Obol builds DVT (Distributed Validator Technology) for Ethereum.
+
+#### What is Obol’s goal?
+
+Obol’s goal is to solve a classic distributed systems problem: uptime.
+
+Rather than requiring Ethereum validators to stake on their own, Obol allows groups of operators to stake together. Using Obol, a single validator can be run cooperatively by multiple people across multiple machines.
+
+In theory, this architecture provides validators with some redundancy against common issues: server and power outages, client failures, and more.
+
+#### What are Obol’s objectives?
+
+Obol’s business objective is to provide base-layer infrastructure to support a distributed validator ecosystem. As Obol provides base layer technology, other companies and projects will build on top of Obol.
+
+Obol’s business model is to eventually capture a portion of the revenue generated by validators that use Obol infrastructure.
+
+#### What is Obol’s product?
+
+Obol’s product consists of three main components, each run by its own team: a webapp, a client, and smart contracts.
+
+* [DV Launchpad](../dvl/intro.md): A webapp to create and manage distributed validators.
+* [Charon](../charon/intro.md): A middleware client that enables operators to run distributed validators.
+* [Solidity](../sc/introducing-obol-splits.md): Withdrawal and fee recipient contracts for use with distributed validators.
+
+### Analysis - Cluster Setup and DKG
+
+The Launchpad guides users through the process of creating a cluster, which defines important parameters like the validator’s fee recipient and withdrawal addresses, as well as the identities of the operators in the cluster. In order to ensure their cluster configuration is correct, users need to rely on a few different factors.
+
+**First, users need to trust the Charon client** to perform the DKG correctly, and validate things like:
+
+* Config file is well-formed and is using the expected version
+* Signatures and ENRs from other operators are valid
+* Cluster config hash is correct
+* DKG succeeds in producing valid signatures
+* Deposit data is well-formed and is correctly generated from the cluster config and DKG.
+
+However, Charon’s validation is limited to the digital: signature checks, cluster file syntax, etc. It does NOT help would-be operators determine whether the other operators listed in their cluster definition are the real people with whom they intend to start a DVT cluster. So -
+
+**Second, users need to come to social consensus with fellow operators.** While the cluster is being set up, it’s important that each operator is an active participant. Each member of the group must validate and confirm that:
+
+* the cluster file correctly reflects their address and node identity, and reflects the information they received from fellow operators
+* the cluster parameters are expected – namely, the number of validators and signing threshold
+
+**Finally, users need to perform independent validation.** Each user should perform their own validation of the cluster definition:
+
+* Is my information correct? (address and ENR)
+* Does the information I received from the group match the cluster definition?
+* Is the ETH2 deposit data correct, and does it match the information in the cluster definition?
+* Are the withdrawal and fee recipient addresses correct?
+
+These final steps are potentially the most difficult, and may require significant technical knowledge.
+
+### Key Risks
+
+#### 1. Validation of Contract Deployment and Deposit Data Relies Heavily on Launchpad
+
+From my interviews, it seems that the user deploys both the withdrawal and fee recipient contracts through the Launchpad.
+
+What I’m picturing is that during the first parts of the cluster setup process, the user is prompted to sign one or more transactions deploying the withdrawal and fee recipient contracts to mainnet. The Launchpad apparently uses an npm package to deploy these contracts: `0xsplits/splits-sdk`, which I assume provides either JSON artifacts or a factory address on chain. The Launchpad then places the deployed contracts into the cluster config file, and the process moves on.
+
+If an attacker has published a malicious update to the Launchpad (or compromised an underlying dependency), the contracts deployed by the Launchpad may be malicious. The questions I’d like to pose are:
+
+* How does the group creator know the Launchpad deployed the correct contracts?
+* How does the rest of the group know the creator deployed the contracts through the Launchpad?
+
+My understanding is that this ultimately comes down to the independent verification that each of the group’s members performs during and after the cluster’s setup phase.
+
+At its worst, this verification might consist solely of the cluster creator confirming to the others that, yes, those addresses match the contracts I deployed through the Launchpad.
+
+A more sophisticated user might verify that not only do the addresses match, but the deployed source code looks roughly correct. However, this step is far out of the realm of many would-be validators. To be really certain that the source code is correct would require auditor-level knowledge.
+
+The risk is that:
+
+* the deployed contracts are NOT the correctly-configured 0xsplits waterfall/fee splitter contracts
+* most users are ill-equipped to make this determination themselves
+* we don’t want to trust the Launchpad as the single source of truth
+
+In the worst case, the cluster may end up depositing with malicious withdrawal or fee recipient credentials. If unnoticed, this may net an attacker the entire withdrawal amount, once the cluster exits.
+
+Note that the same (or similar) risks apply to validation of deposit data, which has the potential to be similarly difficult. I’m a little fuzzy on which part of the Obol stack actually generates the deposit data / deposit transaction, so I can’t speak to this as much. However, I think the mitigation for both of these is roughly the same - read on!
+
+**Mitigation:**
+
+It’s certainly a good idea to make it harder to deploy malicious updates to the Launchpad, but this may not be entirely possible. A higher-yield strategy may be to educate and empower users to perform independent validation of the DVT setup process - without relying on information fed to them by Charon and the Launchpad.
+
+I’ve outlined some ideas for this in #R1 and #R2.
+
+#### 2. Social Consensus, aka “Who sends the 32 ETH?”
+
+Depositing to the beacon chain requires a total of 32 ETH. Obol’s product allows multiple operators to act as a single validator together, which means would-be operators need to agree on how to fund the 32 ETH needed to initiate the deposit.
+
+It is my understanding that currently, this process comes down to trust and loose social consensus. Essentially, the group needs to decide who chips in what amount together, and then trust someone to take the 32 ETH and complete the deposit process correctly (without running away with the money).
+
+Granted, the initial launch of Obol will be open only to a small group of people as the kinks in the system get worked out - but in preparation for an eventual public release, the deposit process needs to be much simpler and far less reliant on trust.
+
+Mitigation: See #R2.
+
+**Potential Attack Scenarios**
+
+During the interview process, I learned that each of Obol’s core components has its own GitHub repo, and that each repo has roughly the same structure in terms of organization and security policies. For each repository:
+
+* There are two overall github organization administrators, and a number of people have administrative control over individual repositories.
+* In order to merge PRs, the submitter needs:
+ * CI/CD checks to pass
+ * Review from one person (anyone at Obol)
+
+Of course, admin access also means the ability to change these settings - so repo admins could theoretically merge PRs without needing checks to pass and without review/approval, and organization admins can control the full GitHub organization.
+
+The following scenarios describe the impact an attack may have.
+
+**1. Publishing a malicious version of the Launchpad, or compromising an underlying dependency**
+
+* Reward: High
+* Difficulty: Medium-Low
+
+As described in Key Risks, publishing a malicious version of the Launchpad has the potential to net the largest payout for an attacker. By tampering with the cluster’s deposit data or withdrawal/fee recipient contracts, an attacker stands to gain 32 ETH or more per compromised cluster.
+
+During the interviews, I learned that merging PRs to main in the Launchpad repo triggers an action that publishes to the site. Given that merges can be performed by an authorized Obol developer, this makes the developers prime targets for social engineering attacks.
+
+Additionally, the use of the `0xsplits/splits-sdk` NPM package to aid in contract deployment may represent a supply chain attack vector. It may be that this applies to other Launchpad dependencies as well.
+
+In any case, with a fairly large surface area and high potential reward, this scenario represents a credible risk to users during the cluster setup and DKG process.
+
+See #R1, #R2, and #R3 for some ideas to address this scenario.
+
+**2. Publishing a malicious version of Charon to new operators**
+
+* Reward: Medium
+* Difficulty: High
+
+During the cluster setup process, Charon is responsible both for validating the cluster configuration produced by the Launchpad, as well as performing a DKG ceremony between a group’s operators.
+
+If new operators use a malicious version of Charon to perform this process, it may be possible to tamper with both of these responsibilities, or even get access to part or all of the underlying validator private key created during DKG.
+
+However, the difficulty of this type of attack seems quite high. An attacker would first need to carry out the same type of social engineering attack described in scenario 1 to publish and tag a new version of Charon. Crucially, users would also need to install the malicious version - unlike the Launchpad, an update here is not pushed directly to users.
+
+As long as Obol is clear and consistent with communication around releases and versioning, it seems unlikely that a user would both install a brand-new, unannounced release, and finish the cluster setup process before being warned about the attack.
+
+**3. Publishing a malicious version of Charon to existing validators**
+
+* Reward: Low
+* Difficulty: High
+
+Once a distributed validator is up and running, much of the danger has passed. As a middleware client, Charon sits between a validator’s consensus and validator clients. As such, it shouldn’t have direct access to a validator’s withdrawal keys nor signing keys.
+
+If existing validators update to a malicious version of Charon, it's likely the worst thing an attacker could theoretically do is slash the validator. However, assuming charon has no access to any private keys, this would be predicated on one or more validator clients connected to charon also failing to prevent the signing of a slashable message. In practice, a compromised charon client is more likely to pose liveness risks than safety risks.
+
+This is not likely to be particularly motivating to potential attackers - and paired with the high difficulty described above, this scenario seems unlikely to cause significant issues.
+
+### Recommendations
+
+#### R1: Users should deploy cluster contracts through a known on-chain entry point
+
+During setup, users should only sign one transaction via the Launchpad - to a contract located at an Obol-held ENS (e.g. `launchpad.obol.eth`). This contract should deploy everything needed for the cluster to operate, like the withdrawal and fee recipient contracts. It should also initialize them with the provided reward split configuration (and any other config needed).
+
+Rather than using an NPM library to supply a factory address or JSON artifacts, this has the benefit of being both:
+
+* **Harder to compromise:** as long as the user knows launchpad.obol.eth, it’s pretty difficult to trick them into deploying the wrong contracts.
+* **Easier to validate** for non-technical users: the Obol contract can be queried for deployment information via etherscan. For example:
+
+
+
+Note that in order for this to be successful, Obol needs to provide detailed steps for users to perform manual validation of their cluster setups. Users should be able to treat this as a “checklist:”
+
+* Did I send a transaction to `launchpad.obol.eth`?
+* Can I use the ENS name to locate and query the deployment manager contract on etherscan?
+* If I input my address, does etherscan report the configuration I was expecting?
+ * withdrawal address matches
+ * fee recipient address matches
+ * reward split configuration matches
+
+As long as these steps are plastered all over the place (i.e. not just on the Launchpad) and Obol puts in effort to educate users about the process, this approach should allow users to validate cluster configurations themselves - regardless of Launchpad or NPM package compromise.
+
+**Obol’s response:**
+
+Roadmapped: add the ability for the OWR factory to claim and transfer its reverse resolution ownership.
+
+#### R2: Users should deposit to the beacon chain through a pool contract
+
+Once cluster setup and DKG is complete, a group of operators should deposit to the beacon chain by way of a pool contract. The pool contract should:
+
+* Accept Eth from any of the group’s operators
+* Stop accepting Eth when the contract’s balance hits (32 ETH \* number of validators)
+* Make it easy to pull the trigger and deposit to the beacon chain once the critical balance has been reached
+* Offer all of the group’s operators a “bail” option at any point before the deposit is triggered
+
+Ideally, this contract is deployed during the setup process described in #R1, as another step toward allowing users to perform independent validation of the process.
+
+Rather than relying on social consensus, this should:
+
+* Allow operators to fund the validator without needing to trust any single party
+* Make it harder to mess up the deposit or send funds to some malicious actor, as the pool contract should know what the beacon deposit contract address is
+
+**Obol’s response:**
+
+Roadmapped: give the operators a streamlined, secure way to deposit Ether (ETH) to the beacon chain collectively, satisfying specific conditions:
+
+* Pooling from multiple operators.
+* Ceasing to accept ETH once a critical balance is reached, defined by 32 ETH multiplied by the number of validators.
+* Facilitating an immediate deposit to the beacon chain once the target balance is reached.
+* Providing a 'bail-out' option for operators to withdraw their contribution before initiating the group's deposit to the beacon chain.
+
+#### R3: Raise the barrier to entry to push an update to the Launchpad
+
+Currently, any repo admin can publish an update to the Launchpad unchecked.
+
+Given the risks and scenarios outlined above, consider amending this process so that the sole compromise of either admin is not sufficient to publish to the Launchpad site. It may be worthwhile to require both admins to approve publishing to the site.
+
+Along with simply adding additional prerequisites to publish an update to the Launchpad, ensure that both admins have enabled some level of multi-factor authentication on their GitHub accounts.
+
+**Obol’s response:**
+
+We removed individuals’ ability to merge changes without review, enforced MFA and signed commits, and employed the Bulldozer bot to ensure a PR gets merged automatically when all checks pass.
+
+### Additional Notes
+
+#### Vulnerability Disclosure
+
+During the interviews, I got some conflicting information when asking about Obol’s vulnerability disclosure process.
+
+Some interviewees directed me towards Obol’s security repo, which details security contacts: [ObolNetwork/obol-security](https://github.com/ObolNetwork/obol-security), while some answered that disclosure should happen primarily through Immunefi. While these may both be part of the correct answer, it seems that Obol’s disclosure process may not be as well-defined as it could be. Here are some notes:
+
+* I wasn’t able to find information about Obol on Immunefi. I also didn’t find any reference to a security contact or disclosure policy in Obol’s docs.
+* When looking into the obol security repo, I noticed broken links in a few of the sections in README.md and SECURITY.md:
+ * Security policy
+ * More Information
+* Some of the text and links in the Bug Bounty Program don’t seem to apply to Obol (see text referring to Vaults and Strategies).
+* The Receiving Disclosures section does not include a public key with which submitters can encrypt vulnerability information.
+
+It’s my understanding that these items are probably lower priority due to Obol’s initial closed launch - but these should be squared away soon! \[Obol response to latest vuln disclosure process goes here]
+
+**Obol’s response:**
+
+We addressed all of the concerns in the obol-security repository:
+
+1. The security policy link has been fixed.
+2. The Bug Bounty program received an overhaul and clearly states rewards, eligibility, and scope.
+3. We list two GPG public keys for which we accept encrypted vulnerability reports.
+
+We are actively working towards integrating Immunefi in our security pipeline.
+
+#### Key Personnel Risk
+
+A final section on the specifics of key personnel risk faced by Obol has been redacted from the original report. Particular areas of control highlighted were GitHub org ownership and domain name control.
+
+**Obol’s response:**
+
+These risks have been mitigated by adding an extra admin to the GitHub org and by setting up a second DNS stack in case the primary one fails, along with general OpSec improvements.
diff --git a/docs/versioned_docs/version-v0.18.0/sec/overview.md b/docs/versioned_docs/version-v0.18.0/sec/overview.md
new file mode 100644
index 0000000000..97e92727ed
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sec/overview.md
@@ -0,0 +1,33 @@
+---
+sidebar_position: 1
+description: Security Overview
+---
+
+# Overview
+
+This page serves as an overview of the Obol Network from a security point of view.
+
+This page is updated quarterly. The last update was on 2023-10-01.
+
+## Table of Contents
+
+1. [List of Security Audits and Assessments](overview.md#list-of-security-audits-and-assessments)
+2. [Security Focused Documents](overview.md#security-focused-documents)
+3. [Bug Bounty Details](bug-bounty.md)
+
+## List of Security Audits and Assessments
+
+The completed audit reports are linked [here](https://github.com/ObolNetwork/obol-security/tree/main/audits).
+
+* A review of Obol Labs [development processes](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/sec/ev-assessment/README.md) by Ethereal Ventures
+* A [security assessment](https://github.com/ObolNetwork/obol-security/blob/f9d7b0ad0bb8897f74ccb34cd4bd83012ad1d2b5/audits/Sigma_Prime_Obol_Network_Charon_Security_Assessment_Report_v2_1.pdf) of Charon by [Sigma Prime](https://sigmaprime.io/).
+* A [solidity audit](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/sec/smart_contract_audit/README.md) of the Obol Splits contracts by [Zach Obront](https://zachobront.com/).
+* A second audit of Charon is planned for Q4 2023.
+
+## Security Focused Documents
+
+* A [threat model](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.18.0/sec/threat_model/README.md) for a DV middleware client like charon.
+
+## Bug Bounty
+
+Information related to disclosing bugs and vulnerabilities to Obol can be found on [the next page](bug-bounty.md).
diff --git a/docs/versioned_docs/version-v0.18.0/sec/smart_contract_audit.md b/docs/versioned_docs/version-v0.18.0/sec/smart_contract_audit.md
new file mode 100644
index 0000000000..310f843be2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sec/smart_contract_audit.md
@@ -0,0 +1,477 @@
+---
+sidebar_position: 5
+description: Smart Contract Audit
+---
+
+# Smart Contract Audit
+
+| | |
+| ------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
+|  | Obol Audit Report
Obol Manager Contracts
Prepared by: Zach Obront, Independent Security Researcher
Date: Sept 18 to 22, 2023
|
+
+## About **Obol**
+
+The Obol Network is an ecosystem for trust minimized staking that enables people to create, test, run & co-ordinate distributed validators.
+
+The Obol Manager contracts are responsible for distributing validator rewards and withdrawals among the validator and node operators involved in a distributed validator.
+
+## About **zachobront**
+
+Zach Obront is an independent smart contract security researcher. He serves as a Lead Senior Watson at Sherlock, a Security Researcher at Spearbit, and has identified multiple critical severity bugs in the wild, including in a Top 5 Protocol on Immunefi. You can say hi on Twitter at [@zachobront](http://twitter.com/zachobront).
+
+## Summary & Scope
+
+The [ObolNetwork/obol-manager-contracts](https://github.com/ObolNetwork/obol-manager-contracts/) repository was audited at commit [50ce277919723c80b96f6353fa8d1f8facda6e0e](https://github.com/ObolNetwork/obol-manager-contracts/tree/50ce277919723c80b96f6353fa8d1f8facda6e0e).
+
+The following contracts were in scope:
+
+* src/controllers/ImmutableSplitController.sol
+* src/controllers/ImmutableSplitControllerFactory.sol
+* src/lido/LidoSplit.sol
+* src/lido/LidoSplitFactory.sol
+* src/owr/OptimisticWithdrawalReceiver.sol
+* src/owr/OptimisticWithdrawalReceiverFactory.sol
+
+After completion of the fixes, the [2f4f059bfd145f5f05d794948c918d65d222c3a9](https://github.com/ObolNetwork/obol-manager-contracts/tree/2f4f059bfd145f5f05d794948c918d65d222c3a9) commit was reviewed. After this review, the updated Lido fee share system in [PR #96](https://github.com/ObolNetwork/obol-manager-contracts/pull/96/files) was reviewed.
+
+## Summary of Findings
+
+| Identifier | Title | Severity | Fixed |
+| :-----------------------------------------------------------------------------------------------------------------------: | -------------------------------------------------------------------------------------- | :-----------: | :---: |
+| [M-01](smart_contract_audit.md#m-01-future-fees-may-be-skirted-by-setting-a-non-eth-reward-token) | Future fees may be skirted by setting a non-ETH reward token | Medium | ✓ |
+| [M-02](smart_contract_audit.md#m-02-splits-with-256-or-more-node-operators-will-not-be-able-to-switch-on-fees) | Splits with 256 or more node operators will not be able to switch on fees | Medium | ✓ |
+| [M-03](smart_contract_audit.md#m-03-in-a-mass-slashing-event-node-operators-are-incentivized-to-get-slashed) | In a mass slashing event, node operators are incentivized to get slashed | Medium | |
+| [L-01](smart_contract_audit.md#l-01-obol-fees-will-be-applied-retroactively-to-all-non-distributed-funds-in-the-splitter) | Obol fees will be applied retroactively to all non-distributed funds in the Splitter | Low | ✓ |
+| [L-02](smart_contract_audit.md#l-02-if-owr-is-used-with-rebase-tokens-and-theres-a-negative-rebase-principal-can-be-lost) | If OWR is used with rebase tokens and there's a negative rebase, principal can be lost | Low | ✓ |
+| [L-03](smart_contract_audit.md#l-03-lidosplit-can-receive-eth-which-will-be-locked-in-contract) | LidoSplit can receive ETH, which will be locked in contract | Low | ✓ |
+| [L-04](smart_contract_audit.md#l-04-upgrade-to-latest-version-of-solady-to-fix-libclone-bug) | Upgrade to latest version of Solady to fix LibClone bug | Low | ✓ |
+| [G-01](smart_contract_audit.md#g-01-steth-and-wsteth-addresses-can-be-saved-on-implementation-to-save-gas) | stETH and wstETH addresses can be saved on implementation to save gas | Gas | ✓ |
+| [G-02](smart_contract_audit.md#g-02-owr-can-be-simplified-and-save-gas-by-not-tracking-distributedfunds) | OWR can be simplified and save gas by not tracking distributedFunds | Gas | ✓ |
+| [I-01](smart_contract_audit.md#i-01-strong-trust-assumptions-between-validators-and-node-operators) | Strong trust assumptions between validators and node operators | Informational | |
+| [I-02](smart_contract_audit.md#i-02-provide-node-operator-checklist-to-validate-setup) | Provide node operator checklist to validate setup | Informational | |
+
+## Detailed Findings
+
+### \[M-01] Future fees may be skirted by setting a non-ETH reward token
+
+Fees are planned to be implemented on the `rewardRecipient` splitter by updating to a new fee structure using the `ImmutableSplitController`.
+
+It is assumed that all rewards will flow through the splitter, because (a) all distributed rewards less than 16 ETH are sent to the `rewardRecipient`, and (b) even if a team waited for rewards to be greater than 16 ETH, rewards sent to the `principalRecipient` are capped at the `amountOfPrincipalStake`.
+
+This creates a fairly strong guarantee that reward funds will flow to the `rewardRecipient`. Even if a user were to set their `amountOfPrincipalStake` high enough that the `principalRecipient` could receive unlimited funds, the Obol team could call `distributeFunds()` when the balance got near 16 ETH to ensure fees were paid.
+
+However, if the user selects a non-ETH token, all ETH will be withdrawable only through the `recoverFunds()` function. If they set up a split with their node operators as their `recoveryAddress`, all funds will be withdrawable via `recoverFunds()` without ever touching the `rewardRecipient` or paying a fee.
+
+#### Recommendation
+
+I would recommend removing the ability to use a non-ETH token from the `OptimisticWithdrawalRecipient`. Alternatively, if it feels like it may be a use case that is needed, it may make sense to always include ETH as a valid token, in addition to any `OWRToken` set.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[M-02] Splits with 256 or more node operators will not be able to switch on fees
+
+0xSplits is used to distribute rewards across node operators. All Splits are deployed with an ImmutableSplitController, which is given permissions to update the split one time to add a fee for Obol at a future date.
+
+The Factory deploys these controllers as Clones with Immutable Args, hard coding the `owner`, `accounts`, `percentAllocations`, and `distributorFee` for the future update. This data is packed as follows:
+
+```solidity
+ function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+ ) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
+ uint256[] memory recipients = new uint[](recipientsSize);
+
+ uint256 i = 0;
+ for (; i < recipientsSize;) {
+ recipients[i] = (uint256(percentAllocations[i]) << ADDRESS_BITS) | uint256(uint160(accounts[i]));
+
+ unchecked {
+ i++;
+ }
+ }
+
+ data = abi.encodePacked(splitMain, distributorFee, owner, uint8(recipientsSize), recipients);
+ }
+```
+
+In the process, `recipientsSize` is unsafely downcast into a `uint8`, which has a maximum value of `255`. As a result, any value of `256` or greater will overflow, and the lower value of `recipients.length % 256` will be passed as `recipientsSize`.
+
+When the Controller is deployed, the full list of `percentAllocations` is passed to the `validSplit` check, which will pass as expected. However, later, when `updateSplit()` is called, the `getNewSplitConfiguration()` function will only return the first `recipientsSize` accounts, ignoring the rest.
+
+```solidity
+ function getNewSplitConfiguration()
+ public
+ pure
+ returns (address[] memory accounts, uint32[] memory percentAllocations)
+ {
+ // fetch the size first
+ // then parse the data gradually
+ uint256 size = _recipientsSize();
+ accounts = new address[](size);
+ percentAllocations = new uint32[](size);
+
+ uint256 i = 0;
+ for (; i < size;) {
+ uint256 recipient = _getRecipient(i);
+ accounts[i] = address(uint160(recipient));
+ percentAllocations[i] = uint32(recipient >> ADDRESS_BITS);
+ unchecked {
+ i++;
+ }
+ }
+ }
+```
+
+When `updateSplit()` is eventually called on `splitsMain` to turn on fees, the `validSplit()` check on that contract will revert because the percent allocations will no longer sum to `1e6`, and the update will not be possible.
+
+#### Proof of Concept
+
+The following test can be dropped into a file in `src/test` to demonstrate that passing 400 accounts will result in a `recipientSize` of `400 - 256 = 144`:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+import { Test } from "forge-std/Test.sol";
+import { console } from "forge-std/console.sol";
+import { ImmutableSplitControllerFactory } from "src/controllers/ImmutableSplitControllerFactory.sol";
+import { ImmutableSplitController } from "src/controllers/ImmutableSplitController.sol";
+
+interface ISplitsMain {
+ function createSplit(address[] calldata accounts, uint32[] calldata percentAllocations, uint32 distributorFee, address controller) external returns (address);
+}
+
+contract ZachTest is Test {
+ function testZach_RecipientSizeCappedAt256Accounts() public {
+ vm.createSelectFork("https://mainnet.infura.io/v3/fb419f740b7e401bad5bec77d0d285a5");
+
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](400);
+ uint32[] memory bigPercentAllocations = new uint32[](400);
+
+ for (uint i = 0; i < 400; i++) {
+ bigAccounts[i] = address(uint160(i));
+ bigPercentAllocations[i] = 2500;
+ }
+
+ // confirmation that 0xSplits will allow creating a split with this many accounts
+ // dummy acct passed as controller, but doesn't matter for these purposes
+ address split = ISplitsMain(0x2ed6c4B5dA6378c7897AC67Ba9e43102Feb694EE).createSplit(bigAccounts, bigPercentAllocations, 0, address(8888));
+
+ ImmutableSplitController controller = factory.createController(split, owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+
+ // added a public function to controller to read recipient size directly
+ uint savedRecipientSize = controller.ZachTest__recipientSize();
+ assert(savedRecipientSize < 400);
+ console.log(savedRecipientSize); // 144
+ }
+}
+```
+
+#### Recommendation
+
+When packing the data in `_packSplitControllerData()`, check `recipientsSize` before downcasting to a uint8:
+
+```diff
+function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
++ if (recipientsSize > 255) revert InvalidSplit__TooManyAccounts(recipientsSize);
+ ...
+}
+```
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[M-03] In a mass slashing event, node operators are incentivized to get slashed
+
+When the `OptimisticWithdrawalRecipient` receives funds from the beacon chain, it uses the following rule to determine the allocation:
+
+> If the amount of funds to be distributed is greater than or equal to 16 ether, it is assumed that it is a withdrawal (to be returned to the principal, with a cap on principal withdrawals of the total amount they deposited).
+
+> Otherwise, it is assumed that the funds are rewards.
+
+This value being as low as 16 ether protects against any predictable attack the node operator could perform. For example, due to the effect of hysteresis in updating effective balances, it does not seem to be possible for node operators to predictably bleed a withdrawal down to be below 16 ether (even if they timed a slashing perfectly).
+
+However, in the event of a mass slashing event, slashing punishments can be much more severe than they otherwise would be. To calculate the size of a slash, we:
+
+* take the total percentage of validator stake slashed in the 18 days preceding and following a user's slash
+* multiply this percentage by 3 (capped at 100%)
+* the full slashing penalty for a given validator equals 1/32 of their stake, plus the resulting percentage above applied to the remaining 31/32 of their stake
+
+In order for such penalties to bring the withdrawal balance below 16 ether (assuming a full 32 ether to start), we would need the percentage taken to be greater than `15 / 31 = 48.3%`, which implies that `48.3 / 3 = 16.1%` of validators would need to be slashed.
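+
+As a rough illustration of the arithmetic above (an approximate model of the quoted rule, not the exact consensus-spec computation):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.19;
+
+library SlashingMath {
+    // `slashedFractionBps` is the share of total stake slashed in the
+    // surrounding 36-day window, in basis points (10_000 = 100%).
+    function withdrawalAfterSlashing(uint256 slashedFractionBps) internal pure returns (uint256) {
+        uint256 stake = 32 ether;
+        uint256 initialPenalty = stake / 32; // 1 ether
+
+        uint256 correlatedBps = slashedFractionBps * 3;
+        if (correlatedBps > 10_000) correlatedBps = 10_000;
+
+        uint256 correlationPenalty = ((stake - initialPenalty) * correlatedBps) / 10_000;
+        return stake - initialPenalty - correlationPenalty;
+    }
+}
+
+// SlashingMath.withdrawalAfterSlashing(1_610) is roughly 16.03 ether; around a
+// ~16.1% slashed fraction the withdrawal crosses below the 16 ether threshold.
+```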
+
+Because the measurement is taken from the 18 days before and after the incident, node operators would have the opportunity to see a mass slashing event unfold, and later decide that they would like to be slashed along with it.
+
+In the event that they observed that greater than 16.1% of validators were slashed, Obol node operators would be able to get themselves slashed, be exited with a withdrawal of less than 16 ether, and claim that withdrawal as rewards, effectively stealing from the principal recipient.
+
+#### Recommendations
+
+Find a solution that provides a higher level of guarantee that the funds withdrawn are actually rewards, and not a withdrawal.
+
+#### Review
+
+Acknowledged. We believe this is a black swan event. It would require a major ETH client to be compromised, and would be a betrayal of trust, so likely not EV+ for doxxed operators. Users of this contract with unknown operators should be wary of such a risk.
+
+### \[L-01] Obol fees will be applied retroactively to all non-distributed funds in the Splitter
+
+When Obol decides to turn on fees, a call will be made to `ImmutableSplitController::updateSplit()`, which will take the predefined split parameters (the original user specified split with Obol's fees added in) and call `updateSplit()` to implement the change.
+
+```solidity
+function updateSplit() external payable {
+ if (msg.sender != owner()) revert Unauthorized();
+
+ (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+ ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```
+
+If we look at the code on `SplitsMain`, we can see that this `updateSplit()` function is applied retroactively to all funds that are already in the split, because it updates the parameters without performing a distribution first:
+
+```solidity
+function updateSplit(
+ address split,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+)
+ external
+ override
+ onlySplitController(split)
+ validSplit(accounts, percentAllocations, distributorFee)
+{
+ _updateSplit(split, accounts, percentAllocations, distributorFee);
+}
+```
+
+This means that any funds that have been sent to the split but have not yet been distributed will be subject to the Obol fee. Since these splitters will be accumulating all execution layer fees, it is possible that some of them may have received large MEV bribes, where this after-the-fact fee could be quite expensive.
+
+#### Recommendation
+
+The most strict solution would be for the `ImmutableSplitController` to store both the old split parameters and the new parameters. The old parameters could first be used to call `distributeETH()` on the split, and then `updateSplit()` could be called with the new parameters.
+
+If storing both sets of values seems too complex, the alternative would be to require that `split.balance <= 1` to update the split. Then the Obol team could simply store the old parameters off chain to call `distributeETH()` on each split to "unlock" it to update the fees.
+
+(Note that for the second solution, the ETH balance should be less than or equal to 1, not 0, because 0xSplits stores empty balances as `1` for gas savings.)
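+
+As an illustrative sketch of that second option, layered onto the `updateSplit()` shown above (assuming `split` is the stored split address, as in that snippet):
+
+```solidity
+function updateSplit() external payable {
+    if (msg.sender != owner()) revert Unauthorized();
+
+    // Refuse to update while the split still holds undistributed ETH.
+    // 0xSplits stores "empty" ETH balances as 1 wei, hence the <= 1 check.
+    require(split.balance <= 1, "distribute split before updating fees");
+
+    (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+    ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```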
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[L-02] If OWR is used with rebase tokens and there's a negative rebase, principal can be lost
+
+The `OptimisticWithdrawalRecipient` is deployed with a specific token immutably set on the clone. It is presumed that that token will usually be ETH, but it can also be an ERC20 to account for future integrations with tokenized versions of ETH.
+
+In the event that one of these integrations used a rebasing version of ETH (like `stETH`), the architecture would need to be set up as follows:
+
+`OptimisticWithdrawalRecipient => rewards to something like LidoSplit.sol => Split Wallet`
+
+In this case, the OWR would need to be able to handle rebasing tokens.
+
+In the event that rebasing tokens are used, there is the risk that slashing or inactivity leads to a period with a negative rebase. In this case, the following chain of events could happen:
+
+* `distribute(PULL)` is called, setting `fundsPendingWithdrawal == balance`
+* rebasing causes the balance to decrease slightly
+* `distribute(PULL)` is called again, so when `fundsToBeDistributed = balance - fundsPendingWithdrawal` is calculated in an unchecked block, it ends up being near `type(uint256).max`
+* since this is more than `16 ether`, the first `amountOfPrincipalStake - _claimedPrincipalFunds` will be allocated to the principal recipient, and the rest to the reward recipient
+* we check that `endingDistributedFunds <= type(uint128).max`, but unfortunately this check misses the issue, because only `fundsToBeDistributed` underflows, not `endingDistributedFunds`
+* `_claimedPrincipalFunds` is set to `amountOfPrincipalStake`, so all future claims will go to the reward recipient
+* the `pullBalances` for both recipients will be set higher than the balance of the contract, and so will be unusable
+
+In this situation, the only way for the principal to get their funds back would be for the full `amountOfPrincipalStake` to hit the contract at once, and for them to call `withdraw()` before anyone called `distribute(PUSH)`. If anyone was to be able to call `distribute(PUSH)` before them, all principal would be sent to the reward recipient instead.
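+
+The wrap-around at the heart of this sequence can be reproduced in isolation (this is only an illustration of unchecked subtraction, not the OWR code itself):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.19;
+
+contract UncheckedUnderflowDemo {
+    function fundsToBeDistributed(uint256 balance, uint256 fundsPendingWithdrawal)
+        public
+        pure
+        returns (uint256 result)
+    {
+        unchecked {
+            // If a negative rebase makes `balance` slightly smaller than
+            // `fundsPendingWithdrawal`, this wraps to a value close to
+            // type(uint256).max instead of reverting, and is then treated as a
+            // distribution far larger than 16 ether.
+            result = balance - fundsPendingWithdrawal;
+        }
+    }
+}
+```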
+
+#### Recommendation
+
+Similar to #74, I would recommend removing the ability for the `OptimisticWithdrawalRecipient` to accept non-ETH tokens.
+
+Otherwise, I would recommend two changes for redundant safety:
+
+1. Do not allow the OWR to be used with rebasing tokens.
+2. Move the `_fundsToBeDistributed = _endingDistributedFunds - _startingDistributedFunds;` out of the unchecked block. The case where `_endingDistributedFunds` underflows is already handled by a later check, so this one change should be sufficient to prevent any risk of this issue.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[L-03] LidoSplit can receive ETH, which will be locked in contract
+
+Each new `LidoSplit` is deployed as a clone, which comes with a `receive()` function for receiving ETH.
+
+However, the only function on `LidoSplit` is `distribute()`, which converts `stETH` to `wstETH` and transfers it to the `splitWallet`.
+
+While this contract should only be used for Lido to pay out rewards (which will come in `stETH`), it seems possible that users may accidentally use the same contract to receive other validator rewards (in ETH), or that Lido governance may introduce ETH payments in the future, which would cause the funds to be locked.
+
+#### Proof of Concept
+
+The following test can be dropped into `LidoSplit.t.sol` to confirm that the clones can currently receive ETH:
+
+```solidity
+function testZach_CanReceiveEth() public {
+ uint before = address(lidoSplit).balance;
+ payable(address(lidoSplit)).transfer(1 ether);
+ assertEq(address(lidoSplit).balance, before + 1 ether);
+}
+```
+
+#### Recommendation
+
+Introduce an additional function to `LidoSplit.sol` which wraps ETH into stETH before calling `distribute()`, in order to rescue any ETH accidentally sent to the contract.
+
+#### Review
+
+Fixed in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87/files) by adding a `rescueFunds()` function that can send ETH or any ERC20 (except `stETH` or `wstETH`) to the `splitWallet`.
+
+### \[L-04] Upgrade to latest version of Solady to fix LibClone bug
+
+In the recent [Solady audit](https://github.com/Vectorized/solady/blob/main/audits/cantina-solady-report.pdf), an issue was found that affects LibClone.
+
+In short, LibClone assumes that the length of the immutable arguments on the clone will fit in 2 bytes. If it's larger, it overlaps other op codes and can lead to strange behaviors, including causing the deployment to fail or causing the deployment to succeed with no resulting bytecode.
+
+Because the `ImmutableSplitControllerFactory` allows the user to input arrays of any length that will be encoded as immutable arguments on the Clone, we can manipulate the length to accomplish these goals.
+
+Fortunately, failed deployments or empty bytecode (which causes a revert when `init()` is called) are not problems in this case, as the transactions will fail, and it can only happen with unrealistically long arrays that would only be used by malicious users.
+
+However, it is difficult to be sure how else this risk might be exploited by using the overflow to jump to later op codes, and it is recommended to update to a newer version of Solady where the issue has been resolved.
+
+#### Proof of Concept
+
+If we comment out the `init()` call in the `createController()` call, we can see that the following test "successfully" deploys the controller, but the result is that there is no bytecode:
+
+```solidity
+function testZach__CreateControllerSoladyBug() public {
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](28672);
+ uint32[] memory bigPercentAllocations = new uint32[](28672);
+
+ for (uint i = 0; i < 28672; i++) {
+ bigAccounts[i] = address(uint160(i));
+ if (i < 32) bigPercentAllocations[i] = 820;
+ else bigPercentAllocations[i] = 34;
+ }
+
+ ImmutableSplitController controller = factory.createController(address(8888), owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+ assert(address(controller) != address(0));
+ assert(address(controller).code.length == 0);
+}
+```
+
+#### Recommendation
+
+Delete Solady and clone it from the most recent commit, or any commit after the fixes from [PR #548](https://github.com/Vectorized/solady/pull/548/files#diff-27a3ba4730de4b778ecba4697ab7dfb9b4f30f9e3666d1e5665b194fe6c9ae45) were merged.
+
+#### Review
+
+Solady has been updated to v.0.0.123 in [PR 88](https://github.com/ObolNetwork/obol-manager-contracts/pull/88).
+
+### \[G-01] stETH and wstETH addresses can be saved on implementation to save gas
+
+The `LidoSplitFactory` contract holds two immutable values for the addresses of the `stETH` and `wstETH` tokens.
+
+When new clones are deployed, these values are encoded as immutable args. This adds the values to the contract code of the clone, so that each time a call is made, they are passed as calldata along to the implementation, which reads the values from the calldata for use.
+
+Since these values will be consistent across all clones on the same chain, it would be more gas efficient to store them in the implementation directly, which can be done with `immutable` storage values, set in the constructor.
+
+This would save 40 bytes of calldata on each call to the clone, which leads to a savings of approximately 640 gas on each call.
+
+#### Recommendation
+
+1. Add the following to `LidoSplit.sol`:
+
+```solidity
+address immutable public stETH;
+address immutable public wstETH;
+```
+
+2. Add a constructor to `LidoSplit.sol` which sets these immutable values. Solidity treats immutable values as constants and stores them directly in the contract bytecode, so they will be accessible from the clones.
+3. Remove `stETH` and `wstETH` from `LidoSplitFactory.sol` as storage values, arguments to the constructor, and arguments to `clone()`.
+4. Adjust the `distribute()` function in `LidoSplit.sol` to read the storage values for these two addresses, and remove the helper functions to read the clone's immutable arguments for these two values.
+
+#### Review
+
+Fixed as recommended in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87).
+
+### \[G-02] OWR can be simplified and save gas by not tracking distributedFunds
+
+Currently, the `OptimisticWithdrawalRecipient` contract tracks four variables:
+
+* distributedFunds: total amount of the token distributed via push or pull
+* fundsPendingWithdrawal: total balance distributed via pull that haven't been claimed yet
+* claimedPrincipalFunds: total amount of funds claimed by the principal recipient
+* pullBalances: individual pull balances that haven't been claimed yet
+
+When `_distributeFunds()` is called, we perform the following math (simplified to only include relevant updates):
+
+```solidity
+endingDistributedFunds = distributedFunds - fundsPendingWithdrawal + currentBalance;
+fundsToBeDistributed = endingDistributedFunds - distributedFunds;
+distributedFunds = endingDistributedFunds;
+```
+
+As we can see, `distributedFunds` is added to the `endingDistributedFunds` variable and then removed when calculating `fundsToBeDistributed`, having no impact on the resulting `fundsToBeDistributed` value.
+
+The `distributedFunds` variable is not read or used anywhere else on the contract.
+
+#### Recommendation
+
+We can simplify the math and save substantial gas (a storage write plus additional operations) by not tracking this value at all.
+
+This would allow us to calculate `fundsToBeDistributed` directly, as follows:
+
+```solidity
+fundsToBeDistributed = currentBalance - fundsPendingWithdrawal;
+```
+
+#### Review
+
+Fixed as recommended in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85).
+
+### \[I-01] Strong trust assumptions between validators and node operators
+
+It is assumed that validators and node operators will always act in the best interest of the group, rather than in their selfish best interest.
+
+It is important to make clear to users that there are strong trust assumptions between the various parties involved in the DVT.
+
+Here are a select few examples of attacks that a malicious set of node operators could perform:
+
+1. Since there is currently no mechanism for withdrawals besides the consensus of the node operators, a minority of them sufficient to withhold consensus could blackmail the principal for a payment of up to 16 ether in order to allow them to withdraw. Otherwise, they could turn off their nodes and force the principal to bleed down to a final withdrawn balance of just over 16 ether.
+2. Node operators are all able to propose blocks within the P2P network, which are then propagated out to the rest of the network. Node software is accustomed to signing for blocks built by block builders based on the metadata, including the quantity of fees and the address they’ll be sent to. This is enforced by social consensus, with block builders not wanting to harm validators in order to have their blocks accepted in the future. However, node operators in a DVT are not concerned with the social consensus of the network, and could therefore build blocks that include large MEV payments to their personal address (instead of the DVT’s 0xSplit), add fictitious metadata to the block header, have their fellow node operators accept the block, and take the MEV for themselves.
+3. While the withdrawal address is immutably set on the beacon chain to the OWR, the fee address is added by the nodes to each block. Any majority of node operators sufficient to reach consensus could create a new 0xSplit with only themselves on it, and use that for all execution layer fees. The principal (and other node operators) would not be able to stop them or withdraw their principal, and would be stuck with staked funds paying fees to the malicious node operators.
+
+Note that there are likely many other possible attacks that malicious node operators could perform. This report is intended to demonstrate some examples of the trust level that is needed between validators and node operators, and to emphasize the importance of making these assumptions clear to users.
+
+#### Review
+
+Acknowledged. We believe EIP 7002 will reduce this trust assumption as it would enable the validator exit via the execution layer withdrawal key.
+
+### \[I-02] Provide node operator checklist to validate setup
+
+There are a number of ways that the user setting up the DVT could plant backdoors to harm the other users involved in the DVT.
+
+Each of these risks is possible to check before signing off on the setup, but some are rather hidden, so it would be useful for the protocol to provide a list of checks that node operators should do before signing off on the setup parameters (or, even better, provide these checks for them through the front end).
+
+1. Confirm that `SplitsMain.getHash(split)` matches the hash of the parameters that the user is expecting to be used.
+2. Confirm that the controller clone delegates to the correct implementation. If not, it could be pointed to delegate to `SplitMain`, and `transferControl()` could then be called to transfer control to a user’s own address, allowing them to update the split arbitrarily.
+3. `OptimisticWithdrawalRecipient.getTranches()` should be called to check that `amountOfPrincipalStake` is equal to the amount that they will actually be providing.
+4. The controller's `owner` and future split including Obol fees should be provided to the user. They should be able to check that `ImmutableSplitControllerFactory.predictSplitControllerAddress()`, with those parameters inputted, results in the controller that is actually listed on `SplitsMain.getController(split)`.
+
+#### Review
+
+Acknowledged. We do some of these already (will add the remainder) automatically in the launchpad UI during the cluster confirmation phase by the node operator. We will also add it in markdown to the repo.
diff --git a/docs/versioned_docs/version-v0.18.0/sec/threat_model.md b/docs/versioned_docs/version-v0.18.0/sec/threat_model.md
new file mode 100644
index 0000000000..fbca3c7ce8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/sec/threat_model.md
@@ -0,0 +1,155 @@
+---
+sidebar_position: 6
+description: Threat model for a Distributed Validator
+---
+
+# Charon threat model
+
+This page outlines a threat model for Charon, in the context of it being a Distributed Validator middleware for Ethereum validator clients.
+
+## Actors
+
+- Node owner (NO)
+- Cluster node operators (CNO)
+- Rogue node operator (RNO)
+- Outside attacker (OA)
+
+## General observations
+
+This page describes some considerations the Obol core team made about the security of a distributed validator in the context of its deployment and interaction with outside actors.
+
+The goal of this threat model is to provide transparency, but it is by no means a comprehensive audit or complete security reference. It’s a sharing of the experiences and thoughts we gained during the last few years building distributed validator technologies.
+
+While to the Beacon Chain, a distributed validator is seen in much the same way as a regular validator, and thus retains some of the same security considerations, Charon’s threat model is different from a validator client’s threat model because of its general design.
+
+While a validator client owns and operates on a set of validator private keys, the design of Charon allows its node operators to rarely (if ever) see the complete validator private keys, relying instead on modern cryptography to generate partial private key shares.
+
+An Ethereum distributed validator employs advanced signature primitives such that no operator ever handles the full validator private key in any standard lifecycle step: the [BLS digital signature scheme](https://en.wikipedia.org/wiki/BLS_digital_signature) employed by the Ethereum network allows distributed validators to individually sign a blob of data and then aggregate the resulting signatures in a transparent manner, never requiring any of the participating parties to know the full private key to do so.
+
+If the number of available Charon nodes falls below a given threshold, the cluster is not able to continue with its duties.
+
+Given the collaborative nature of a Distributed Validator cluster, every operator must prioritize the liveness and well-being of the cluster. At the time of writing, Charon cannot independently reward or penalize operators within a cluster.
+
+This implies that Charon’s threat model can’t quite be equated to that of a single validator client, since they work on a different - albeit similar - set of security concepts.
+
+## Identity private key
+
+A distributed validator cluster is made up of a number of nodes, often run by a number of independent operators. For each DV cluster, there is a set of Ethereum validator private keys on whose behalf the cluster validates.
+
+Alongside those, each node (henceforth ‘operator’) holds a SECP256K1 identity private key (the ENR private key) that identifies their node to the other cluster operators’ nodes.
+
+Exfiltration of this private key could allow an outside attacker to impersonate the node, possibly leading to intra-cluster peering issues, eclipse attack risks, and degraded validator performance.
+
+Charon client communication is handled via BFT consensus, which is able to tolerate a given number of misbehaving nodes up to a certain threshold: once this threshold is reached, the cluster is not able to continue with its lifecycle and loses liveness guarantees (the validator goes offline). If more than two-thirds of nodes in a cluster are malicious, a cluster also loses safety guarantees (enough bad actors could collude to come to consensus on something slashable).
+
+Identity private key theft and the subsequent execution of a rogue cluster node is equivalent in the context of BFT consensus to a misbehaving node, hence the cluster can survive and continue with its duties up to what’s specified by the cluster’s BFT protocol’s parameters.
+
+The likelihood of this happening is low: an OA with enough knowledge of the topology of the operator’s network must steal `fault tolerance of the cluster + 1` identity private keys and run Charon nodes to subvert the distributed validator BFT consensus to push the validator offline.
+
+## Ethereum validator private key access
+
+A distributed validator cluster executes Ethereum validator duties by acting as a middleman between the beacon chain and a validator client.
+
+To do so, the cluster must have knowledge of the Ethereum validator’s private key.
+
+The design and implementation of Charon minimizes the chances of this by splitting the Ethereum validator private keys into parts, which are then assigned to each node operator.
+A [distributed key generation](https://en.wikipedia.org/wiki/Distributed_key_generation) (DKG) process is used in order to evenly and safely create the private key shares without any central party having access to the full private key.
+
+The cryptography primitives employed in Charon can allow a threshold of the node operator’s private key shares to be reconstructed into the whole validator private key if needed.
+
+While the facilities to do this are present in the form of CLI commands, as stated before, Charon never reconstructs the key in normal operation, since the BLS digital signature scheme allows for signature aggregation.
+
+A distributed validator cluster can be started in two ways:
+
+1. An existing Ethereum validator private key is split by the private key holder, and distributed in a trusted manner among the operators.
+2. The operators participate in a distributed key generation (DKG) process, to create private key shares that collectively can be used to sign validation duties as an Ethereum distributed validator. The full private key for the cluster never exists in one place during or after the DKG.
+
+In case 1, one of the node operators, K, has direct access to the Ethereum validator key and is tasked with generating the other operators’ identity keys and key shares.
+
+It is clear that in this case the entirety of the sensitive material set is as secure as K’s environment; if K is compromised or malicious, the distributed validator could be slashed.
+
+Case 2 is different, because there’s no pre-existing Ethereum validator key in a single operator's hands: it will be generated using the FROST DKG algorithm.
+
+Assuming a successful DKG process, each operator will only ever handle its own key shares instead of the full Ethereum validator private key.
+
+A set of rogue operators composed of enough members to reconstruct the original Ethereum private keys might pose the risk of slashing for a distributed validator by colluding to produce slashable messages together.
+
+We deem this scenario’s likelihood as low, as it would mean that node operators decided to willfully slash the stake they are being rewarded for staking.
+
+Still, in the context of an outside attack, purposefully slashing a validator would mean stealing multiple operator key shares, which in turn means violating many cluster operators’ security almost at the same time. This scenario may occur if there is a 0-day vulnerability in a piece of software they all run, or in case of node misconfiguration.
+
+## Rogue node operator
+
+Nodes are connected either through relay nodes or directly to one another.
+
+Each node operator is at risk of being impeded by other nodes or by the relay operator in the execution of their duties.
+
+Nodes need to expose a set of TCP ports to be able to work, and the mere fact of doing that opens up the opportunity for rogue parties to execute DDoS attacks.
+
+Another attack surface for the cluster exists in rogue nodes purposefully filling the various inter-state databases with meaningless data, or more generally submitting bogus information to the other parties to slow down the processing or, in the case of a sybil attack, bring the cluster to a halt.
+
+The likelihood of this scenario is medium, because no active intrusion is required: a rogue node operator does not need to penetrate and compromise other nodes to disturb the cluster’s lifecycle.
+
+## Outside attackers interfering with a cluster
+
+There are two levels of sophistication in an OA:
+
+1. No knowledge of the topology of the cluster: the attacker doesn’t know where each cluster node is located and so can’t force `fault tolerance + 1` nodes offline if it can’t find them.
+2. Knowledge of the topology of the network (or part of it) is possessed: the OA can operate DDoS attacks or try breaking into nodes’ servers - at that point, the “rogue node operator” scenario applies.
+
+The likelihood of this scenario is low: an OA needs extensive capabilities and sufficient incentive to be able to carry out an attack of this size.
+
+An outside attacker could also find and use vulnerabilities in the underlying cryptosystems and cryptography libraries used by Charon and other Ethereum clients. Forging signatures that fool Charon’s cryptographic library or other dependencies may be feasible, but we deem forging signatures or otherwise finding a vulnerability in either the SECP256K1+ECDSA or BLS12-381+BLS cryptosystems to be a low-likelihood risk.
+
+## Malicious beacon nodes
+
+A malicious beacon node (BN) could prevent the distributed validator from operating its validation duties, and could plausibly increase the likelihood of slashing by serving charon illegitimate information.
+
+If the number of nodes configured with the malicious BN reaches the byzantine threshold of the Charon BFT consensus protocol, the validation process can halt; if most of the nodes are byzantine, the system may even reach consensus on a set of data that isn’t valid.
+
+We deem the likelihood of this scenario to be medium depending on the trust model associated with the BNs deployment (cloud, self-hosted, SaaS product): node operators should always host or at least trust their own beacon nodes.
+
+## Malicious charon relays
+
+A Charon relay is used as a communication bridge between nodes that aren’t directly exposed on the Internet. It also acts as the peer discovery mechanism for a cluster.
+
+Once a peer’s IP address has been discovered via the relay, a direct connection can be attempted. Nodes can either communicate by exchanging data through a relay, or by using the relay as a means to establish a direct TCP connection to one another.
+
+A malicious relay owned by an OA could lead to:
+
+- Network topology discovery, facilitating the “outside attackers interfering with a cluster” scenario
+- Impeding node communication, potentially impacting the BFT consensus protocol liveness (not security) and distributed validator duties
+- DKG process disruption leading to frustration and potential abandonment by node operators: could lead to the usage of a standard Ethereum validator setup, which implies weaker security overall
+
+We note that BFT consensus liveness disruption can only happen if the number of nodes using the malicious relay for communication reaches the number of byzantine nodes tolerated by the consensus parameters.
+
+This risk can be mitigated by configuring nodes with multiple relay URLs from only [trusted entities](../int/quickstart/advanced/self-relay.md).
+
+The likelihood of this scenario is medium: Charon nodes are configured with a default set of relay nodes, so if an OA were to compromise those, it would lead to many cluster topologies getting discovered and potentially attacked and disrupted.
+
+## Compromised runtime files
+
+Charon operates with two runtime files:
+
+- A lock file used to address operators’ nodes and define the Ethereum validator public keys and the public key shares associated with them
+- A cluster definition file used to define the operator’s addresses and identities during the DKG process
+
+The lock file is signed and validated by all the nodes participating in the cluster: assuming good security practices on the node operator side, and no bugs in Charon or its dependencies’ implementations, this scenario is unlikely.
+
+If one or more node operators are using less than ideal security practices, an OA could rewire the Charon CLI flags to include the `--no-verify` flag, which disables lock file signature and hash verification (usually intended only for development purposes).
+
+By doing that, the OA can edit the lock file as it sees fit, leading to the “rogue node operator” scenario. An OA or RNO might also manage to social engineer their way into convincing other operators into running their malicious lock file with verification disabled.
+
+The likelihood of this scenario is low: an OA would need to compromise every node operator through social engineering to both use a different set of files, and to run its cluster with `--no-verify`.
+
+## Conclusions
+
+Distributed Validator Technology (DVT) helps maintain a high-assurance environment for Ethereum validators by leveraging modern cryptography to ensure no single point of failure is easily found in the system.
+
+As with any computing system, security considerations are to be expected in order to keep the environment safe.
+
+From the point of view of an Ethereum validator entity, running their services with a DV client can help greatly with availability, minimizing slashing risks, and maximizing participation in the network.
+
+On the other hand, one must take into consideration the risks involved with dishonest cluster operators, as well as rogue third-party beacon nodes or relay providers.
+
+In the end, we believe the benefits of DVT greatly outweigh the potential threats described in this overview.
diff --git a/docs/versioned_docs/version-v0.18.0/testnet.md b/docs/versioned_docs/version-v0.18.0/testnet.md
new file mode 100644
index 0000000000..f430d00a0b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.18.0/testnet.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 6
+description: Obol testnets roadmap
+---
+
+# Testnets
+
+Over the past and coming quarters, Obol Labs has coordinated, and will continue to coordinate and host, a number of progressively larger testnets to help harden the Charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features that are to be completed by each testnet, and their target start date and duration.
+
+# Testnets
+
+- [x] [Dev Net 1](#devnet-1)
+- [x] [Dev Net 2](#devnet-2)
+- [x] [Athena Public Testnet 1](#athena-public-testnet-1)
+- [x] [Bia Public Testnet 2](#bia-public-testnet-2)
+
+## Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using `docker compose` to spin up 4 Charon clients and 4 different validator clients on a single machine, with a remote consensus client. The keys were created locally in Charon and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+- A single user completes a first tutorial alone, using `docker compose` to spin up 4 Charon clients on a single machine, with a remote consensus client. The keys are created locally in Charon and activated with the existing launchpad.
+- Prove that the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works.
+- Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to Charon running across a network.
+
+## Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows **together** for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+
+This devnet was the first time `charon dkg` was tested with users. A core focus of this devnet was to collect network performance data.
+
+This was also the first time Charon was run in variable, non-virtual networks (i.e. the real internet).
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+- Groups of 4 testers complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients, each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+- Operators avoid exposing Charon to the public internet on a static IP address through the use of Obol-hosted relay nodes.
+- Users test `charon dkg`. The launchpad is not used, and this dkg is triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+- Effective collection of network performance data, to enable gathering even higher signal performance data at scale during public testnets.
+- Block proposals are in place.
+
+## Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 was to get distributed validators into the hands of the wider Obol Community for the first time. The core focus of this testnet was the onboarding experience.
+
+The core output from this testnet was a significant number of public clusters running and public feedback collected.
+
+This was an unincentivized testnet and formed the basis for us to figure out a Sybil resistance mechanism.
+
+**Participants:** Obol Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Completed date:** October 2022
+
+**Duration:** 2 weeks cluster setup, 8 weeks operation
+
+**Goals:**
+
+- Get distributed validators into the hands of the Obol Early Community for the first time.
+- Create the first public onboarding experience and gather feedback. This is the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) as possible.
+- Make deploying Ethereum validator nodes accessible using the CLI.
+- Generate a backlog of bugs, feature requests, platform requests and integration requests.
+
+## Bia Public Testnet 2
+
+This second public testnet intends to take the learning from Athena and scale the network by engaging both the wider at-home validator community and professional operators. This is the first time users are setting up DVs using the DV launchpad.
+
+This testnet is also important for learning the conditions Charon will be subjected to in production. A core output of this testnet is a large number of autonomous public DV clusters running and building up the Obol community with technical ambassadors.
+
+**Participants:** Obol Community, Ethereum staking community
+
+**State:** MVP
+
+**Network:** Görli
+
+**Target Completed date:** March 2023
+
+**Duration:** 2 weeks cluster setup, 4-8 weeks operation
+
+**Goals:**
+
+- Engage the wider Solo and Professional Ethereum Staking Community.
+- Get integration feedback.
+- Build confidence in Charon after running DVs on an Ethereum testnet.
+- Learn about the conditions Charon will be subjected to in production.
+- Distributed Validator returns are competitive versus single validator clients.
+- Make deploying Ethereum validator nodes accessible using the DV Launchpad.
+- Build comprehensive guides for various profiles to spin up DVs with minimal supervision from the core team.
diff --git a/docs/versioned_docs/version-v0.19.0/README.md b/docs/versioned_docs/version-v0.19.0/README.md
new file mode 100644
index 0000000000..5ec2ea8530
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.19.0
+
diff --git a/docs/versioned_docs/version-v0.19.0/cg/README.md b/docs/versioned_docs/version-v0.19.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.19.0/cg/bug-report.md b/docs/versioned_docs/version-v0.19.0/cg/bug-report.md
new file mode 100644
index 0000000000..9a10b3b553
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing, to avoid duplicate issues or miscommunication.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualize the issue in the clearest way possible. It's important to be concise and use clear, comprehensible language, while also providing all relevant information at hand. Use short, accurate sentences without unnecessary additions, and include all relevant specifications together with a list of steps to reproduce the problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behavior
+
+
+## Current Behavior
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+```
+
+#### Headers
+
+Headers are written in sentence case:
+
+```markdown
+## What is Charon?
+
+## Charon explained
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh, _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R. Tolkien books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│   └── proof-of-spacetime
+│       └── post-diagram.png
+├── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this page were forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)._
+
diff --git a/docs/versioned_docs/version-v0.19.0/cg/feedback.md b/docs/versioned_docs/version-v0.19.0/cg/feedback.md
new file mode 100644
index 0000000000..76042e28aa
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/cg/feedback.md
@@ -0,0 +1,5 @@
+# Feedback
+
+If you have followed our quickstart guides, whether you succeeded or failed at running a distributed validator, we would like to hear your feedback on the process and where you encountered difficulties.
+- Please let us know by joining and posting on our [Discord](https://discord.gg/n6ebKsX46w).
+- Also, feel free to add issues to our [GitHub repos](https://github.com/ObolNetwork).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/charon/README.md b/docs/versioned_docs/version-v0.19.0/charon/README.md
new file mode 100644
index 0000000000..44b46f1797
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/charon/README.md
@@ -0,0 +1,2 @@
+# charon
+
diff --git a/docs/versioned_docs/version-v0.19.0/charon/charon-cli-reference.md b/docs/versioned_docs/version-v0.19.0/charon/charon-cli-reference.md
new file mode 100644
index 0000000000..3637cd7ee7
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/charon/charon-cli-reference.md
@@ -0,0 +1,362 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+sidebar_position: 5
+---
+
+# CLI reference
+
+:::warning
+
+The `charon` client is under heavy development; interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.19.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.19.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+The following are the top-level commands available to use.
+
+```markdown
+charon --help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ alpha Alpha subcommands provide early access to in-development features
+ combine Combines the private key shares of a distributed validator cluster into a set of standard validator private keys.
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Prints a new ENR for this node
+ help Help about any command
+ relay Start a libp2p relay server
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+## The `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+```
+
+### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It identifies this charon client to the other charon clients in its cluster across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+```
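+For example, running the command with its defaults creates a private key in the `.charon` directory and prints the corresponding ENR. The output below is a sketch; your ENR value will differ:
+
+```shell
+charon create enr
+# Prints something like:
+# Created ENR private key: .charon/charon-enr-private-key
+# enr:-JG4QF... (illustrative, truncated)
+```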
+
+### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster-lock.json` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --cluster-dir string The target folder to create the cluster in. (default "./")
+ --definition-file string Optional path to a cluster definition file or an HTTP URL. This overrides all other configuration flags.
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for cluster
+ --insecure-keys Generates insecure keystore files. This should never be used. It is not supported on mainnet.
+ --keymanager-addresses strings Comma separated list of keymanager URLs to import validator key shares to. Note that multiple addresses are required, one for each node in the cluster, with node0's keyshares being imported to the first address, node1's keyshares to the second, and so on.
+ --keymanager-auth-tokens strings Authentication bearer tokens to interact with the keymanager URLs. Don't include the "Bearer" symbol, only include the api-token.
+ --name string The cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky.
+ --nodes int The number of charon nodes in the cluster. Minimum is 3.
+ --num-validators int The number of distributed validators needed in the cluster.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version of the custom test network (in hex).
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+ --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
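+As a minimal sketch, the following creates a local 4-node test cluster managing a single distributed validator on the Goerli testnet. The addresses are placeholders and must be replaced with your own:
+
+```shell
+charon create cluster \
+  --name="my-test-cluster" \
+  --nodes=4 \
+  --num-validators=1 \
+  --network=goerli \
+  --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
+  --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
+  --cluster-dir="./test-cluster"
+```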
+
+### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file that is used as input to the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, frost (default "default")
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky. (default "mainnet")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings [REQUIRED] Comma-separated list of each operator's Charon ENR address.
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
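+For illustration, a coordinator could propose a 4-operator cluster managing one validator as follows. The operator ENRs and the addresses are placeholders, collected from each operator beforehand:
+
+```shell
+charon create dkg \
+  --name="my-first-dkg" \
+  --network=goerli \
+  --num-validators=1 \
+  --operator-enrs="enr:-JG4Q...node0,enr:-JG4Q...node1,enr:-JG4Q...node2,enr:-JG4Q...node3" \
+  --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
+  --fee-recipient-addresses="0x0000000000000000000000000000000000000000"
+```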
+
+## The `dkg` subcommand
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit data for each new distributed validator. The command outputs the `cluster-lock.json` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file or an HTTP URL. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --keymanager-address string The keymanager URL to import validator keyshares.
+ --keymanager-auth-token string Authentication bearer token to interact with keymanager API. Don't include the "Bearer" symbol, only include the api-token.
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --shutdown-delay duration Graceful shutdown delay. (default 1s)
+```
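+With the resulting `cluster-definition.json` shared with every operator, each of them runs the ceremony at the agreed time. A minimal invocation, assuming the definition file sits in the local `.charon` directory, looks like:
+
+```shell
+charon dkg --definition-file=".charon/cluster-definition.json" --data-dir=".charon"
+```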
+
+## The `run` subcommand
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster-lock.json` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs.
+ --builder-api Enables the builder api. Will only produce builder blocks. Builder API must also be enabled on the validator client. Beacon node must be connected to a builder-relay to access the builder network.
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing.
+ --jaeger-service string Service name used for jaeger tracing. (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --manifest-file string The path to the cluster manifest file. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-manifest.pb")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof). (default "127.0.0.1:3620")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --private-key-file-lock Enables private key locking to prevent multiple instances using the same key.
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-beacon-mock-fuzz Configures simnet beaconmock to return fuzzed responses.
+ --simnet-slot-duration duration Configures slot duration in simnet beacon mock. (default 1s)
+ --simnet-validator-keys-dir string The directory containing the simnet validator key shares. (default ".charon/validator_keys")
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --synthetic-block-proposals Enables additional synthetic block proposal duties. Used for testing of rare duties.
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version in hex of the custom test network.
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API. (default "127.0.0.1:3600")
+```
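+As a sketch, a typical invocation points charon at a locally running beacon node and binds the default ports. The beacon endpoint and the external IP below are placeholders for your own values:
+
+```shell
+charon run \
+  --beacon-node-endpoints="http://localhost:5052" \
+  --lock-file=".charon/cluster-lock.json" \
+  --validator-api-address="127.0.0.1:3600" \
+  --p2p-tcp-address="0.0.0.0:3610" \
+  --p2p-external-ip="203.0.113.10"
+```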
+
+## The `combine` subcommand
+
+### Combine distributed validator key shares into a single Validator key
+
+The `combine` command combines many validator key shares into a single Ethereum validator key.
+
+```markdown
+charon combine --help
+Combines the private key shares from a threshold of operators in a distributed validator cluster into a set of validator private keys that can be imported into a standard Ethereum validator client.
+
+Warning: running the resulting private keys in a validator alongside the original distributed validator cluster *will* result in slashing.
+
+Usage:
+ charon combine [flags]
+
+Flags:
+ --cluster-dir string Parent directory containing a number of .charon subdirectories from the required threshold of nodes in the cluster. (default ".charon/cluster")
+ --force Overwrites private keys with the same name if present.
+ -h, --help Help for combine
+ --no-verify Disables cluster definition and lock file verification.
+ --output-dir string Directory to output the combined private keys to. (default "./validator_keys")
+```
+
+To run this command, one needs the `.charon` directories of at least a threshold number of node operators, organized into a single folder:
+
+```shell
+tree ./cluster
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+That is, each operator's `.charon` directory must be placed in a parent directory and renamed (for example, to `node0`, `node1`, and so on) to avoid conflicts.
+
+If, for example, the lock file defines 2 validators, each `validator_keys` directory must contain exactly 4 files: a JSON and a TXT file for each validator.
+
+Those files must be named with an increasing index associated with the validator in the lock file, starting from 0.
+
+The chosen folder name does not matter, as long as it's different from `.charon`.
+
+At the end of the process `combine` will create a new set of directories containing one validator key each, named after its public key:
+
+```shell
+charon combine --cluster-dir="./cluster" --output-dir="./combined"
+tree ./combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+By default, the `combine` command will refuse to overwrite any private key that is already present in the destination directory.
+
+To force the process, use the `--force` flag.
+
+:::warning
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+**Ensure your distributed validator cluster is completely shut down for at least two epochs before starting a replacement validator or you are likely to be slashed.**
+:::
+
+## Host a relay
+
+Relays run a libp2p [circuit relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) server that allows the charon clients in a cluster to discover one another and to be reached even when they are behind strict NAT gateways. If you want to self-host a relay for your cluster(s), the following command will start one.
+
+```markdown
+charon relay --help
+Starts a libp2p relay that charon nodes can use to bootstrap their p2p cluster
+
+Usage:
+ charon relay [flags]
+
+Flags:
+ --auto-p2pkey Automatically create a p2pkey (secp256k1 private key used for p2p authentication and ENR) if none found in data directory. (default true)
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for relay
+ --http-address string Listening address (ip and port) for the relay http server serving runtime ENR. (default "127.0.0.1:3640")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --monitoring-address string Listening address (ip and port) for the prometheus and pprof monitoring http server. (default "127.0.0.1:3620")
+ --p2p-advertise-private-addresses Enable advertising of libp2p auto-detected private addresses. This doesn't affect manually provided p2p-external-ip/hostname.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-max-connections int Libp2p maximum number of peers that can connect to this relay. (default 16384)
+ --p2p-max-reservations int Updates max circuit reservations per peer (each valid for 30min) (default 512)
+ --p2p-relay-loglevel string Libp2p circuit relay log level. E.g., debug, info, warn, error.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+```
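+For example, a self-hosted relay could be started as follows, binding its HTTP and libp2p ports on all interfaces; adjust the ports and data directory to your environment:
+
+```shell
+charon relay \
+  --http-address="0.0.0.0:3640" \
+  --p2p-tcp-address="0.0.0.0:3610" \
+  --data-dir="./relay-data"
+```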
+You can also consider adding [alternative public relays](../int/faq/risks.md#risk-obol-hosting-the-relay-infrastructure) to your cluster by specifying a list of `p2p-relays` in [`charon run`](#run-the-charon-middleware).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/charon/cluster-configuration.md b/docs/versioned_docs/version-v0.19.0/charon/cluster-configuration.md
new file mode 100644
index 0000000000..d05f53dc3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/charon/cluster-configuration.md
@@ -0,0 +1,161 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+sidebar_position: 3
+---
+
+# Cluster configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client or cluster.
+
+A charon cluster is configured in two steps:
+
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+In the case of a solo operator running a cluster, the [`charon create cluster`](./charon-cli-reference.md#create-a-full-cluster-locally) command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
+## Cluster Definition File
+
+The `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+### Using the CLI
+
+The [`charon create dkg`](./charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) command is used to create the `cluster-definition.json` file which is used as input to `charon dkg`.
+
+The schema of the `cluster-definition.json` is defined as:
+
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "creator": {
+ "address": "0x123..abfc", //ETH1 address of the creator
+ "config_signature": "0x123654...abcedf" // EIP712 Signature of config_hash using creator privkey
+ },
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "config_signature": "0x123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "0x123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.2.0", // Schema version
+ "timestamp": "2022-01-01T12:00:00+00:00", // Creation timestamp
+ "num_validators": 2, // Number of distributed validators to be created in cluster-lock.json
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "validators": [
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ },
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ }
+ ],
+ "dkg_algorithm": "foo_dkg_v1", // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "0xabcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "0xabcdef...abcedef" // Final hash of all fields
+}
+```
+
+### Using the DV Launchpad
+
+- A [`leader/creator`](../int/quickstart/group/index.md) that wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader/creator` uses the user interface to configure all of the important details about the cluster including:
+ - The `Withdrawal Address` for the created validators
+ - The `Fee Recipient Address` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like DKG algorithm to use) are serialized and merklized to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the `leader/creator` is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralized backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralization of the launchpad.)
+
+## Cluster Lock File
+
+The `cluster-lock.json` has the following schema:
+
+```json
+{
+  "cluster_definition": {...}, // Cluster definition JSON, identical schema to above
+ "distributed_validators": [ // Length equal to cluster_definition.num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "abc...fed", "cfd...bfe"], // Length equal to cluster_definition.operators
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
+
+## Cluster Size and Resilience
+
+The cluster size (the number of nodes/operators in the cluster) determines the resilience of the cluster: its ability to remain operational under diverse failure scenarios.
+Larger clusters can tolerate more faulty nodes.
+However, increased cluster size implies higher operational costs and potential network latency, which may negatively affect performance.
+
+Optimal cluster size is therefore a trade-off between resilience (larger is better) and cost-efficiency and performance (smaller is better).
+
+Cluster resilience can be broadly classified into two categories:
+ - **[Byzantine Fault Tolerance (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault)** - the ability to tolerate nodes that are actively trying to disrupt the cluster.
+ - **[Crash Fault Tolerance (CFT)](https://en.wikipedia.org/wiki/Fault_tolerance)** - the ability to tolerate nodes that have crashed or are otherwise unavailable.
+
+Different cluster sizes tolerate different counts of byzantine vs crash nodes.
+In practice, hardware and software crash relatively frequently, while byzantine behaviour is relatively uncommon.
+However, Byzantine Fault Tolerance is crucial for trust minimised systems like distributed validators.
+Thus, cluster size can be chosen to optimise for either BFT or CFT.
+
+The table below lists different cluster sizes and their characteristics:
+ - `Cluster Size` - the number of nodes in the cluster.
+ - `Threshold` - the minimum number of nodes that must collaborate to reach consensus quorum and to create signatures.
+ - `BFT #` - the maximum number of byzantine nodes that can be tolerated.
+ - `CFT #` - the maximum number of crashed nodes that can be tolerated.
+
+| Cluster Size | Threshold | BFT # | CFT # | Note |
+|--------------|-----------|-------|-------|------------------------------------|
+| 1 | 1 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 2 | 2 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 3 | 2 | 0 | 1 | ⚠️ Warning: CFT but not BFT! |
+| 4 | 3 | 1 | 1 | ✅ CFT and BFT optimal for 1 faulty |
+| 5 | 4 | 1 | 1 | |
+| 6 | 4 | 1 | 2 | ✅ CFT optimal for 2 crashed |
+| 7 | 5 | 2 | 2 | ✅ BFT optimal for 2 byzantine |
+| 8 | 6 | 2 | 2 | |
+| 9 | 6 | 2 | 3 | ✅ CFT optimal for 3 crashed |
+| 10 | 7 | 3 | 3 | ✅ BFT optimal for 3 byzantine |
+| 11 | 8 | 3 | 3 | |
+| 12 | 8 | 3 | 4 | ✅ CFT optimal for 4 crashed |
+| 13 | 9 | 4 | 4 | ✅ BFT optimal for 4 byzantine |
+| 14 | 10 | 4 | 4 | |
+| 15 | 10 | 4 | 5 | ✅ CFT optimal for 5 crashed |
+| 16 | 11 | 5 | 5 | ✅ BFT optimal for 5 byzantine |
+| 17 | 12 | 5 | 5 | |
+| 18 | 12 | 5 | 6 | ✅ CFT optimal for 6 crashed |
+| 19 | 13 | 6 | 6 | ✅ BFT optimal for 6 byzantine |
+| 20 | 14 | 6 | 6 | |
+| 21 | 14 | 6 | 7 | ✅ CFT optimal for 7 crashed |
+| 22 | 15 | 7 | 7 | ✅ BFT optimal for 7 byzantine |
+
+The table above is determined by the QBFT consensus algorithm with the
+following formulas from [this](https://arxiv.org/pdf/1909.10194.pdf) paper:
+
+```
+n = cluster size
+
+Threshold: min number of honest nodes required to reach quorum given size n
+Quorum(n) = ceiling(2n/3)
+
+BFT #: max number of faulty (byzantine) nodes given size n
+f(n) = floor((n-1)/3)
+
+CFT #: max number of unavailable (crashed) nodes given size n
+crashed(n) = n - Quorum(n)
+```
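+As a quick sanity check of these formulas, the illustrative shell snippet below recomputes the threshold, BFT and CFT columns of the table above:
+
+```shell
+for n in $(seq 1 22); do
+  threshold=$(( (2*n + 2) / 3 ))  # ceiling(2n/3) using integer arithmetic
+  bft=$(( (n - 1) / 3 ))          # floor((n-1)/3)
+  cft=$(( n - threshold ))        # n - Quorum(n)
+  echo "size=$n threshold=$threshold bft=$bft cft=$cft"
+done
+```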
diff --git a/docs/versioned_docs/version-v0.19.0/charon/dkg.md b/docs/versioned_docs/version-v0.19.0/charon/dkg.md
new file mode 100644
index 0000000000..ef9cd1ecdf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/charon/dkg.md
@@ -0,0 +1,74 @@
+---
+sidebar_position: 2
+description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key
+ Generation (DKG) Ceremony.
+---
+
+# Distributed Key Generation
+
+## Overview
+
+A [**distributed validator key**](../int/key-concepts.md#distributed-validator-key) is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+To make a distributed validator with no fault-tolerance (i.e. all nodes need to be online to sign every message), due to the BLS signature scheme used by Proof of Stake Ethereum, each key share could be chosen by operators independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together (4 randomly chosen points on a graph don't all necessarily sit on the same order three curve). To do this in a secure manner with no one party being trusted to distribute the keys requires what is known as a [**distributed key generation ceremony**](../int/key-concepts.md#distributed-validator-key-generation-ceremony).
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](cluster-configuration.md).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+* An `Operator` is identified by their Ethereum address. They will sign a message with this address to authorize their charon client to take part in the DKG ceremony.
+* A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p/tree/master/p2p/security/noise). These keys need to be created (and backed up) by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This cluster definition specifies the intended cluster configuration before keys have been created in a distributed key generation ceremony. The `cluster-definition.json` file can be created with the help of the [Distributed Validator Launchpad](cluster-configuration.md#using-the-dv-launchpad) or via the [CLI](cluster-configuration.md#using-the-cli).
+
+## Carrying out the DKG ceremony
+
+Once all participants have signed the cluster definition, they can load the `cluster-definition` file into their charon client, and the client will attempt to complete the DKG.
+
+Charon will read the ENRs in the definition, confirm that its ENR is present, and then will reach out to relays that are deployed to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which will update the IP address, increment the ENR's nonce, and re-sign it with the client's private key. If an ENR with a higher nonce is seen by a charon client, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required; charon does the work, outputs the following files to each machine, and then exits.
+
+## Backing up the ceremony artifacts
+
+At the end of a DKG ceremony, each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+* **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+* **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+* **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participant's old keys out of a distributed validator in favor of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However, for now, without a backup the safest thing to do would be to exit the validator.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+* Do the public key shares combine together to form the group public key?
+ * This can be checked on chain as it does not require a pairing operation
+ * This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+* Do the created BLS public keys attest to their `cluster_definition_hash`?
+ * This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ * If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ * As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+* Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ * VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ * PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ * A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ * Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Sample Configuration and Lock Files
+
+Refer to the details in [Cluster Configuration](cluster-configuration.md).
diff --git a/docs/versioned_docs/version-v0.19.0/charon/intro.md b/docs/versioned_docs/version-v0.19.0/charon/intro.md
new file mode 100644
index 0000000000..9c322c0ac8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/charon/intro.md
@@ -0,0 +1,69 @@
+---
+sidebar_position: 1
+description: Charon - The Distributed Validator Client
+---
+
+# Introduction
+
+This section introduces and outlines the Charon _\[kharon]_ middleware, Obol's implementation of DVT. Please see the [key concepts](../int/key-concepts.md) section as background and context.
+
+## What is Charon?
+
+Charon is a GoLang-based, HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator together. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+
+
+## Charon Architecture
+
+Charon is an Ethereum proof of stake distributed validator (DV) client. Like any validator client, its main purpose is to perform validation duties for the Beacon Chain, primarily attestations and block proposals. The beacon client handles a lot of the heavy lifting, leaving the validator client to focus on fetching duty data, signing that data, and submitting it back to the beacon client.
+
+Charon is designed as a generic event-driven workflow with different components coordinating to perform validation duties. All duties follow the same flow, the only difference being the signed data. The workflow can be divided into phases consisting of one or more components:
+
+
+
+### Determine **when** duties need to be performed
+
+The beacon chain is divided into [slots](https://eth2book.info/bellatrix/part3/config/types/#slot) and [epochs](https://eth2book.info/bellatrix/part3/config/types/#epoch), deterministic fixed-size chunks of time. The first step is to determine when (in which slot/epoch) duties need to be performed. This is done by the `scheduler` component. It queries the beacon node to detect which validators defined in the cluster lock are active, and what duties they need to perform for the upcoming epoch and slots. When such a slot starts, the `scheduler` emits an event indicating which validator needs to perform what duty.
+
+### Fetch and come to consensus on **what** data to sign
+
+A DV cluster consists of multiple operators each provided with one of the M-of-N threshold BLS private key shares per validator. The key shares are imported into the validator clients which produce partial signatures. Charon threshold aggregates these partial signatures before broadcasting them to the Beacon Chain. _But to threshold aggregate partial signatures, each validator must sign the same data._ The cluster must therefore coordinate and come to a consensus on what data to sign.
+
+`Fetcher` fetches the unsigned duty data from the beacon node upon receiving an event from `Scheduler`.\
+For attestations, this is the unsigned attestation, for block proposals, this is the unsigned block.
+
+The `Consensus` component listens to events from Fetcher and starts a [QBFT](https://docs.goquorum.consensys.net/configure-and-manage/configure/consensus-protocols/qbft/) consensus game with the other Charon nodes in the cluster for that specific duty and slot. When consensus is reached, the resulting unsigned duty data is stored in the `DutyDB`.
+
+### **Wait** for the VC to sign
+
+Charon is a **middleware** distributed validator client. That means Charon doesn’t have access to the validator private key shares and cannot sign anything on demand. Instead, operators import the key shares into industry-standard validator clients (VC) that are configured to connect to their local Charon client instead of their local Beacon node directly.
+
+Charon, therefore, serves the [Ethereum Beacon Node API](https://ethereum.github.io/beacon-APIs/#/) from the `ValidatorAPI` component and intercepts some endpoints while proxying other endpoints directly to the upstream Beacon node.
+
+The VC queries the `ValidatorAPI` for unsigned data which is retrieved from the `DutyDB`. It then signs it and submits it back to the `ValidatorAPI` which stores it in the `PartialSignatureDB`.
+
+### **Share** partial signatures
+
+The `PartialSignatureDB` stores the partially signed data submitted by the local Charon client’s VC. But it also stores all the partial signatures submitted by the VCs of other peers in the cluster. This is achieved by the `PartialSignatureExchange` component that exchanges partial signatures between all peers in the cluster. All charon clients, therefore, store all partial signatures the cluster generates.
+
+### **Threshold Aggregate** partial signatures
+
+The `SignatureAggregator` is invoked as soon as sufficient (any M of N) partial signatures are stored in the `PartialSignatureDB`. It performs BLS threshold aggregation of the partial signatures resulting in a final signature that is valid for the beacon chain.
+
+### **Broadcast** final signature
+
+Finally, the `Broadcaster` component broadcasts the final threshold aggregated signature to the Beacon client, thereby completing the duty.
+
+### Ports
+
+The following is an outline of the services that can be exposed by charon.
+
+* **:3600** - The validator REST API. This is the port that serves the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/). This is the port validator clients should talk to instead of their standard consensus client REST API port. Charon subsequently proxies these requests to the upstream consensus client specified by `--beacon-node-endpoints`.
+* **:3610** - Charon P2P port. This is the port that charon clients use to communicate with one another via TCP. This endpoint should be port-forwarded on your router and exposed publicly, preferably on a static IP address. This IP address should then be set on the charon run command with `--p2p-external-ip` or `CHARON_P2P_EXTERNAL_IP`.
+* **:3620** - Monitoring port. This port hosts a webserver that serves prometheus metrics on `/metrics`, a readiness endpoint on `/readyz` and a liveness endpoint on `/livez`, and a pprof server on `/debug/pprof`. This port should not be exposed publicly.
+
+## Getting started
+
+For more information on running charon, take a look at our [Quickstart Guides](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.19.0/charon/networking.md b/docs/versioned_docs/version-v0.19.0/charon/networking.md
new file mode 100644
index 0000000000..076981a5c4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/charon/networking.md
@@ -0,0 +1,84 @@
+---
+sidebar_position: 4
+description: Networking
+---
+
+# Charon networking
+
+## Overview
+
+This document describes Charon's networking model which can be divided into two parts: the [_internal validator stack_](networking.md#internal-validator-stack) and the [_external p2p network_](networking.md#external-p2p-network).
+
+## Internal Validator Stack
+
+
+Charon is a middleware DVT client: it connects to an upstream beacon node, and a downstream validator client connects to it. Each operator should run the whole validator stack (all 4 client software types), either on the same machine or on different machines. The networking between the nodes should be private and not exposed to the public internet.
+
+Related Charon configuration flags:
+
+* `--beacon-node-endpoints`: Connects Charon to one or more beacon nodes.
+* `--validator-api-address`: Address for Charon to listen on and serve requests from the validator client.
+
+## External P2P Network
+
+ The Charon clients in a DV cluster are connected to each other via a small p2p network consisting of only the clients in the cluster. Peer IP addresses are discovered via an external "relay" server. The p2p connections are over the public internet so the charon p2p port must be publicly accessible. Charon leverages the popular [libp2p](https://libp2p.io/) protocol.
+
+Related [Charon configuration flags](charon-cli-reference.md):
+
+* `--p2p-tcp-address`: Addresses for Charon to listen on and serve p2p requests.
+* `--p2p-relays`: Connect charon to one or more relay servers.
+* `--private-key-file`: Private key identifying the charon client.
+
+### LibP2P Authentication and Security
+
+Each charon client has a secp256k1 private key. The associated public key is encoded into the [cluster lock file](cluster-configuration.md#cluster-lock-file) to identify the nodes in the cluster. For ease of use and to align with the Ethereum ecosystem, Charon encodes these public keys in the [ENR format](https://eips.ethereum.org/EIPS/eip-778), not in [libp2p’s Peer ID format](https://docs.libp2p.io/concepts/fundamentals/peers/).
+
+:::warning
+
+Each Charon node's secp256k1 private key is critical for authentication and must be kept secure to prevent cluster compromise.
+
+Do not use the same key across multiple clusters, as this can lead to security issues.
+
+For more on p2p security, refer to [libp2p's article](https://docs.libp2p.io/concepts/security/security-considerations).
+
+:::
+
+Charon currently only supports libp2p tcp connections with [noise](https://noiseprotocol.org/) security and only accepts incoming libp2p connections from peers defined in the cluster lock.
+
+### LibP2P Relays and Peer Discovery
+
+Relays are simple libp2p servers that are publicly accessible supporting the [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) protocol. Circuit-relay is a libp2p transport protocol that routes traffic between two peers over a third-party “relay” peer.
+
+Obol hosts a publicly accessible relay at https://0.relay.obol.tech and will work with other organisations in the community to host alternatives. Anyone can host their own relay server for their DV cluster.
+
+Each charon node knows which peers are in the cluster from the ENRs in the cluster lock file, but their IP addresses are unknown. By connecting to the same relay, nodes establish “relay connections” to each other. Once connected via relay they exchange their known public addresses via libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol. The relay connection is then upgraded to a direct connection. If a node’s public IP changes, nodes once again connect via relay, exchange the new IP, and then connect directly once again.
+
+Note that in order for two peers to discover each other, they must connect to the same relay. Cluster operators should therefore coordinate which relays to use.
+
+Libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol attempts to automatically detect the public IP address of a charon client without the need to explicitly configure it. If this fails, however, the following two configuration flags can be used to explicitly set the publicly advertised address:
+
+* `--p2p-external-ip`: Explicitly sets the external IP address.
+* `--p2p-external-hostname`: Explicitly sets the external DNS host name.
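+
+If auto-detection fails, one of these flags can be set explicitly; the hostname and IP below are purely illustrative placeholders:
+
+```
+charon run --p2p-external-hostname="charon-node0.example.com"
+# or, if the node has a stable public IP:
+charon run --p2p-external-ip="203.0.113.10"
+```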
+
+:::warning
+If a pair of charon clients is not publicly accessible, for example because they are behind a NAT, they will not be able to upgrade their relay connection to a direct connection. Although operating over a relay connection is supported, it isn’t recommended: relay connections introduce additional latency and reduced throughput, which decreases validator effectiveness and can lead to missed block proposals and attestations.
+:::
+
+Libp2p’s circuit-relay connections are end-to-end encrypted. Even though relay servers accept connections from nodes in multiple different clusters, they merely route opaque connections. And since charon only accepts incoming connections from other peers in its cluster, the use of a relay doesn’t allow connections between clusters.
+
+Only the following three libp2p protocols are established between a charon node and a relay:
+
+* [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/): To establish end-to-end encrypted relay connections between two peers in a cluster.
+* [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify): Auto-detection of public IP addresses to share with other peers in the cluster.
+* [peerinfo](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfo.go): Exchanges basic application [metadata](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfopb/v1/peerinfo.proto) for improved operational metrics and observability.
+
+All other charon protocols are only established between nodes in the same cluster.
+
+### Scalable Relay Clusters
+
+In order for a charon client to connect to a relay, it needs the relay's [multiaddr](https://docs.libp2p.io/concepts/fundamentals/addressing/) (containing its public key and IP address). But a single multiaddr can only point to a single relay server which can easily be overloaded if too many clusters connect to it. Charon therefore supports resolving a relay’s multiaddr via HTTP GET request. Since charon also includes the unique `cluster-hash` header in this request, the relay provider can use [consistent header-based load-balancing](https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_steering_header-based_routing) to map clusters to one of many relays using a single HTTP address.
+
+The relay supports serving its runtime public multiaddrs via its `--http-address` flag.
+
+For example, https://0.relay.obol.tech is actually a load balancer that routes HTTP requests to one of many relays based on the `cluster-hash` header, returning the target relay’s multiaddr, which the charon client then uses to connect to that relay.
+
+The charon `--p2p-relays` flag therefore supports both multiaddrs and HTTP URLs.
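+
+Both forms are sketched below; the multiaddr is a placeholder, so substitute your relay's actual IP, port, and peer ID:
+
+```
+# Resolve the relay via an HTTP URL (load-balanced, as described above):
+charon run --p2p-relays="https://0.relay.obol.tech"
+
+# Or point directly at a relay using its libp2p multiaddr:
+charon run --p2p-relays="/ip4/203.0.113.10/tcp/3640/p2p/<relay-peer-id>"
+```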
diff --git a/docs/versioned_docs/version-v0.19.0/dvl/README.md b/docs/versioned_docs/version-v0.19.0/dvl/README.md
new file mode 100644
index 0000000000..1b694a8473
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/dvl/README.md
@@ -0,0 +1,2 @@
+# dvl
+
diff --git a/docs/versioned_docs/version-v0.19.0/dvl/intro.md b/docs/versioned_docs/version-v0.19.0/dvl/intro.md
new file mode 100644
index 0000000000..73a585e85b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/dvl/intro.md
@@ -0,0 +1,27 @@
+---
+sidebar_position: 1
+description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Introduction
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: [**The DV Launchpad**](https://goerli.launchpad.obol.tech/).
+
+## Getting started
+
+For more information on running charon in a UI friendly way through the DV Launchpad, take a look at our [Quickstart Guides](../int/quickstart/index.md).
+
+## DV Launchpad Links
+
+| Ethereum Network | Launchpad |
+| ---------------- | ----------------------------------- |
+| Mainnet | https://beta.launchpad.obol.tech |
+| Holesky | https://holesky.launchpad.obol.tech |
+| Sepolia | https://sepolia.launchpad.obol.tech |
+| Goerli | https://goerli.launchpad.obol.tech |
diff --git a/docs/versioned_docs/version-v0.19.0/fr/README.md b/docs/versioned_docs/version-v0.19.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.19.0/fr/eth.md b/docs/versioned_docs/version-v0.19.0/fr/eth.md
new file mode 100644
index 0000000000..8bc102205e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/fr/eth.md
@@ -0,0 +1,54 @@
+---
+sidebar_position: 1
+description: Ethereum and its relationship with DVT
+---
+
+# Ethereum and its Relationship with DVT
+
+Our goal for this page is to equip you with the foundational knowledge needed to actively contribute to the advancement of Obol while also directing you to valuable Ethereum and DVT related resources. Additionally, we will shed light on the intersection of DVT and Ethereum, offering curated articles and blog posts to enhance your understanding.
+
+## **Understanding Ethereum**
+
+To grasp the current landscape of Ethereum's PoS development, we encourage you to delve into the wealth of information available on the [Official Ethereum Website.](https://ethereum.org/en/learn/) The Ethereum website serves as a hub for all things Ethereum, catering to individuals at various levels of expertise, whether you're just starting your journey or are an Ethereum veteran. Here, you'll find a trove of resources that cater to diverse learning needs and preferences, ensuring that there's something valuable for everyone in the Ethereum community to discover.
+
+## **DVT & Ethereum**
+
+### Distributed Validator Technology
+
+> "Distributed validator technology (DVT) is an approach to validator security that spreads out key management and signing responsibilities across multiple parties, to reduce single points of failure, and increase validator resiliency.
+>
+> It does this by splitting the private key used to secure a validator across many computers organized into a "cluster". The benefit of this is that it makes it very difficult for attackers to gain access to the key, because it is not stored in full on any single machine. It also allows for some nodes to go offline, as the necessary signing can be done by a subset of the machines in each cluster. This reduces single points of failure from the network and makes the whole validator set more robust." _(ethereum.org, 2023)_
+
+#### Learn More About Distributed Validator technology from [The Official Ethereum Website](https://ethereum.org/en/staking/dvt/)
+
+### How Does DVT Improve Staking on Ethereum?
+
+If you haven’t yet heard, Distributed Validator Technology, or DVT, is the next big thing on The Merge section of the Ethereum roadmap. Learn more about this in our blog post: [What is DVT and How Does It Improve Staking on Ethereum?](https://blog.obol.tech/what-is-dvt-and-how-does-it-improve-staking-on-ethereum/)
+
+
+
+_**Vitalik's Ethereum Roadmap**_
+
+### Deep Dive Into DVT and Charon’s Architecture
+
+Minimizing correlation is vital when designing DVT as Ethereum Proof of Stake is designed to heavily punish correlated behavior. In designing Obol, we’ve made careful choices to create a trust-minimized and non-correlated architecture.
+
+[**Read more about Designing Non-Correlation Here**](https://blog.obol.tech/deep-dive-into-dvt-and-charons-architecture/)
+
+### Performance Testing Distributed Validators
+
+In our mission to help make Ethereum consensus more resilient and decentralised with distributed validators (DVs), it’s critical that we do not compromise on the performance and effectiveness of validators. Earlier this year, we worked with MigaLabs, the blockchain ecosystem observatory located in Barcelona, to perform an independent test to validate the performance of Obol DVs under different configurations and conditions. After taking a few weeks to fully analyse the results together with MigaLabs, we’re happy to share the results of these performance tests.
+
+[**Read More About The Performance Test Results Here**](https://blog.obol.tech/performance-testing-distributed-validators/)
+
+
+
+### More Resources
+
+* [Sorting out Distributed Validator Technology](https://medium.com/nethermind-eth/sorting-out-distributed-validator-technology-a6f8ca1bbce3)
+* [A tour of Verifiable Secret Sharing schemes and Distributed Key Generation protocols](https://medium.com/nethermind-eth/a-tour-of-verifiable-secret-sharing-schemes-and-distributed-key-generation-protocols-3c814e0d47e1)
+* [Threshold Signature Schemes](https://medium.com/nethermind-eth/threshold-signature-schemes-36f40bc42aca)
+
+#### References
+
+* ethereum.org. (2023). Distributed Validator Technology. \[online] Available at: https://ethereum.org/en/staking/dvt/ \[Accessed 25 Sep. 2023].
diff --git a/docs/versioned_docs/version-v0.19.0/fr/testnet.md b/docs/versioned_docs/version-v0.19.0/fr/testnet.md
new file mode 100644
index 0000000000..d533c60095
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/fr/testnet.md
@@ -0,0 +1,122 @@
+---
+sidebar_position: 2
+description: Community testing efforts
+---
+
+# Community Testing
+
+:::tip
+
+This page looks at the community testing efforts organised by Obol to test Distributed Validators at scale. If you are looking for guides to run a Distributed Validator on testnet you can do so [here](../int/quickstart/index.md).
+
+:::
+
+Over the last few years, Obol Labs has coordinated and hosted a number of progressively larger testnets to help harden the Charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the testnet roadmap, the features that were to be completed by each testnet, and their completion date and duration.
+
+# Testnets
+
+- [x] [Devnet 1](#devnet-1)
+- [x] [Devnet 2](#devnet-2)
+- [x] [Athena Public Testnet 1](#athena-public-testnet-1)
+- [x] [Bia Public Testnet 2](#bia-public-testnet-2)
+
+## Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using `docker compose` to spin up 4 Charon clients and 4 different validator clients on a single machine, with a remote consensus client. The keys were created locally in Charon and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+- A single user completes a first tutorial alone, using `docker compose` to spin up 4 Charon clients on a single machine, with a remote consensus client. The keys are created locally in Charon and activated with the existing launchpad.
+- Prove that the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works.
+- Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to Charon running across a network.
+
+## Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows **together** for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+
+This devnet was the first time `charon dkg` was tested with users. A core focus of this devnet was to collect network performance data.
+
+This was also the first time Charon was run in variable, non-virtual networks (i.e. the real internet).
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+- Groups of 4 testers complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients, each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+- Operators avoid exposing Charon to the public internet on a static IP address through the use of Obol-hosted relay nodes.
+- Users test `charon dkg`. The launchpad is not used, and this dkg is triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+- Effective collection of network performance data, to enable gathering even higher signal performance data at scale during public testnets.
+- Block proposals are in place.
+
+## Athena Public Testnet 1
+
+With tutorials for solo and group flows developed and refined, the goal for public testnet 1 was to get distributed validators into the hands of the wider Obol Community for the first time. The core focus of this testnet was the onboarding experience.
+
+The core output from this testnet was a significant number of public clusters running and public feedback collected.
+
+This was an unincentivized testnet and formed the basis for us to figure out a Sybil resistance mechanism.
+
+**Participants:** Obol Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Completed date:** October 2022
+
+**Duration:** 2 weeks cluster setup, 8 weeks operation
+
+**Goals:**
+
+- Get distributed validators into the hands of the Obol Early Community for the first time.
+- Create the first public onboarding experience and gather feedback. This is the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) as possible.
+- Make deploying Ethereum validator nodes accessible using the CLI.
+- Generate a backlog of bugs, feature requests, platform requests and integration requests.
+
+## Bia Public Testnet 2
+
+This second public testnet took the learnings from Athena and scaled the network by engaging both the wider at-home validator community and professional operators. It was also the first time users set up DVs using the DV Launchpad.
+
+This testnet was also important for learning about the conditions Charon will be subjected to in production. A core output of this testnet was a large number of autonomous public DV clusters running, and the growth of the Obol community with technical ambassadors.
+
+**Participants:** Obol Community, Ethereum staking community
+
+**State:** MVP
+
+**Network:** Görli
+
+**Completed date:** March 2023
+
+**Duration:** 2 weeks cluster setup, 4-8 weeks operation
+
+**Goals:**
+
+- Engage the wider Solo and Professional Ethereum Staking Community.
+- Get integration feedback.
+- Build confidence in Charon after running DVs on an Ethereum testnet.
+- Learn about the conditions Charon will be subjected to in production.
+- Distributed Validator returns are competitive versus single validator clients.
+- Make deploying Ethereum validator nodes accessible using the DV Launchpad.
+- Build comprehensive guides for various profiles to spin up DVs with minimal supervision from the core team.
diff --git a/docs/versioned_docs/version-v0.19.0/int/README.md b/docs/versioned_docs/version-v0.19.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.19.0/int/faq/README.md b/docs/versioned_docs/version-v0.19.0/int/faq/README.md
new file mode 100644
index 0000000000..456ad9139a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/faq/README.md
@@ -0,0 +1,2 @@
+# faq
+
diff --git a/docs/versioned_docs/version-v0.19.0/int/faq/dkg_failure.md b/docs/versioned_docs/version-v0.19.0/int/faq/dkg_failure.md
new file mode 100644
index 0000000000..33ffe9c496
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/faq/dkg_failure.md
@@ -0,0 +1,82 @@
+---
+sidebar_position: 4
+description: Handling DKG failure
+---
+
+# Handling DKG failure
+
+While the DKG process has been tested and validated against many different configuration instances, it can still encounter issues which might result in failure.
+
+Our DKG is designed in a way that doesn't allow for inconsistent results: either it finishes correctly for every peer, or it fails.
+
+This is a **safety** feature: you don't want to deposit an Ethereum distributed validator that not every operator is able to participate in, or that can't even reach its signing threshold.
+
+The most common source of issues lies in the network stack: if any peer's internet connection glitches substantially, the DKG will fail.
+
+Charon's DKG doesn't allow peer reconnection once the process is started, but it does allow for re-connections before that.
+
+When you see the following message:
+
+```
+14:08:34.505 INFO dkg Waiting to connect to all peers...
+```
+
+this means your Charon instance is waiting for all the other cluster peers to start their DKG process: at this stage, peers can disconnect and reconnect at will, and the DKG process will still continue.
+
+A log line will confirm the connection of a new peer:
+
+```
+14:08:34.523 INFO dkg Connected to peer 1 of 3 {"peer": "fantastic-adult"}
+14:08:34.529 INFO dkg Connected to peer 2 of 3 {"peer": "crazy-bunch"}
+14:08:34.673 INFO dkg Connected to peer 3 of 3 {"peer": "considerate-park"}
+```
+
+As soon as all the peers are connected, this message will be shown:
+
+```
+14:08:34.924 INFO dkg All peers connected, starting DKG ceremony
+```
+
+Past this stage **no disconnections are allowed**, and _all peers must leave their terminals open_ in order for the DKG process to complete: this is a synchronous phase, and every peer is required in order to reach completion.
+
+If for some reason the DKG process fails, you would see error logs that resemble this:
+
+```
+14:28:46.691 ERRO cmd Fatal error: sync step: p2p connection failed, please retry DKG: context canceled
+```
+
+As the error message suggests, the DKG process needs to be retried.
+
+## Cleaning up the `.charon` directory
+
+One cannot simply retry the DKG process: Charon refuses to overwrite any runtime file in order to avoid inconsistencies and private key loss.
+
+When attempting to re-run a DKG with an unclean data directory -- which is either `.charon` or what was specified with the `--data-dir` CLI parameter -- this is the error that will be shown:
+
+```
+14:44:13.448 ERRO cmd Fatal error: data directory not clean, cannot continue {"disallowed_entity": "cluster-lock.json", "data-dir": "/compose/node0"}
+```
+
+The `disallowed_entity` field lists all the files that Charon refuses to overwrite, while `data-dir` is the full path of the runtime directory the DKG process is using.
+
+In order to retry the DKG process one must delete the following entities, if present:
+
+ - `validator_keys` directory
+ - `cluster-lock.json` file
+ - `deposit-data.json` file
+
+:::warning
+The `charon-enr-private-key` file **must be preserved**. Failure to do so requires the DKG process to be restarted from the beginning by creating a new cluster definition.
+:::
+
+If you're doing a DKG with a custom cluster definition - for example, one created with `charon create dkg` rather than the Obol Launchpad - you can re-use the same cluster definition file.
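+
+As a rough sketch, the cleanup could look like the following; the paths assume the default `.charon` data directory inside your node folder, so adjust them if you use `--data-dir`, and double-check each path before deleting anything:
+
+```
+rm -r .charon/validator_keys                            # key shares from the failed ceremony
+rm .charon/cluster-lock.json .charon/deposit-data.json
+# Do NOT delete .charon/charon-enr-private-key - it must be preserved.
+```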
+
+Once this process has been completed, the cluster operators can retry a DKG.
+
+## Further debugging
+
+If for some reason the DKG process fails again, node operators are advised to reach out to the Obol team by opening an [issue](https://github.com/ObolNetwork/charon/issues), detailing what troubleshooting steps were taken and providing **debug logs**.
+
+To enable debug logs first clean up the Charon data directory as explained in [the previous paragraph](#cleaning-up-the-charon-directory), then run your DKG command by appending `--log-level=debug` at the end.
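+
+For example, if you run the ceremony directly with the charon binary, the retried command is simply your usual one with the extra flag appended (keep whatever other flags you normally pass):
+
+```
+charon dkg --log-level=debug
+```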
+
+In order for the Obol team to debug your issue as quickly and precisely as possible, please provide full logs in textual form, not screenshots or photos of your display.
+
+Providing complete logs is particularly important, since it allows the team to reconstruct precisely what happened.
diff --git a/docs/versioned_docs/version-v0.19.0/int/faq/general.md b/docs/versioned_docs/version-v0.19.0/int/faq/general.md
new file mode 100644
index 0000000000..74f06d327a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/faq/general.md
@@ -0,0 +1,68 @@
+---
+sidebar_position: 1
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+## General
+
+### Does Obol have a token?
+
+No. Distributed validators use only Ether.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/n6ebKsX46w) too.
+
+### Where does the name Charon come from?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) \[kharon] is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
+
+### What are the hardware requirements for running a Charon node?
+
+Charon alone uses negligible disk space of not more than a few MBs. However, if you are running your consensus client and execution client on the same server as charon, then you will typically need the same hardware as running a full Ethereum node:
+
+At minimum:
+
+* A CPU with 2+ physical cores (or 4 vCPUs)
+* 8GB RAM
+* 1.5TB+ free SSD disk space (for mainnet)
+* 10 Mb/s internet bandwidth
+
+Recommended specifications:
+
+* A CPU with 4+ physical cores
+* 16GB+ RAM
+* 2TB+ free disk on a high performance SSD (e.g. NVMe)
+* 25 Mb/s internet bandwidth
+
+For more hardware considerations, check out the [ethereum.org guides](https://ethereum.org/en/developers/docs/nodes-and-clients/run-a-node/#environment-and-hardware) which explores various setups and trade-offs, such as running the node locally or in the cloud.
+
+For now, Geth, Teku & Lighthouse clients are packaged within the docker compose file provided in the [quickstart guides](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/group/README.md), so you don't have to install anything else to run a cluster. Just make sure you give them some time to sync once you start running your node.
+
+### What is the difference between a node, a validator and a cluster?
+
+A node is a single instance of Ethereum EL+CL clients that can communicate with other nodes to maintain the Ethereum blockchain.
+
+A validator is a node that participates in the consensus process by verifying transactions and creating new blocks. Multiple validators can run from the same node.
+
+A cluster is a group of nodes that act together as one or several validators, which allows for a more efficient use of resources, reduces operational costs, and provides better reliability and fault tolerance.
+
+### Can I migrate an existing Charon node to a new machine?
+
+It is possible to migrate your Charon node to another machine running the same config by moving the `.charon` folder and its contents to your new machine. Make sure the EL and CL clients on the new machine are synced before proceeding with the move, to minimize downtime.
+
+## Distributed Key Generation
+
+### What are the min and max numbers of operators for a Distributed Validator?
+
+Currently, the minimum is 4 operators with a threshold of 3.
+
+The threshold (aka quorum) corresponds to the minimum numbers of operators that need to be active for the validator(s) to be able to perform its duties. It is defined by the following formula `n-(ceil(n/3)-1)`. We strongly recommend using this default threshold in your DKG as it maximises liveness while maintaining BFT safety. Setting a 4 out of 4 cluster for example, would make your validator more vulnerable to going offline instead of less vulnerable. You can check the recommended threshold values for a cluster [here](../key-concepts.md#distributed-validator-threshold).
+
+## Debugging Errors in Logs
+
+You can check if the containers on your node are outputting errors by running `docker compose logs` on a machine with a running cluster.
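+
+For example, to follow only the charon container's logs, or to scan all services for problems, something like the following can be used (the service names assume the unmodified `charon-distributed-validator-node` compose file):
+
+```
+docker compose logs -f charon                  # follow charon's logs
+docker compose logs | grep -iE "error|warn"    # scan all services for errors and warnings
+```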
+
+Diagnose some common errors and view their resolutions [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/faq/errors.mdx).
diff --git a/docs/versioned_docs/version-v0.19.0/int/faq/risks.md b/docs/versioned_docs/version-v0.19.0/int/faq/risks.md
new file mode 100644
index 0000000000..eccd9af3bb
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/faq/risks.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 3
+description: Centralization Risks and mitigation
+---
+
+# Centralization risks and mitigation
+
+## Risk: Obol hosting the relay infrastructure
+**Mitigation**: Self-host a relay
+
+One of the risks associated with Obol hosting the [LibP2P relays](../../charon/networking.md) infrastructure allowing peer discovery is that if Obol-hosted relays go down, peers won't be able to discover each other and perform the DKG. To mitigate this risk, external organizations and node operators can consider self-hosting a relay. This way, if Obol's relays go down, the clusters can still operate through other relays in the network. Ensure that all nodes in the cluster use the same relays, or they will not be able to find each other if they are connected to different relays.
+
+The following non-Obol entities run relays that you can consider adding to your cluster (you can have more than one per cluster, see the `--p2p-relays` flag of [`charon run`](../../charon/charon-cli-reference.md#the-run-command)):
+
+| Entity | Relay URL |
+|-----------|---------------------------------------|
+| [DSRV](https://www.dsrvlabs.com/) | https://charon-relay.dsrvlabs.dev |
+| [Infstones](https://infstones.com/) | https://obol-relay.infstones.com:3640/ |
+| [Hashquark](https://www.hashquark.io/) | https://relay-2.prod-relay.721.land/ |
+| [Figment](https://figment.io/) | https://relay-1.obol.figment.io/ |
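+
+For example, every node in the cluster could be configured with the Obol relay plus one of the relays above; treat the exact value syntax as a sketch, and keep the relay list identical across all nodes:
+
+```
+charon run --p2p-relays="https://0.relay.obol.tech,https://charon-relay.dsrvlabs.dev"
+```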
+
+## Risk: Obol being able to update Charon code
+**Mitigation**: Pin specific docker versions or compile from source on a trusted commit
+
+Another risk associated with Obol is having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) running on the network which could introduce vulnerabilities or malicious code. To mitigate this risk, operators can consider pinning specific versions of the code that have been thoroughly tested and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community.
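+
+One way to do this with the quickstart repo is to pin the image tag in a compose override file; the service name and tag below are assumptions based on the public `obolnetwork/charon` images and should be checked against your own setup:
+
+```
+cat > docker-compose.override.yml <<'EOF'
+services:
+  charon:
+    image: obolnetwork/charon:v0.19.0   # an explicit, reviewed release tag
+EOF
+docker compose up -d
+```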
+
+## Risk: Obol hosting the DV Launchpad
+**Mitigation**: Use [`create cluster`](../../charon/charon-cli-reference.md) or [`create dkg`](../../charon/charon-cli-reference.md) locally and distribute the files manually
+
+Hosting the first Charon frontend, the [DV Launchpad](../../dvl/intro.md), on a centralized server could create a single point of failure, as users would have to rely on Obol's server to access the protocol. This could limit the decentralization of the protocol and could make it vulnerable to attacks or downtime. Obol hosting the launchpad on a decentralized network, such as IPFS, is a first step but not enough. This is why the Charon code is open-source and contains a CLI interface to interact with the protocol locally.
+
+To mitigate the risk of launchpad failure, consider using the `create cluster` or `create dkg` commands locally and distributing the key shares files manually.
+
+
+## Risk: Obol going bust/rogue
+**Mitigation**: Use key recovery
+
+The final centralization risk associated with Obol is the possibility of the company going bankrupt or acting maliciously, which would lead to a loss of control over the network and potentially cause damage to the ecosystem. To mitigate this risk, Obol has implemented a key recovery mechanism. This would allow the clusters to continue operating and to retrieve full private keys even if Obol is no longer able to provide support.
+
+A guide to recombine key shares into a single private key can be accessed [here](../quickstart/advanced/quickstart-combine.md).
diff --git a/docs/versioned_docs/version-v0.19.0/int/key-concepts.md b/docs/versioned_docs/version-v0.19.0/int/key-concepts.md
new file mode 100644
index 0000000000..5bc714083f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/key-concepts.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is possible with the use of **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes some of the single points of failure in validation. Should <33% of the participating nodes in a DV cluster go offline, the remaining active nodes can still come to consensus on what to sign and can produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes Geth, Lighthouse, Charon and Teku.
+
+### Execution Client
+
+
+
+An execution client (formerly known as an Eth1 client) specializes in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/charon/intro/README.md).
+
+### Validator Client
+
+
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Threshold
+
+The number of nodes in a cluster that need to be online and honest for their distributed validators to remain operational is outlined in the following table.
+
+| Cluster Size | Threshold | Note |
+| :----------: | :-------: | --------------------------------------------- |
+| 4 | 3/4 | Minimum threshold |
+| 5 | 4/5 | |
+| 6 | 4/6 | Minimum to tolerate two offline nodes |
+| 7 | 5/7 | Minimum to tolerate two **malicious** nodes |
+| 8 | 6/8 | |
+| 9 | 6/9 | Minimum to tolerate three offline nodes |
+| 10 | 7/10 | Minimum to tolerate three **malicious** nodes |
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata. Read more about these ceremonies [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/charon/dkg/README.md).
diff --git a/docs/versioned_docs/version-v0.19.0/int/overview.md b/docs/versioned_docs/version-v0.19.0/int/overview.md
new file mode 100644
index 0000000000..9fbb40b434
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that distributed validators will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing infrastructure.
+
+Similar to how roll-up technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking middlewares that can be adopted at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/dvl/intro/README.md), a [User Interface](https://goerli.launchpad.obol.tech/) for bootstrapping Distributed Validators
+* [Charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/charon/intro/README.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Splits](../sc/introducing-obol-splits.md), a set of solidity smart contracts for the distribution of rewards from Distributed Validators
+* [Obol Testnets](../fr/testnet.md), distributed validator infrastructure for Ethereum public test networks, enabling operators of any size to test their deployment before running Distributed Validators on mainnet.
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+### The Vision
+
+The road to decentralizing stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivization is still extremely valuable. Individuals and staking as a service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group of individuals that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators, to a sufficient level of trust, to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivization scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivization alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivization layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/README.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/README.md
new file mode 100644
index 0000000000..bd2483c7cf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/README.md
@@ -0,0 +1,2 @@
+# quickstart
+
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/activate-dv.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/activate-dv.md
new file mode 100644
index 0000000000..44fad69b6b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/activate-dv.md
@@ -0,0 +1,54 @@
+---
+sidebar_position: 4
+description: Activate the Distributed Validator using the deposit contract
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Activate a DV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+If you have successfully created a distributed validator and you are ready to activate it, congratulations! 🎉
+
+Once you have connected all of your charon clients together, and synced all of your Ethereum nodes such that the monitoring indicates they are all healthy and ready to operate, **ONE operator** may proceed to deposit and activate the validator(s).
+
+The `deposit-data.json` to be used to deposit will be located in each operator's `.charon` folder. The copies across every node should be identical and any of them can be uploaded.
+
+:::warning
+If you are being given a `deposit-data.json` file that you didn't generate yourself, please take extreme care to ensure this operator has not given you a malicious `deposit-data.json` file that is not the one you expect. Cross reference the files from multiple operators if there is any doubt. Activating the wrong validator or an invalid deposit could result in complete theft or loss of funds.
+:::
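+
+One simple cross-check is for every operator to hash their copy of the file and compare the results out-of-band; all hashes should match exactly (the path assumes the default `.charon` folder):
+
+```
+sha256sum .charon/deposit-data.json
+```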
+
+Use any of the following tools to deposit. Please use the third-party tools at your own risk and always double check the staking contract address.
+
+
+
+
+
+
+
+ - Obol Distributed Validator Launchpad
+ - ethereum.org Staking Launchpad
+ - From a SAFE Multisig (repeat these steps for every validator to deposit in your cluster):
+   - From the SAFE UI, click on `New Transaction` then `Transaction Builder` to create a new custom transaction
+   - Enter the beacon chain deposit contract address for mainnet - you can find it on the official [ethereum.org deposit contract page](https://ethereum.org/en/staking/deposit-contract/)
+   - Fill in the transaction information
+   - Set the amount to `32` in ETH
+   - Use your `deposit-data.json` to fill the required data: `pubkey`, `withdrawal_credentials`, `signature`, and `deposit_data_root`. Make sure to prefix each input with `0x` to format them as bytes
+   - Click on `Add transaction`
+   - Click on `Create Batch`
+   - Click on `Send Batch`; you can click on `Simulate` to check whether the transaction will execute successfully
+   - Get the minimum threshold of signatures from the other addresses and execute the custom transaction
+
+
+
+
+
+The activation process can take a minimum of 16 hours, with the maximum time to activation being dictated by the length of the activation queue, which can be weeks.
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/README.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/README.md
new file mode 100644
index 0000000000..965416d689
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/README.md
@@ -0,0 +1,2 @@
+# advanced
+
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/adv-docker-configs.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/adv-docker-configs.md
new file mode 100644
index 0000000000..d14de53e8b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/adv-docker-configs.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 8
+description: Use advanced docker-compose features to have more flexibility and power to change the default configuration.
+---
+
+# Advanced Docker Configs
+
+:::info
+This section is intended for *docker power users*, i.e., for those who are familiar with working with `docker-compose` and want to have more flexibility and power to change the default configuration.
+:::
+
+We use the "Multiple Compose File" feature which provides a very powerful way to override any configuration in `docker-compose.yml` without needing to modify git-checked-in files since that results in conflicts when upgrading this repo.
+See [this](https://docs.docker.com/compose/extends/#multiple-compose-files) for more details.
+
+There are some additional compose files in [this repository](https://github.com/ObolNetwork/charon-distributed-validator-node/), `compose-debug.yml` and `docker-compose.override.yml.sample`, along with the default `docker-compose.yml` file, that you can use for this purpose.
+
+- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To include them, you can run:
+
+```
+docker compose -f docker-compose.yml -f compose-debug.yml up
+```
+
+- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
+
+- To use it, just copy the sample file to `docker-compose.override.yml` and customise it to your liking. Please create this file ONLY when you want to tweak something. This is because the default override file is empty and docker errors if you provide an empty compose file.
+
+```
+cp docker-compose.override.yml.sample docker-compose.override.yml
+
+# Tweak docker-compose.override.yml and then run docker compose up
+docker compose up
+```
+
+- You can also run all these compose files together. This is desirable when you want to use both features. For example, you may want to have some debugging containers AND also want to override some defaults. To achieve this, you can run:
+
+```
+docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
+```
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/monitoring.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/monitoring.md
new file mode 100644
index 0000000000..fdbec169b9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/monitoring.md
@@ -0,0 +1,100 @@
+---
+sidebar_position: 4
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+# Getting Started Monitoring your Node
+
+Welcome to this comprehensive guide, designed to assist you in effectively monitoring your Charon cluster and nodes, and setting up alerts based on specified parameters.
+
+## Pre-requisites
+
+Ensure the following software is installed:
+
+- Docker: Find the installation guide for Ubuntu **[here](https://docs.docker.com/engine/install/ubuntu/)**
+- Prometheus: You can install it using the guide available **[here](https://prometheus.io/docs/prometheus/latest/installation/)**
+- Grafana: Follow this **[link](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)** to install Grafana
+
+## Import Pre-Configured Charon Dashboards
+
+- Navigate to the **[repository](https://github.com/ObolNetwork/monitoring/tree/main/dashboards)** that contains a variety of Grafana dashboards. For this demonstration, we will utilize the Charon Dashboard json.
+
+- In your Grafana interface, create a new dashboard and select the import option.
+
+- Copy the content of the Charon Dashboard json from the repository and paste it into the import box in Grafana. Click "Load" to proceed.
+
+- Finalize the import by clicking on the "Import" button. At this point, your dashboard should begin displaying metrics. Ensure your Charon client and Prometheus are operational for this to occur.
+
+## Example Alerting Rules
+
+To create alerts for Node-Exporter, follow these steps based on the sample rules provided on the "Awesome Prometheus alerts" page:
+
+1. Visit the **[Awesome Prometheus alerts](https://samber.github.io/awesome-prometheus-alerts/rules.html#host-and-hardware)** page. Here, you will find lists of Prometheus alerting rules categorized by hardware, system, and services.
+
+2. Depending on your need, select the category of alerts. For example, if you want to set up alerts for your system's CPU usage, click on the 'CPU' under the 'Host & Hardware' category.
+
+3. On the selected page, you'll find specific alert rules like 'High CPU Usage'. Each rule will provide the PromQL expression, alert name, and a brief description of what the alert does. You can copy these rules.
+
+4. Paste the copied rules into your Prometheus configuration file under the `rules` section. Make sure you understand each rule before adding it to avoid unnecessary alerts.
+
+5. Finally, save and apply the configuration file. Prometheus should now trigger alerts based on these rules.
+
+
+For alerts specific to Charon/Alpha, refer to the alerting rules available on this [ObolNetwork/monitoring](https://github.com/ObolNetwork/monitoring/tree/main/alerting-rules).
+
+## Understanding Alert Rules
+
+1. `ClusterBeaconNodeDown`: This alert is activated when the beacon node in a specified Alpha cluster is offline. The beacon node is crucial for validating transactions and producing new blocks. Its unavailability could disrupt the overall functionality of the cluster.
+2. `ClusterBeaconNodeSyncing`: This alert indicates that the beacon node in a specified Alpha cluster is synchronizing, i.e., catching up with the latest blocks in the cluster.
+3. `ClusterNodeDown`: This alert is activated when a node in a specified Alpha cluster is offline.
+4. `ClusterMissedAttestations`: This alert indicates that there have been missed attestations in a specified Alpha cluster. Missed attestations may suggest that validators are not operating correctly, compromising the security and efficiency of the cluster.
+5. `ClusterInUnknownStatus`: This alert is designed to activate when a node within the cluster is detected to be in an unknown state. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` metric is 0.
+6. `ClusterInsufficientPeers`: This alert is set to activate when the number of peers for a node in the Alpha M1 Cluster #1 is insufficient. The condition is evaluated by checking whether the maximum of `app_monitoring_readyz` equals 4.
+7. `ClusterFailureRate`: This alert is activated when the failure rate of the Alpha M1 Cluster #1 exceeds a certain threshold.
+8. `ClusterVCMissingValidators`: This alert is activated if any validators in the Alpha M1 Cluster #1 are missing.
+9. `ClusterHighPctFailedSyncMsgDuty`: This alert is activated if a high percentage of sync message duties failed in the cluster. The alert is activated if the sum of the increase in failed duties tagged with "sync_message" in the last hour, divided by the sum of the increase in total duties tagged with "sync_message" in the last hour, is greater than 0.1.
+10. `ClusterNumConnectedRelays`: This alert is activated if the number of connected relays in the cluster falls to 0.
+11. `PeerPingLatency`: This alert is activated if the 90th percentile of the ping latency to the peers in a cluster exceeds 500ms within 2 minutes.
+
+## Best Practices for Monitoring Charon Nodes & Cluster
+
+- **Establish Baselines**: Familiarize yourself with the normal operation metrics like CPU, memory, and network usage. This will help you detect anomalies.
+- **Define Key Metrics**: Set up alerts for essential metrics, encompassing both system-level and Charon-specific ones.
+- **Configure Alerts**: Based on these metrics, set up actionable alerts.
+- **Monitor Network**: Regularly assess the connectivity between nodes and the network.
+- **Perform Regular Health Checks**: Consistently evaluate the status of your nodes and clusters.
+- **Monitor System Logs**: Keep an eye on logs for error messages or unusual activities.
+- **Assess Resource Usage**: Ensure your nodes are neither over- nor under-utilized.
+- **Automate Monitoring**: Use automation to ensure no issues go undetected.
+- **Conduct Drills**: Regularly simulate failure scenarios to fine-tune your setup.
+- **Update Regularly**: Keep your nodes and clusters updated with the latest software versions.
+
+## Third-Party Services for Uptime Testing
+
+- [updown.io](https://updown.io/)
+- [Grafana synthetic Monitoring](https://grafana.com/grafana/plugins/grafana-synthetic-monitoring-app/)
+
+## Key metrics to watch to verify node health based on jobs
+
+- Node Exporter:
+
+**CPU Usage**: High or spiking CPU usage can be a sign of a process demanding more resources than it should.
+
+**Memory Usage**: If a node is consistently running out of memory, it could be due to a memory leak or simply under-provisioning.
+
+**Disk I/O**: Slow disk operations can cause applications to hang or delay responses. High disk I/O can indicate storage performance issues or a sign of high load on the system.
+
+**Network Usage**: High network traffic or packet loss can signal network configuration issues, or that a service is being overwhelmed by requests.
+
+**Disk Space**: Running out of disk space can lead to application errors and data loss.
+
+**Uptime**: The amount of time a system has been up without any restarts. Frequent restarts can indicate instability in the system.
+
+**Error Rates**: The number of errors encountered by your application. This could be 4xx/5xx HTTP errors, exceptions, or any other kind of error your application may log.
+
+**Latency**: The delay before a transfer of data begins following an instruction for its transfer.
+
+It is also important to check:
+
+- NTP clock skew
+- Process restarts and failures (eg. through `node_systemd`)
+- alert on high error and panic log counts.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/obol-monitoring.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/obol-monitoring.md
new file mode 100644
index 0000000000..8d9e0ceca1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/obol-monitoring.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 5
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+
+# Push Metrics to Obol Monitoring
+
+:::info
+This is **optional** and does not confer any special privileges within the Obol Network.
+:::
+
+You may have been provided with **Monitoring Credentials** used to push distributed validator metrics to Obol's central prometheus cluster to monitor, analyze, and improve your Distributed Validator Cluster's performance.
+
+The provided credentials need to be added to `prometheus/prometheus.yml`, replacing `$PROM_REMOTE_WRITE_TOKEN`. They will look like:
+```
+obol20!tnt8U!C...
+```
+
+The updated `prometheus/prometheus.yml` file should look like:
+```
+global:
+ scrape_interval: 30s # Set the scrape interval to every 30 seconds.
+ evaluation_interval: 30s # Evaluate rules every 30 seconds.
+
+remote_write:
+ - url: https://vm.monitoring.gcp.obol.tech/write
+ authorization:
+ credentials: obol20!tnt8U!C...
+
+scrape_configs:
+ - job_name: 'charon'
+ static_configs:
+ - targets: ['charon:3620']
+ - job_name: "lodestar"
+ static_configs:
+ - targets: [ "lodestar:5064" ]
+ - job_name: 'node-exporter'
+ static_configs:
+ - targets: ['node-exporter:9100']
+```
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-builder-api.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-builder-api.md
new file mode 100644
index 0000000000..b6af1be01f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-builder-api.md
@@ -0,0 +1,163 @@
+---
+sidebar_position: 2
+description: Run a distributed validator cluster with the builder API (MEV-Boost)
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Run a cluster with MEV enabled
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+This quickstart guide focuses on configuring the builder API for Charon and supported validator and consensus clients.
+
+## Getting started with Charon & the Builder API
+
+Running a distributed validator cluster with the builder API enabled will give the validators in the cluster access to the builder network. This builder network is a network of "Block Builders"
+who work with MEV searchers to produce the most valuable blocks a validator can propose.
+
+[MEV-Boost](https://boost.flashbots.net/) is one such product from flashbots that enables you to ask multiple
+block relays (who communicate with the "Block Builders") for blocks to propose. The block that pays the largest reward to the validator will be signed and returned to the relay for broadcasting to the wider
+network. The end result for the validator is generally an increased APR as they receive some share of the MEV.
+
+:::info
+Before completing this guide, please check your cluster version, which can be found inside the `cluster-lock.json` file. If you are using cluster-lock version `1.7.0` or a higher release version, Obol seamlessly accommodates all validator client implementations within a MEV-enabled distributed validator cluster.
+
+For clusters with a cluster-lock version `1.6.0` and below, charon is compatible only with [Teku](https://github.com/ConsenSys/teku). Use the version history feature of this documentation to see the instructions for configuring a cluster in that manner (`v0.16.0`).
+:::
+
+## Client configuration
+
+:::note
+You need to add CLI flags to your consensus client, charon client, and validator client to enable the builder API.
+
+You need all operators in the cluster to have their nodes properly configured to use the builder API, or you risk missing a proposal.
+:::
+
+### Charon
+
+Charon supports the builder API with the `--builder-api` flag. To use the builder API, simply add this flag to the `charon run` command:
+
+```
+charon run --builder-api
+```
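+
+The flag can usually also be supplied as an environment variable: charon generally maps its CLI flags to `CHARON_`-prefixed environment variables (dashes become underscores), which is convenient when configuring the docker compose repos through their `.env` files. Treat the exact variable name as an assumption and confirm it against your charon version's configuration reference:
+
+```sh
+# Environment-variable form of the --builder-api flag (assumed CHARON_ mapping)
+CHARON_BUILDER_API=true charon run
+```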
+
+### Consensus Clients
+
+The following flags need to be configured on your chosen consensus client. A Flashbots relay URL is provided for example purposes; you should choose a relay that suits your preferences from [this list](https://github.com/eth-educators/ethstaker-guides/blob/main/MEV-relay-list.md#mev-relay-list-for-mainnet).
+
+
+
+ Teku can communicate with a single relay directly:
+
+
+ {String.raw`--builder-endpoint="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`--builder-endpoint=http://mev-boost:18550`}
+
+
+
+
+ Lighthouse can communicate with a single relay directly:
+
+
+ {String.raw`lighthouse bn --builder "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`lighthouse bn --builder "http://mev-boost:18550"`}
+
+
+
+
+
+
+ {String.raw`prysm beacon-chain --http-mev-relay "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
+
+
+ {String.raw`--payload-builder=true --payload-builder-url="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+  You should also consider adding `--local-block-value-boost 3` as a flag, to favour locally built blocks if they are within 3% in value of the relay block, to improve the chances of a successful proposal.
+
+
+
+
+ {String.raw`--builder --builder.urls "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
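+If you go the MEV-Boost sidecar route mentioned above, a minimal sketch of running the sidecar against a couple of relays looks roughly like this (flag names per the MEV-Boost README; the relay URLs below are placeholders you should replace with entries from the relay list linked earlier):
+
+```sh
+# Run a MEV-Boost sidecar for goerli, listening on the default port 18550
+mev-boost -goerli \
+  -relay-check \
+  -relays "https://pubkey1@relay-one.example,https://pubkey2@relay-two.example"
+```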
+
+### Validator Clients
+
+The following flags need to be configured on your chosen validator client:
+
+
+
+
+
+ {String.raw`teku validator-client --validators-builder-registration-default-enabled=true`}
+
+
+
+
+
+
+
+ {String.raw`lighthouse vc --builder-proposals`}
+
+
+
+
+
+
+ {String.raw`prysm validator --enable-builder`}
+
+
+
+
+
+
+ {String.raw`--payload-builder=true`}
+
+
+
+
+
+
+ {String.raw`--builder="true" --builder.selection="builderonly"`}
+
+
+
+
+
+## Verify your cluster is correctly configured
+
+It can be difficult to confirm everything is configured correctly with your cluster until a proposal opportunity arrives, but here are some things you can check.
+
+When your cluster is running, you should see if charon is logging something like this each epoch:
+```
+13:10:47.094 INFO bcast Successfully submitted validator registration to beacon node {"delay": "24913h10m12.094667699s", "pubkey": "84b_713", "duty": "1/builder_registration"}
+```
+
+This indicates that your charon node is successfully registering with the relay for a blinded block when the time comes.
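+
+If you prefer not to watch the full log stream, you can filter for these registrations directly; the service name `charon` below is an assumption based on the default docker compose file:
+
+```sh
+# Show builder registration submissions logged by the charon container
+docker compose logs charon | grep builder_registration
+```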
+
+If you are using the [ultrasound relay](https://relay.ultrasound.money), you can enter your cluster's distributed validator public key(s) into their website, to confirm they also see the validator as correctly registered.
+
+You should check that your validator client's logs look healthy, and ensure that you haven't added a `fee-recipient` address that conflicts with what has been selected by your cluster in your cluster-lock file, as that may prevent your validator from producing a signature for the block when the opportunity arises. You should also confirm the same for all of the other peers in your cluster.
+
+Once a proposal has been made, you should look at the `Block Extra Data` field under `Execution Payload` for the block on [Beaconcha.in](https://beaconcha.in/block/18450364), and confirm there is text present; this generally suggests the block came from a builder, and was not a locally constructed block.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-combine.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-combine.md
new file mode 100644
index 0000000000..c7ce44360f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-combine.md
@@ -0,0 +1,112 @@
+---
+sidebar_position: 9
+description: Combine distributed validator private key shares to recover the validator private key.
+---
+
+# Combine DV private key shares
+
+:::warning
+Reconstituting Distributed Validator private key shares into a standard validator private key is a security risk, and can potentially cause your validator to be slashed.
+
+Only combine private keys as a last resort and do so with extreme caution.
+:::
+
+Combine distributed validator private key shares into an Ethereum validator private key.
+
+## Pre-requisites
+
+- Ensure you have the `.charon` directories of at least a threshold of the cluster's node operators.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Set up the key combination directory tree
+
+Rename each cluster node operator's `.charon` directory to a distinct name to avoid folder name conflicts.
+
+We suggest naming them clearly and distinctly, to avoid confusion.
+
+At the end of this process, you should have a tree like this:
+
+```shell
+$ tree ./cluster
+
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+...
+└── node*
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
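+For example, assuming you have collected each operator's `.charon` folder into some working location, the tree above can be assembled like this (the source paths are illustrative):
+
+```sh
+# Build the ./cluster directory from each operator's backed-up .charon folder
+mkdir -p cluster
+cp -r /path/to/operator0/.charon cluster/node0
+cp -r /path/to/operator1/.charon cluster/node1
+cp -r /path/to/operator2/.charon cluster/node2
+```
+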
+:::warning
+Make sure to never mix the various `.charon` directories with one another.
+
+Doing so can potentially cause the combination process to fail.
+:::
+
+## Step 2. Combine the key shares
+
+Run the following command:
+
+```sh
+# Combine a clusters private keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.0 combine --cluster-dir /opt/charon/cluster --output-dir /opt/charon/combined
+```
+
+This command will store the combined keys in the `output-dir`, in this case a folder named `combined`.
+
+```shell
+$ tree combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+
+We can verify that the directory names are correct by looking at the lock file:
+
+```shell
+$ jq .distributed_validators[].distributed_public_key cluster/node0/cluster-lock.json
+"0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd"
+"0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106"
+```
+
+:::info
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+Ensure your distributed validator cluster is completely shut down before starting a replacement validator or you are likely to be slashed.
+:::
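+
+As a sketch of what comes next, the combined EIP-2335 keystores can be imported into a standard (non-distributed) validator client once the cluster is fully stopped. For example, with Lighthouse the import looks roughly like this; verify the exact flags and network for your own setup:
+
+```sh
+# Import the recombined keystores into a Lighthouse validator client
+# Only do this after the entire DV cluster has been shut down.
+lighthouse --network goerli account validator import --directory ./combined
+```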
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-sdk.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-sdk.md
new file mode 100644
index 0000000000..6573d3ecd1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-sdk.md
@@ -0,0 +1,133 @@
+---
+sidebar_position: 1
+description: Create a DV cluster using the Obol Typescript SDK
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Create a DV using the SDK
+
+:::warning
+The Obol-SDK is in a beta state and should be used with caution on testnets only.
+:::
+
+This is a walkthrough of using the [Obol-SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk) to propose a four-node distributed validator cluster for creation using the [DV Launchpad](../../../dvl/intro.md).
+
+## Pre-requisites
+
+- You have [node.js](https://nodejs.org/en) installed.
+
+## Install the package
+
+Install the Obol-SDK package into your development environment
+
+
+
+
+ npm install --save @obolnetwork/obol-sdk
+
+
+
+
+ yarn add @obolnetwork/obol-sdk
+
+
+
+
+## Instantiate the client
+
+The first thing you need to do is create an instance of the Obol SDK client. The client takes two constructor parameters:
+
+- The `chainID` for the chain you intend to use.
+- An ethers.js [signer](https://docs.ethers.org/v6/api/providers/#Signer-signTypedData) object.
+
+```ts
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// Create a dummy ethers signer object with a throwaway private key
+const mnemonic = ethers.Wallet.createRandom().mnemonic?.phrase || "";
+const privateKey = ethers.Wallet.fromPhrase(mnemonic).privateKey;
+const wallet = new ethers.Wallet(privateKey);
+const signer = wallet.connect(null);
+
+// Instantiate the Obol Client for goerli
+const obol = new Client({ chainId: 5 }, signer);
+```
+
+## Propose the cluster
+
+List the Ethereum addresses of participating operators, along with withdrawal and fee recipient address data for each validator you intend for the operators to create.
+
+```ts
+// A config hash is a deterministic hash of the proposed DV cluster configuration
+const configHash = await obol.createClusterDefinition({
+ name: "SDK Demo Cluster",
+ operators: [
+ { address: "0xC35CfCd67b9C27345a54EDEcC1033F2284148c81" },
+ { address: "0x33807D6F1DCe44b9C599fFE03640762A6F08C496" },
+ { address: "0xc6e76F72Ea672FAe05C357157CfC37720F0aF26f" },
+ { address: "0x86B8145c98e5BD25BA722645b15eD65f024a87EC" },
+ ],
+ validators: [
+ {
+ fee_recipient_address: "0x3CD4958e76C317abcEA19faDd076348808424F99",
+ withdrawal_address: "0xE0C5ceA4D3869F156717C66E188Ae81C80914a6e",
+ },
+ ],
+});
+
+console.log(
+ `Direct the operators to https://goerli.launchpad.obol.tech/dv?configHash=${configHash} to complete the key generation process`
+);
+```
+
+## Invite the Operators to complete the DKG
+
+Once the Obol-API returns a `configHash` string from the `createClusterDefinition` method, you can use this identifier to invite the operators to the [Launchpad](../../../dvl/intro.md) to complete the process.
+
+1. Operators navigate to `https://.launchpad.obol.tech/dv?configHash=` and complete the [run a DV with others](../group/quickstart-group-operator.md) flow.
+1. Once the DKG is complete, and provided the operators used the `--publish` flag, the created cluster details will be posted to the Obol API.
+1. The creator will be able to retrieve this data with `obol.getClusterLock(configHash)`, to use for activating the newly created validator.
+
+## Retrieve the created Distributed Validators using the SDK
+
+Once the DKG is complete, the proposer of the cluster can retrieve key data such as the validator public keys and their associated deposit data messages.
+
+```js
+const clusterLock = await obol.getClusterLock(configHash);
+```
+
+Reference lock files can be found [here](https://github.com/ObolNetwork/charon/tree/main/cluster/testdata).
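+
+If you would rather inspect a lock file directly (for example, one you have saved locally as `cluster-lock.json`), `jq` can pull out the same fields used below:
+
+```sh
+# List the distributed validator public keys in a locally saved lock file
+jq '.distributed_validators[].distributed_public_key' cluster-lock.json
+
+# Show the deposit data for the first validator
+jq '.distributed_validators[0].deposit_data' cluster-lock.json
+```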
+
+## Activate the DVs using the deposit contract
+
+In order to activate the distributed validators, the cluster operator can retrieve the validators' associated deposit data from the lock file and use it to craft transactions to the `deposit()` method on the deposit contract.
+
+```js
+const validatorDepositData =
+ clusterLock.distributed_validators[validatorIndex].deposit_data;
+
+const depositContract = new ethers.Contract(
+ DEPOSIT_CONTRACT_ADDRESS, // 0x00000000219ab540356cBB839Cbe05303d7705Fa for Mainnet, 0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b for Goerli
+ depositContractABI, // https://etherscan.io/address/0x00000000219ab540356cBB839Cbe05303d7705Fa#code for Mainnet, and replace the address for Goerli
+ signer
+);
+
+const TX_VALUE = ethers.parseEther("32");
+
+const tx = await depositContract.deposit(
+ validatorDepositData.pubkey,
+ validatorDepositData.withdrawal_credentials,
+ validatorDepositData.signature,
+ validatorDepositData.deposit_data_root,
+ { value: TX_VALUE }
+);
+
+const txResult = await tx.wait();
+```
+
+## Usage Examples
+
+Examples of how our SDK can be used are found [here](https://github.com/ObolNetwork/obol-sdk-examples).
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-split.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-split.md
new file mode 100644
index 0000000000..92330336f8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/quickstart-split.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 3
+description: Split existing validator keys
+---
+
+# Split existing validator private keys
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+
+This process should only be used if you want to split an _existing validator private key_ into multiple private key shares for use in a Distributed Validator Cluster. If your existing validator is not properly shut down before the Distributed Validator starts, your validator may be slashed.
+
+If you are starting a new validator, you should follow a [quickstart guide](../index.md) instead.
+:::
+
+Split an existing Ethereum validator key into multiple key shares for use in an [Obol Distributed Validator Cluster](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/key-concepts/README.md#distributed-validator-cluster).
+
+## Pre-requisites
+
+* Ensure you have the existing validator keystores (the ones to split) and passwords.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+## Step 1. Clone the charon repo and copy existing keystore files
+
+Clone the [charon](https://github.com/ObolNetwork/charon) repo.
+
+```sh
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon.git
+
+# Change directory
+cd charon/
+
+# Create a folder within this checked out repo
+mkdir split_keys
+```
+
+Copy the existing validator `keystore.json` files into this new folder. Alongside them, with a matching filename but ending with `.txt` should be the password to the keystore. E.g., `keystore-0.json` `keystore-0.txt`
+
+At the end of this process, you should have a tree like this:
+
+```shell
+├── split_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ ├── keystore-1.txt
+│ ...
+│ ├── keystore-*.json
+│ ├── keystore-*.txt
+```
+
+## Step 2. Split the keys using the charon docker command
+
+Run the following docker command to split the keys:
+
+```shell
+CHARON_VERSION= # E.g. v0.19.0
+CLUSTER_NAME= # The name of the cluster you want to create.
+WITHDRAWAL_ADDRESS= # The address you want to use for withdrawals.
+FEE_RECIPIENT_ADDRESS= # The address you want to use for fee payments.
+NODES= # The number of nodes in the cluster.
+
+docker run --rm -v $(pwd):/opt/charon obolnetwork/charon:${CHARON_VERSION} create cluster --name="${CLUSTER_NAME}" --withdrawal-addresses="${WITHDRAWAL_ADDRESS}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDRESS}" --split-existing-keys --split-keys-dir=/opt/charon/split_keys --nodes ${NODES} --network goerli
+```
+
+The above command will create `validator_keys` along with `cluster-lock.json` in `./.charon/cluster` for each node.
+
+Command output:
+
+```shell
+***************** WARNING: Splitting keys **********************
+ Please make sure any existing validator has been shut down for
+ at least 2 finalised epochs before starting the charon cluster,
+ otherwise slashing could occur.
+****************************************************************
+
+Created charon cluster:
+ --split-existing-keys=true
+
+.charon/cluster/
+├─ node[0-*]/ Directory for each node
+│ ├─ charon-enr-private-key Charon networking private key for node authentication
+│ ├─ cluster-lock.json Cluster lock defines the cluster lock file which is signed by all nodes
+│ ├─ validator_keys Validator keystores and password
+│ │ ├─ keystore-*.json Validator private share key for duty signing
+│ │ ├─ keystore-*.txt Keystore password files for keystore-*.json
+```
+
+These split keys can now be used to start a charon cluster.
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/self-relay.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/self-relay.md
new file mode 100644
index 0000000000..ab18fc122c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/advanced/self-relay.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 7
+description: Self-host a relay
+---
+
+# Self-Host a Relay
+
+If you are experiencing connectivity issues with the Obol hosted relays, or you want to improve your cluster's latency and decentralization, you can opt to host your own relay on a separate open and static internet port.
+
+```
+# Figure out your public IP
+curl v4.ident.me
+
+# Clone the repo and cd into it.
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+cd charon-distributed-validator-node
+
+# Replace 'replace.with.public.ip.or.hostname' in relay/docker-compose.yml with your public IPv4 or DNS hostname
+
+nano relay/docker-compose.yml
+
+docker compose -f relay/docker-compose.yml up
+```
+
+Test whether the relay is publicly accessible. This should return an ENR:
+`curl http://replace.with.public.ip.or.hostname:3640/enr`
+
+Ensure the ENR returned by the relay contains the correct public IP and port by decoding it with https://enr-viewer.com/.
+
+Configure **ALL** charon nodes in your cluster to use this relay:
+
+- Either by adding a flag: `--p2p-relays=http://replace.with.public.ip.or.hostname:3640/enr`
+- Or by setting the environment variable: `CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr`
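+
+For example, if your nodes run from the `charon-distributed-validator-node` repo, the environment-variable form can be added to each node's `.env` file. Docker compose reads this file automatically, but whether the variable is wired through to the charon service depends on your compose file, so treat this as a sketch:
+
+```sh
+# In each node's CDVN checkout, point charon at your self-hosted relay
+echo 'CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr' >> .env
+docker compose up -d
+```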
+
+Note that a local `relay/.charon/charon-enr-private-key` file will be created next to `relay/docker-compose.yml` to ensure a persisted relay ENR across restarts.
+
+A list of publicly available relays that can be used is maintained [here](../../faq/risks.md#risk-obol-hosting-the-relay-infrastructure).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/README.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/README.md
new file mode 100644
index 0000000000..f7eb065fd3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/README.md
@@ -0,0 +1,2 @@
+# alone
+
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/create-keys.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/create-keys.md
new file mode 100644
index 0000000000..29df885895
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/create-keys.md
@@ -0,0 +1,58 @@
+---
+sidebar_position: 2
+description: Run all nodes in a distributed validator cluster
+---
+
+# create-keys
+
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+## Create the private key shares
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+:::info
+Running a Distributed Validator alone means that a single operator manages all of the nodes of the DV. Depending on the operator's security preferences, the private key shares can be created centrally and distributed securely to each node. This is the focus of the guide below.
+
+Alternatively, the private key shares can be created in a lower-trust manner with a [Distributed Key Generation](../../key-concepts.md#distributed-validator-key-generation-ceremony) process, which avoids the validator private key being stored in full anywhere, at any point in its lifecycle. Follow the [group quickstart](../group/index.md) instead for this latter case.
+:::
+
+### Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+### Create the key shares locally
+
+Create the artifacts needed to run a DV cluster by first setting the following inputs for the DV. Check the [Charon CLI reference](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/charon/charon-cli-reference/README.md) for additional optional flags to set.
+
+```
+WITHDRAWAL_ADDR=[ENTER YOUR WITHDRAWAL ADDRESS HERE]
+FEE_RECIPIENT_ADDR=[ENTER YOUR FEE RECIPIENT ADDRESS HERE]
+NB_NODES=[ENTER AMOUNT OF DESIRED NODES]
+NETWORK="goerli"
+```
+
+Then, run this command to create all the key shares and cluster artifacts locally:
+
+```
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.0 create cluster --name="Quickstart Cluster" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes="${NB_NODES}" --network="${NETWORK}" --num-validators=1 --cluster-dir="cluster"
+```
+
+Go to the [Obol Goerli DV Launchpad](https://goerli.launchpad.obol.tech) and select `Create a distributed validator alone`. Follow the steps to configure your DV cluster.
+
+After successful completion, a subdirectory `cluster/` should be created. In it are as many folders as nodes of the cluster. Each folder contains charon artifacts and partial private keys needed for each node of the cluster.
+
+Once you have made a backup of the `cluster/` folder, you can move to [deploying this cluster physically](deploy.md).
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/deploy.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/deploy.md
new file mode 100644
index 0000000000..3438d40d31
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/deploy.md
@@ -0,0 +1,59 @@
+---
+sidebar_position: 3
+description: Move the private key shares to the nodes and run the cluster
+---
+
+# Deploy the cluster
+
+To distribute your cluster physically and start the DV, each node in the cluster needs one of the folders called `node*/` within the output of the `create cluster` command. These folders should be copied to a `charon-distributed-validator-node` (CDVN) repo, and the folder renamed from `node0/` to `.charon/`. (Or you can override `charon run`'s default file locations.)
+
+```log
+
+cluster
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ └── keystore-0.txt
+
+```
+
+After copying and renaming, each node's `.charon/` folder should look like this:
+
+```log
+└── .charon
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── ...
+ ├── keystore-N.json
+ └── keystore-N.txt
+```
+
+:point\_right: Use the single node [docker compose](https://github.com/ObolNetwork/charon-distributed-validator-node), the kubernetes [manifests](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node), or the [helm chart](https://github.com/ObolNetwork/helm-charts) example repos to get your nodes up and connected after loading the `.charon` folder artifacts into them appropriately.
+
+:::warning
+Right now, the `charon create cluster` command [used earlier to create the private keys](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/alone/create-keys/README.md) outputs a folder structure like `cluster/node*/`. Make sure to grab the `./node*/` folders, _rename_ them to `.charon` and then move them to one of the single node repos above. Once all nodes are online, synced, and connected, you will be ready to activate your validator.
+:::
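+
+For example, moving `node0`'s artifacts into a fresh checkout of the single node repo might look like this (paths are illustrative; repeat for each node on its own machine):
+
+```sh
+# On the machine that will run node0
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+cp -r cluster/node0 charon-distributed-validator-node/.charon
+```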
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/test-locally.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/test-locally.md
new file mode 100644
index 0000000000..7f1bb6aa12
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/alone/test-locally.md
@@ -0,0 +1,81 @@
+---
+sidebar_position: 1
+description: Test the solo cluster locally
+---
+
+# Run a test cluster locally
+:::warning
+This is a demo repo to understand how Distributed Validators work and **is not suitable for a mainnet deployment**.
+
+This guide only runs one Execution Client, one Consensus Client, and 6 Distributed Validator Charon Client + Validator Client pairs on a single docker instance. As a consequence, if this machine fails, there will not be fault tolerance.
+
+Follow these two guides sequentially instead for production deployment: [create keys centrally](./create-keys.md) and [how to deploy them](./deploy.md).
+:::
+
+The [`charon-distributed-validator-cluster`](https://github.com/ObolNetwork/charon-distributed-validator-cluster) repo contains six charon clients in separate docker containers along with an execution client and consensus client, simulating a Distributed Validator cluster running.
+
+The default cluster consists of:
+- [Nethermind](https://github.com/NethermindEth/nethermind), an execution layer client
+- [Lighthouse](https://github.com/sigp/lighthouse), a consensus layer client
+- Six [charon](https://github.com/ObolNetwork/charon) nodes
+- A mixture of validator clients:
+  - VC0: [Lighthouse](https://github.com/sigp/lighthouse)
+  - VC1: [Teku](https://github.com/ConsenSys/teku)
+  - VC2: [Nimbus](https://github.com/status-im/nimbus-eth2)
+  - VC3: [Lighthouse](https://github.com/sigp/lighthouse)
+  - VC4: [Teku](https://github.com/ConsenSys/teku)
+  - VC5: [Nimbus](https://github.com/status-im/nimbus-eth2)
+
+## Pre-requisites
+
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Ensure you have [git](https://git-scm.com/downloads) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Create the key shares locally
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+ `.env.sample` is a sample environment file that allows overriding default configuration defined in `docker-compose.yml`. Uncomment and set any variable to override its value.
+
+3. Create the artifacts needed to run a DV cluster by running the following command:
+
+ ```sh
+ # Enter required validator addresses
+ WITHDRAWAL_ADDR=
+ FEE_RECIPIENT_ADDR=
+
+ # Create a distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.0 create cluster --name="mycluster" --cluster-dir=".charon/cluster/" --withdrawal-addresses="${WITHDRAWAL_ADDR}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDR}" --nodes 6 --network goerli --num-validators=1
+ ```
+
+These commands will create six folders within `.charon/cluster`, one for each node created. You will need to rename `node*` to `.charon` for each folder to be found by the default `charon run` command, or you can use `charon run --private-key-file=".charon/cluster/node0/charon-enr-private-key" --lock-file=".charon/cluster/node0/cluster-lock.json"` for each instance of charon you start.
+
+## Start the cluster
+
+Run this command to start your cluster containers
+
+```sh
+# Start the distributed validator cluster
+docker compose up --build
+```
+Check the monitoring dashboard and see if things look all right
+
+```sh
+# Open Grafana
+open http://localhost:3000/d/laEp8vupp
+```
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/group/README.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/README.md
new file mode 100644
index 0000000000..56f83ad21c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/README.md
@@ -0,0 +1,2 @@
+# group
+
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/group/index.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/index.md
new file mode 100644
index 0000000000..d49a11896d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/index.md
@@ -0,0 +1,12 @@
+# Run a cluster as a group
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+:::info
+Running a Distributed Validator with others typically means that several operators run the various nodes of the cluster. In such a case, the key shares should be created with a [distributed key generation process](../../key-concepts.md#distributed-validator-key-generation-ceremony), avoiding the private key being stored in full, anywhere.
+:::
+
+There are two sequential user journeys when setting up a DV cluster with others. Each comes with its own quickstart:
+
+1. The [**Creator** (**Leader**) Journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/group/group/quickstart-group-leader-creator/README.md), which outlines the steps to propose a Distributed Validator Cluster.
+ * In the **Creator** case, the person creating the cluster _will NOT_ be a node operator in the cluster.
+ * In the **Leader** case, the person creating the cluster _will_ be a node operator in the cluster.
+2. The [**Operator** Journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/group/group/quickstart-group-operator/README.md) which outlines the steps to create a Distributed Validator Cluster proposed by a leader or creator using the above process.
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-cli.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-cli.md
new file mode 100644
index 0000000000..48cc1a4910
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-cli.md
@@ -0,0 +1,124 @@
+---
+sidebar_position: 3
+description: Run one node in a multi-operator distributed validator cluster using the CLI
+---
+
+# Using the CLI
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster via the CLI.
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+* Decide who the Leader or Creator of your cluster will be. Only they need to perform [step 2](quickstart-cli.md#step-2-leader-creates-the-dkg-configuration-file-and-distributes-it-to-everyone-else) and [step 5](quickstart-cli.md#step-5-activate-the-deposit-data) in this quickstart. They do not get any special privilege.
+ * In the **Leader** case, the operator creating the cluster will also operate a node in the cluster.
+  * In the **Creator** case, the cluster is created by a party external to the cluster.
+
+## Step 1. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, all operators (including the leader but NOT a creator) need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/faq/errors.mdx) for their charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.0 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+:::warning
+Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.**
+:::
+
+Finally, share your ENR with the leader or creator so that they can proceed to Step 2.
+
+## Step 2. Leader or Creator creates the DKG configuration file and distributes it to cluster operators
+
+1. The leader or creator of the cluster will prepare the `cluster-definition.json` file for the Distributed Key Generation ceremony using the `charon create dkg` command.
+
+```
+# Prepare an environment variable file
+cp .env.create_dkg.sample .env.create_dkg
+```
+
+2. Populate the `.env.create_dkg` file you just created with the cluster name, the fee recipient and withdrawal Ethereum addresses, and the ENRs of all the operators participating in the cluster.
+ * The file generated is hidden by default. To view it, run `ls -al` in your terminal. Else, if you are on `macOS`, press `Cmd + Shift + .` to view all hidden files in the finder application.
+3. Run the `charon create dkg` command to generate the DKG `cluster-definition.json` file.
+
+```
+docker run --rm -v "$(pwd):/opt/charon" --env-file .env.create_dkg obolnetwork/charon:v0.19.0 create dkg
+```
+
+This command should output a file at `.charon/cluster-definition.json`. This file needs to be shared with the other operators in the cluster.
+
+## Step 3. Run the DKG
+
+After receiving the `cluster-definition.json` file created by the leader, cluster operators should ideally save it in the `.charon/` folder that was created during step 1; alternatively, the `--definition-file` flag can override the default expected location for this file.
+
+Every cluster member then participates in the DKG ceremony. For Charon v1, this needs to happen relatively synchronously between participants at an agreed time.
+
+```
+# Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.0 dkg
+```
+
+> This is a helpful [video walkthrough](https://www.youtube.com/watch?v=94Pkovp5zoQ\&ab_channel=ObolNetwork).
+
+Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
+
+* A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+* A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+* A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
+
+:::warning
+Please make sure to create a backup of `.charon/validator_keys`. **If you lose your keys you won't be able to start the DV cluster successfully.**
+:::
+
+:::info
+The `cluster-lock` and `deposit-data` files are identical for each operator and can be copied if lost.
+:::
+
+## Step 4. Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
+
+**Caution**: If you manually update `docker-compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It's best not to do this, as `lighthouse` checkpoint-syncs and so syncing doesn't take much time.
+
+**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
+
+```
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up
+
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+You should use the grafana dashboard to infer whether your cluster is healthy. In particular you should check:
+
+* That your charon client can connect to the configured beacon client.
+* That your charon client can connect to all peers.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually \~16 hours after the deposit is made).
+
+If at any point you need to turn off your node, you can run:
+
+```
+# Shut down the currently running distributed validator node
+docker compose down
+```
+
+:::tip
+In a Distributed Validator Cluster, it is important to have a low latency connection to your peers. Charon clients will attempt NAT hole punching to establish a direct connection to one another automatically. If this doesn't happen, you should port forward charon's p2p port to the public internet to facilitate direct connections. (The default port to expose is `:3610`.) Read more about charon's networking [here](../../../charon/networking.md).
+:::
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-leader-creator.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-leader-creator.md
new file mode 100644
index 0000000000..7416669b19
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-leader-creator.md
@@ -0,0 +1,125 @@
+---
+sidebar_position: 1
+description: A leader/creator creates a cluster configuration to be shared with operators
+---
+
+# quickstart-group-leader-creator
+
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+## Creator & Leader Journey
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+The following instructions aim to assist with the preparation of a distributed validator key generation ceremony. Select the _Leader_ tab if you **will** be an operator participating in the cluster, and select the _Creator_ tab if you **will NOT** be an operator in the cluster.
+
+These roles hold no position of privilege in the cluster, they only set the initial terms of the cluster that the other operators agree to.
+
+As a Leader, the person creating the cluster will be a node operator in the cluster; as a Creator, they will not.
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+
+### Overview Video
+
+### Step 1. Collect Ethereum addresses of the cluster operators
+
+Before starting the cluster creation, you will need to collect one Ethereum address per operator in the cluster. They will need to be able to sign messages through metamask with this address. Broader wallet support will be added in future.
+
+### Step 2. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, you need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/faq/errors.mdx#enrs-keys) for your charon client. Operators in your cluster will also need to do this step, as per their [quickstart](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-operator/README.md#step-2-create-and-back-up-a-private-key-for-charon). This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.0 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update docker permissions](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/faq/errors/README.md#docker-permission-denied-error) to allow the command to run successfully.
+
+:::warning
+Ensure you create a backup of the private key stored in the `.charon` folder, specifically at `.charon/charon-enr-private-key`. This file is your charon client's private key. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.**
+:::
+
+If you are a Creator, this step is not needed and you can move on to [Step 3](quickstart-group-leader-creator.md#step-3-create-the-dkg-configuration-file-and-distribute-it-to-cluster-operators).
+
+### Step 3. Create the DKG configuration file and distribute it to cluster operators
+
+You will prepare the configuration file for the distributed key generation ceremony using the launchpad.
+
+1. Go to the [DV Launchpad](https://goerli.launchpad.obol.tech)
+2. Connect your wallet
+
+
+
+3. Select `Create a Cluster with a group` then `Get Started`.
+
+
+
+4. Follow the flow and accept the advisories.
+5. Configure the Cluster
+
+    * Input the `Cluster Name` & `Cluster Size` (i.e. number of operators in the cluster). The threshold for the cluster to operate successfully will update automatically.
+    * ⚠️ Leave the `Non-Operator` toggle OFF if you are a Leader (you will operate a node), or turn it ON if you are a Creator (you will not).
+ * Input the Ethereum addresses for each operator collected during [step 1](quickstart-group-leader-creator.md#step-1-collect-ethereum-addresses-of-the-cluster-operators).
+ * Select the desired amount of validators (32 ETH each) the cluster will run.
+ * Paste your `ENR` generated at [Step 2](quickstart-group-leader-creator.md#step-2-create-and-back-up-a-private-key-for-charon).
+ * Select the `Withdrawal Addresses` method. Use `Single address` to receive the principal and fees to a single address or `Splitter Contracts` to share them among operators.
+ * Enter the `Withdrawal Address` that will receive the validator effective balance at exit and when balance skimming occurs.
+ * Enter the `Fee Recipient Address` to receive MEV rewards (if enabled), and block proposal priority fees.
+    * You can set them to be the same as your connected wallet address in one click.
+
+ * Enter the Ethereum address to claim the validator principal (32 ether) at exit.
+ * Enter the Ethereum addresses and their percentage split of the validator's rewards. Validator rewards include consensus rewards, MEV rewards and proposal priority fees.
+
+ * Click `Create Cluster Configuration`
+
+6. Review the cluster configuration and, if prompted, deploy the Obol Splits contracts by signing the transaction with your wallet.
+7. You will be asked to confirm your configuration and to sign:
+   * The `config_hash`. This is a hashed representation of the details of this cluster, to ensure everyone is agreeing to an identical setup.
+   * The `operator_config_hash`. This is your acceptance of the terms as a participating node operator (Leaders only).
+   * Your `ENR`. Signing your ENR authorises the corresponding private key to act on your behalf in the cluster (Leaders only; a Creator signs only the `config_hash`).
+
+8. Share your cluster invite link with the operators. Following the link will show you a screen waiting for other operators to accept the configuration you created.
+
+
+
+👉 If you are a Leader: once every participating operator has signed their approval to the terms, you will continue the [**Operator** journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-operator/README.md#step-3-run-the-dkg) by completing the distributed key generation step.
+
+If you are a Creator: your journey ends here, and you can use the link to monitor whether the operators confirm their agreement to the cluster by signing their approval. Future versions of the launchpad will allow a creator to track a distributed validator's lifecycle in its entirety.
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-operator.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-operator.md
new file mode 100644
index 0000000000..f2c7adb59e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-operator.md
@@ -0,0 +1,134 @@
+---
+sidebar_position: 1
+description: A node operator joins a DV cluster
+---
+
+# Operator Journey
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+The following instructions aim to assist a group of operators coordinating together to create a distributed validator cluster after receiving a cluster invite link from a leader or creator.
+
+## Overview Video
+
+## Pre-requisites
+
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Make sure `docker` is running before executing the commands below.
+
+## Step 1. Share an Ethereum address with your Leader or Creator
+
+Before starting the cluster creation, make sure you have shared an Ethereum address with your cluster **Leader** or **Creator**. If you haven't chosen someone as a Leader or Creator yet, please go back to the [Quickstart intro](index.md) and designate one person to go through the [Leader & Creator Journey](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/group/quickstart-group-leader-creator/README.md) before moving forward.
+
+## Step 2. Create and back up a private key for charon
+
+In order to prepare for a distributed key generation ceremony, you need to create an [ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/faq/errors.mdx#enrs-keys) for your charon client. This ENR is a public/private key pair, and allows the other charon clients in the DKG to identify and connect to your node.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+
+# Create your charon ENR private key, this will create a charon-enr-private-key file in the .charon directory
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.0 create enr
+```
+
+You should expect to see a console output like
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update docker permissions](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/faq/errors/README.md#docker-permission-denied-error) to allow the command to run successfully.
+
+:::warning
+Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony and start the DV cluster successfully.**
+:::
+
+## Step 3. Join and sign the cluster configuration
+
+After receiving the invite link created by the **Leader** or **Creator**, you will be able to join and sign the cluster configuration created.
+
+1. Go to the DV launchpad link provided by the leader or creator.
+2. Connect your wallet using the Ethereum address provided to the leader in [step 1](quickstart-group-operator.md#step-1-share-an-ethereum-address-with-your-leader-or-creator).
+
+
+
+3. Review the operator addresses submitted and click `Get Started` to continue.
+
+
+
+4. Review and accept the advisories.
+5. Review the configuration created by the leader or creator and add your `ENR` generated in [step 2](quickstart-group-operator.md#step-2-create-and-back-up-a-private-key-for-charon).
+
+
+
+6. Sign the following with your wallet
+ * The config hash. This is a hashed representation of all of the details for this cluster.
+ * Your own `ENR`. This signature authorises the key represented by this ENR to act on your behalf in the cluster.
+7. Wait for all the other operators in your cluster to do the same.
+
+## Step 4. Run the DKG
+
+:::info
+For the [DKG](../../../charon/dkg.md) to complete, all operators need to be running the command simultaneously. It helps to coordinate an agreed upon time amongst operators at which to run the command.
+:::
+
+### Overview
+
+1. Once all operators successfully signed, your screen will automatically advance to the next step and look like this. Click `Continue`. If you closed the tab, just go back to the invite link shared by the leader and connect your wallet.
+
+
+
+2. You have two options to perform the DKG.
+
+ 1. **Option 1** and default is to copy and run the `docker` command on the screen into your terminal. It will retrieve the remote cluster details and begin the DKG process.
+ 2. **Option 2** (Manual DKG) is to download the `cluster-definition` file manually and move it to the hidden `.charon` folder. Then, every cluster member participates in the DKG ceremony by running the command displayed.
+
+ 
+3. Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder. These include:
+ * A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+ * A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+ * A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
+
+:::warning
+Please make sure to create a backup of `.charon/validator_keys`. **If you lose your keys you won't be able to start the DV cluster successfully.**
+:::
+
+:::info
+The `cluster-lock` and `deposit-data` files are identical for each operator and can be copied if lost.
+:::
+
+## Step 5. Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term. This repo is configured to sync an execution layer client (`geth`) and a consensus layer client (`lighthouse`).
+
+**Caution**: If you manually update `docker compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It's best not to do this, as `lighthouse` checkpoint-syncs and so syncing doesn't take much time.
+
+**Note**: If you have a `geth` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/geth`. This makes everything faster since you start from a synced geth node.
+
+```
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up
+
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+You should use the grafana dashboard to infer whether your cluster is healthy. In particular you should check:
+
+* That your charon client can connect to the configured beacon client.
+* That your charon client can connect to all peers directly.
+* That your validator client is connected to charon, and has the private keys it needs loaded and accessible.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually \~16 hours after the deposit is made).
+
+If at any point you need to turn off your node, you can run:
+
+```
+# Shut down the currently running distributed validator node
+docker compose down
+```
+
+:::tip
+In a Distributed Validator Cluster, it is important to have a low latency connection to your peers. Charon clients will attempt NAT hole punching to establish a direct connection to one another automatically. If this doesn't happen, you should port forward charon's p2p port to the public internet to facilitate direct connections. (The default port to expose is `:3610`.) Read more about charon's networking [here](../../../charon/networking.md).
+:::
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/index.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/index.md
new file mode 100644
index 0000000000..ece2236a08
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/index.md
@@ -0,0 +1,8 @@
+# Quickstart Guides
+
+:::warning Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+There are two ways to set up a distributed validator and each comes with its own quickstart guide:
+
+1. [Run a DV cluster as a **group**](group/index.md), where several operators run the nodes that make up the cluster. In this setup, the key shares are created using a distributed key generation process, avoiding the full private keys ever being stored in any one place. This approach can also be used by single operators looking to manage all nodes of a cluster while creating the key shares in a trust-minimised fashion.
+2. [Run a DV cluster **alone**](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/quickstart/alone/create-keys/README.md), where a single operator runs all the nodes of the DV. Depending on trust assumptions, there is not necessarily a need to create the key shares via a DKG process. Instead, the key shares can be created in a centralised manner and distributed securely to the nodes.
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/quickstart-exit.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/quickstart-exit.md
new file mode 100644
index 0000000000..26d217e400
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/quickstart-exit.md
@@ -0,0 +1,261 @@
+---
+sidebar_position: 6
+description: Exit a validator
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Exit a DV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+Users looking to exit staking entirely and withdraw their full balance must sign and broadcast a "voluntary exit" message with their validator keys, which starts the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas. In the case of a DV, each charon node needs to broadcast a partial exit to the other nodes of the cluster. Once a threshold of partial exits has been received by any node, the full voluntary exit will be sent to the beacon chain.
+
+This process will take 27 hours or longer, depending on the current length of the exit queue.
+
+:::info
+
+- A threshold of operators needs to run the exit command for the exit to succeed.
+- If a charon client restarts after the exit command is run but before the threshold is reached, it will lose the partial exits it has received from the other nodes. If all charon clients restart and thus all partial exits are lost before the required threshold of exit messages is received, operators will have to rebroadcast their partial exit messages.
+ :::
+
+## Run the `voluntary-exit` command on your validator client
+
+Run the appropriate command on your validator client to broadcast an exit message from your validator client to its upstream charon client.
+
+It needs to be the validator client that is connected to your charon client taking part in the DV, as you are only signing a partial exit message with a partial private key share, which your charon client will combine with the partial exit messages from the other operators.
+
+:::info
+
+- All operators need to use the same `EXIT_EPOCH` for the exit to be successful. Assuming you want to exit as soon as possible, the default epoch values included in the commands below should be sufficient.
+- Partial exits can be broadcast by any validator client, as long as the total received reaches the threshold for the cluster.
+ :::
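+
+If you prefer to target the current epoch rather than the defaults above, you can derive it from your beacon node's head slot. A sketch using the standard beacon API (the endpoint URL is an assumption; adjust it to your setup):
+
+```
+# Fetch the head slot and convert it to an epoch (32 slots per epoch)
+SLOT=$(curl -s http://localhost:5052/eth/v1/beacon/headers/head | jq -r '.data.header.message.slot')
+echo $((SLOT / 32))
+```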
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=162304`}
+
+
+
+
+ The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the Keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+ For each file in the `/home/user/data/wd/secrets` directory, it:
+
+ - Extracts the filename without the extension, as the file name is the public key.
+ - Appends `--validator=$filename` to the `command` variable.
+
+ It then executes a program called `nimbus_beacon_node` with the following arguments:
+
+ - `deposits exit`: Exits validators.
+ - `$command`: The generated command string from the loop.
+ - `--epoch=162304`: The epoch upon which to submit the voluntary exit.
+ - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+ - `--data-dir=/home/user/data/wd/`: Specifies the Keystore path which has all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=162304 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+ The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+ - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+ - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+ - `--exitEpoch=162304`: The epoch upon which to submit the voluntary exit.
+ - `--network=goerli`: Specifies the network.
+ - `--yes`: Skips the confirmation prompt.
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=162304 --network=goerli --yes'`}
+
+
+
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=256`}
+
+
+
+
+ The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the Keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+ For each file in the `/home/user/data/wd/secrets` directory, it:
+
+ - Extracts the filename without the extension, as the file name is the public key.
+ - Appends `--validator=$filename` to the `command` variable.
+
+ It then executes a program called `nimbus_beacon_node` with the following arguments:
+
+ - `deposits exit`: Exits validators.
+ - `$command`: The generated command string from the loop.
+ - `--epoch=256`: The epoch upon which to submit the voluntary exit.
+ - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+ - `--data-dir=/home/user/data/wd/`: Specifies the Keystore path which has all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=256 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+ The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+ - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+ - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+ - `--exitEpoch=256`: The epoch upon which to submit the voluntary exit.
+ - `--network=holesky`: Specifies the network.
+ - `--yes`: Skips the confirmation prompt.
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=256 --network=holesky --yes'`}
+
+
+
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=194048`}
+
+
+
+
+ The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the Keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+ For each file in the `/home/user/data/wd/secrets` directory, it:
+
+ - Extracts the filename without the extension, as the file name is the public key.
+ - Appends `--validator=$filename` to the `command` variable.
+
+ It then executes a program called `nimbus_beacon_node` with the following arguments:
+
+ - `deposits exit`: Exits validators.
+ - `$command`: The generated command string from the loop.
+ - `--epoch=194048`: The epoch upon which to submit the voluntary exit.
+ - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+ - `--data-dir=/home/user/data/wd/`: Specifies the Keystore path which has all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=194048 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+ The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+ - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+ - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+ - `--exitEpoch=194048`: The epoch upon which to submit the voluntary exit.
+ - `--network=mainnet`: Specifies the network.
+ - `--yes`: Skips the confirmation prompt.
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=194048 --network=mainnet --yes'`}
+
+
+
+
+
+
+
+Once a threshold of exit signatures has been received by any single charon client, it will craft a valid voluntary exit message and will submit it to the beacon chain for inclusion. You can monitor partial exits stored by each node in the [Grafana Dashboard](https://github.com/ObolNetwork/charon-distributed-validator-node).
+
+## Exit epoch and withdrawable epoch
+
+The process of a validator exiting from staking takes variable amounts of time, depending on how many others are exiting at the same time.
+
+Immediately upon broadcasting a signed voluntary exit message, the exit epoch and withdrawable epoch values are calculated based on the current epoch number. These values determine exactly when the validator will no longer be required to be online performing validation, and when the validator is eligible for a full withdrawal, respectively.
+
+1. Exit epoch - epoch at which your validator is no longer active, no longer earning rewards, and is no longer subject to slashing rules.
+ :::warning
+ Up until this epoch (while "in the queue") your validator is expected to be online and is held to the same slashing rules as always. Do not turn your DV node off until this epoch is reached.
+ :::
+2. Withdrawable epoch - epoch at which your validator funds are eligible for a full withdrawal during the next validator sweep.
+ This occurs 256 epochs after the exit epoch, which takes ~27.3 hours.
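+
+For reference, the ~27.3 hour figure follows directly from the epoch timing (32 slots per epoch, 12 seconds per slot):
+
+```
+# 256 epochs x 32 slots/epoch x 12 seconds/slot
+echo $((256 * 32 * 12))   # 98304 seconds, roughly 27.3 hours
+```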
+
+## How to verify a validator exit
+
+Consult the examples below and compare them to your validator's monitoring to verify that exits from each operator in the cluster are being received. This example is a cluster of 4 nodes with 2 validators, where a threshold of 3 nodes broadcasting exits is needed.
+
+1. Operator 1 broadcasts an exit on validator client 1.
+ 
+ 
+2. Operator 2 broadcasts an exit on validator client 2.
+ 
+ 
+3. Operator 3 broadcasts an exit on validator client 3.
+ 
+ 
+
+At this point, the threshold of 3 has been reached and the validator exit process will start. The logs will show the following:
+
+
+:::tip
+Once a validator has broadcast an exit message, it must continue to validate for at least 27 hours, and possibly longer depending on the exit queue. Do not shut off your distributed validator nodes until your validator has fully exited.
+:::
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/quickstart-mainnet.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/quickstart-mainnet.md
new file mode 100644
index 0000000000..ff293cc770
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/quickstart-mainnet.md
@@ -0,0 +1,103 @@
+---
+sidebar_position: 7
+description: Run a cluster on mainnet
+---
+
+# Run a DV on mainnet
+
+:::warning Charon is in a beta state, and **you should proceed only if you accept the risk, the** [**terms of use**](https://obol.tech/terms.pdf)**, and have tested running a Distributed Validator on a testnet first**.
+
+Distributed Validators created for goerli cannot be used on mainnet and vice versa. Please take caution when creating, backing up, and activating mainnet validators. Incorrect usage may result in a loss of funds. :::
+
+This section is intended for users who wish to run their Distributed Validator on Ethereum mainnet.
+
+## Pre-requisites
+
+* You have [enough up-to-spec nodes](../key-concepts.md#distributed-validator-threshold) for your mainnet deployment.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed on each node.
+* Ensure you have [git](https://git-scm.com/downloads) installed on each node.
+* Make sure `docker` is running before executing the commands below.
+
+## Steps
+
+### Using charon-distributed-validator-node in full
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) repo and `cd` into the directory.
+
+```sh
+# Clone this repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node
+```
+
+2. If you have cloned the repo previously, make sure that it is [up-to-date](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/int/quickstart/update/README.md).
+3. Copy the `.env.sample.mainnet` file to `.env`
+
+```
+cp -n .env.sample.mainnet .env
+```
+
+4. Run the docker compose file
+
+```
+docker compose up -d
+```
+
+Once your clients can connect and sync appropriately, your DV stack is mainnet ready 🎉
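+
+To keep an eye on client syncing, you can tail the logs of the execution and consensus containers; a sketch assuming the compose services are named `geth` and `lighthouse`, as in the override file shown further below:
+
+```
+# Follow the sync progress of the execution and consensus clients
+docker compose logs -f geth lighthouse
+```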
+
+### Using a remote mainnet beacon node
+
+:::warning Using a remote beacon node will impact the performance of your Distributed Validator and should be used sparingly. :::
+
+If you already have a mainnet beacon node running somewhere and you want to use it instead of running the EL (`geth`) and CL (`lighthouse`) clients included in this repo, you can disable these images. To do so, follow these steps:
+
+1. Copy the `docker-compose.override.yml.sample` file
+
+```
+cp -n docker-compose.override.yml.sample docker-compose.override.yml
+```
+
+2. Uncomment the `profiles: [disable]` section for both `geth` and `lighthouse`. The override file should now look like this:
+
+```
+services:
+ geth:
+ # Disable geth
+ profiles: [disable]
+ # Bind geth internal ports to host ports
+ #ports:
+ #- 8545:8545 # JSON-RPC
+ #- 8551:8551 # AUTH-RPC
+ #- 6060:6060 # Metrics
+
+ lighthouse:
+ # Disable lighthouse
+ profiles: [disable]
+ # Bind lighthouse internal ports to host ports
+ #ports:
+ #- 5052:5052 # HTTP
+ #- 5054:5054 # Metrics
+...
+```
+
+3. Then, uncomment and set the `CHARON_BEACON_NODE_ENDPOINTS` variable in the `.env` file to your mainnet beacon node's URL:
+
+```
+...
+# Connect to one or more external beacon nodes. Use a comma separated list excluding spaces.
+CHARON_BEACON_NODE_ENDPOINTS=
+...
+```
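+
+For example, with a hypothetical beacon node reachable at `http://203.0.113.10:5052` (replace this with your own endpoint, or a comma-separated list of endpoints):
+
+```
+CHARON_BEACON_NODE_ENDPOINTS=http://203.0.113.10:5052
+```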
+
+4. Restart your docker compose
+
+```
+docker compose down
+docker compose up -d
+```
+
+### Exit a mainnet Distributed Validator
+
+If you want to exit your mainnet validator, refer to our [exit guide](quickstart-exit.md).
diff --git a/docs/versioned_docs/version-v0.19.0/int/quickstart/update.md b/docs/versioned_docs/version-v0.19.0/int/quickstart/update.md
new file mode 100644
index 0000000000..e6ca215bec
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/int/quickstart/update.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 5
+description: Update your DV cluster with the latest Charon release
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Update a DV
+
+It is highly recommended to upgrade your DV stack from time to time. This ensures that your node is secure, performant, and up-to-date, and that you don't miss important hard forks.
+
+To do this, follow these steps:
+
+### Navigate to the node directory
+
+
+
+
+
+ cd charon-distributed-validator-node
+
+
+
+
+
+
+
+ cd charon-distributed-validator-cluster
+
+
+
+
+
+### Pull latest changes to the repo
+```
+git pull
+```
+
+### Create (or recreate) your DV stack
+```
+docker compose up -d --build
+```
+:::warning
+If you run more than one node in a DV cluster, please take caution when upgrading them simultaneously, particularly if you are updating or changing the validator client used or recreating disks. It is recommended to update nodes sequentially to minimise liveness and safety risks.
+:::
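+
+To confirm what is now running, you can list the images your stack is using; a sketch assuming the default compose file from the repo:
+
+```
+# Show the image and tag in use by each service after the rebuild
+docker compose images
+```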
+
+### Conflicts
+
+:::info
+You may get a `git conflict` error similar to this:
+:::
+```markdown
+error: Your local changes to the following files would be overwritten by merge:
+prometheus/prometheus.yml
+...
+Please commit your changes or stash them before you merge.
+```
+This is probably because you have made some changes to some of the files, for example to the `prometheus/prometheus.yml` file.
+
+To resolve this error, you can either:
+
+- Stash and reapply changes if you want to keep your custom changes:
+ ```
+ git stash # Stash your local changes
+ git pull # Pull the latest changes
+ git stash apply # Reapply your changes from the stash
+ ```
+ After reapplying your changes, manually resolve any conflicts that may arise between your changes and the pulled changes using a text editor or Git's conflict resolution tools.
+
+- Override changes and recreate configuration if you don't need to preserve your local changes and want to discard them entirely:
+ ```
+ git reset --hard # Discard all local changes and override with the pulled changes
+ docker-compose up -d --build # Recreate your DV stack
+ ```
+ After overriding the changes, you will need to recreate your DV stack using the updated files.
+ By following one of these approaches, you should be able to handle Git conflicts when pulling the latest changes to your repository, either preserving your changes or overriding them as per your requirements.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/intro.md b/docs/versioned_docs/version-v0.19.0/intro.md
new file mode 100644
index 0000000000..7c43380b83
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to Obol and Distributed Validator Technology
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 35 members that are spread across the world.
+
+The core team is building the Distributed Validator Protocol, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize Distributed Validators for solo or multi-operator staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.19.0/sc/README.md b/docs/versioned_docs/version-v0.19.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.19.0/sc/introducing-obol-splits.md b/docs/versioned_docs/version-v0.19.0/sc/introducing-obol-splits.md
new file mode 100644
index 0000000000..fb642befa5
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sc/introducing-obol-splits.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 1
+description: Smart contracts for managing Distributed Validators
+---
+
+# Obol Splits
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators. These contracts include:
+
+- Withdrawal Recipients: Contracts used for a validator's withdrawal address.
+- Split contracts: Contracts to split ether across multiple entities. Developed by [Splits.org](https://splits.org)
+- Split controllers: Contracts that can mutate a splitter's configuration.
+
+Two key goals of validator reward management are:
+
+1. To be able to differentiate reward ether from principal ether such that node operators can be paid a percentage of the _reward_ they accrue for the principal provider rather than a percentage of _principal+reward_.
+2. To be able to withdraw the rewards in an ongoing manner without exiting the validator.
+
+Without access to the consensus layer state in the EVM to check a validator's status or balance, and because the incoming ether arrives via an irregular state transition, neither of these requirements is easily satisfiable.
+
+The following sections outline different contracts that can be composed to form a solution for one or both goals.
+
+## Withdrawal Recipients
+
+Validators have two streams of revenue: consensus layer rewards and execution layer rewards. Withdrawal Recipients focus on the former, receiving the balance skimmed from a validator with >32 ether on an ongoing basis, and receiving the principal of the validator upon exit.
+
+### Optimistic Withdrawal Recipient
+
+This is the primary withdrawal recipient Obol uses, as it allows for the separation of reward from principal, as well as permitting the ongoing withdrawal of accruing rewards.
+
+An Optimistic Withdrawal Recipient [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipient.sol) takes three inputs when deployed:
+
+- A _principal_ address: The address that controls where the principal ether will be transferred post-exit.
+- A _reward_ address: The address where the accruing reward ether is transferred to.
+- The amount of ether that makes up the principal.
+
+This contract **assumes that any ether that has appeared in its address since it was last able to do balance accounting is skimmed reward from an ongoing validator** (or number of validators) unless the change is > 16 ether. This means balance skimming is immediately claimable as reward, while an inflow of e.g. 31 ether is tracked as a return of principal (despite the validator having been slashed in this example).
+
+:::warning
+
+Worst-case mass slashings can theoretically exceed 16 ether. If this were to occur, the returned principal would be misclassified as reward and distributed to the wrong address. This risk is the drawback that makes this contract variant 'optimistic'. If you intend to use this contract type, **it is important that you understand and accept this risk**, however minute.
+
+The alternative is to use a splits.org [waterfall contract](https://docs.splits.org/core/waterfall), which won't allow the claiming of rewards until all principal ether has been returned, meaning validators need to be exited before operators can claim their CL rewards.
+
+:::
+
+This contract fits both design goals and can be used with thousands of validators. If you deploy an Optimistic Withdrawal Recipient with a principal higher than you actually end up using, nothing goes wrong. If you activate more validators than you specified in your contract deployment, you will record too much ether as reward and will overpay your reward address with ether that was principal ether, not earned ether. Current iterations of this contract are not designed for editing the amount of principal set.
+
+#### OWR Factory Deployment
+
+The OptimisticWithdrawalRecipient contract is deployed via a [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipientFactory.sol). The factory is deployed at the following addresses on the following chains.
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | [0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522](https://etherscan.io/address/0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522) |
+| Goerli | [0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26](https://goerli.etherscan.io/address/0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26) |
+| Holesky | |
+| Sepolia | [0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a](https://sepolia.etherscan.io/address/0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a) |
+
+### Exitable Withdrawal Recipient
+
+A much awaited feature for proof of stake Ethereum is the ability to trigger the exit of a validator with only the withdrawal address. This is tracked in [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002). Support for this feature will be inheritable in all other withdrawal recipient contracts. This will mitigate the risk to a principal provider of funds being stuck, or a validator being irrecoverably offline.
+
+## Split Contracts
+
+A split, or splitter, is a set of contracts that can divide ether or an ERC20 across a number of addresses. Splits are often used in conjunction with withdrawal recipients. Execution Layer rewards for a DV are directed to a split address through the use of a `fee recipient` address. Splits can be either immutable, or mutable by way of an admin address capable of updating them.
+
+Further information about splits can be found on the splits.org team's [docs site](https://docs.splits.org/). The addresses of their deployments can be found [here](https://docs.splits.org/core/split#addresses).
+
+## Split Controllers
+
+Splits can be completely edited through the use of the `controller` address; however, total editability of a split is not always wanted. A permissive controller and a restrictive controller are given as examples below.
+
+### (Gnosis) SAFE wallet
+
+A [SAFE](https://safe.global/) is a common method to administrate a mutable split. The most well-known deployment of this pattern is the [protocol guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html). The SAFE can arbitrarily update the split to any set of addresses with any valid set of percentages.
+
+### Immutable Split Controller
+
+This is a [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitController.sol) that updates one split configuration with another, exactly once. Only a permissioned address can trigger the change. This contract is suitable for changing a split at an unknown point in future to a configuration pre-defined at deployment.
+
+The Immutable Split Controller [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitControllerFactory.sol) can be found at the following addresses:
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | |
+| Goerli | [0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f](https://goerli.etherscan.io/address/0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f) |
+| Holesky | |
+| Sepolia | |
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.0/sec/README.md b/docs/versioned_docs/version-v0.19.0/sec/README.md
new file mode 100644
index 0000000000..aeb3b02cce
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sec/README.md
@@ -0,0 +1,2 @@
+# sec
+
diff --git a/docs/versioned_docs/version-v0.19.0/sec/bug-bounty.md b/docs/versioned_docs/version-v0.19.0/sec/bug-bounty.md
new file mode 100644
index 0000000000..48c52d89b4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sec/bug-bounty.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 2
+description: Bug Bounty Policy
+---
+
+# Obol Bug Bounty
+
+## Overview
+
+Obol Labs is committed to ensuring the security of our distributed validator software and services. As part of our commitment to security, we have established a bug bounty program to encourage security researchers to report vulnerabilities in our software and services to us so that we can quickly address them.
+
+## Eligibility
+
+To participate in the Bug Bounty Program you must:
+
+- Not be a resident of any country that does not allow participation in these types of programs
+- Be at least 14 years old and have legal capacity to agree to these terms and participate in the Bug Bounty Program
+- Have permission from your employer to participate
+- Not be (for the previous 12 months) an Obol Labs employee, immediate family member of an Obol employee, Obol contractor, or Obol service provider.
+
+## Scope
+
+The bug bounty program applies to software and services that are built by Obol. Only submissions under the following domains are eligible for rewards:
+
+- Charon DVT Middleware
+- DV Launchpad
+- Obol’s Public API
+- Obol’s Smart Contracts and the contracts they depend on.
+- Obol’s Public Relay
+
+Additionally, all vulnerabilities that require or are related to the following are out of scope:
+
+- Social engineering
+- Rate Limiting (Non-critical issues)
+- Physical security
+- Non-security-impacting UX issues
+- Vulnerabilities or weaknesses in third party applications that integrate with Obol
+- The Obol website or the Obol infrastructure in general is NOT part of this bug bounty program.
+
+## Rules
+
+- Bug has not been publicly disclosed
+- Vulnerabilities that have been previously submitted by another contributor or already known by the Obol development team are not eligible for rewards
+- The size of the bounty payout depends on the assessment of the severity of the exploit. Please refer to the rewards section below for additional details
+- Bugs must be reproducible in order for us to verify the vulnerability. Submissions must include a working proof of concept
+- Rewards and the validity of bugs are determined by the Obol security team and any payouts are made at their sole discretion
+- Terms and conditions of the Bug Bounty program can be changed at any time at the discretion of Obol
+- Details of any valid bugs may be shared with complementary protocols utilised in the Obol ecosystem in order to promote ecosystem cohesion and safety.
+
+## Rewards
+
+The rewards for participating in our bug bounty program will be based on the severity and impact of the vulnerability discovered. We will evaluate each submission on a case-by-case basis, and the rewards will be at Obol’s sole discretion.
+
+### Low: up to $500
+
+A Low-level vulnerability is one that has a limited impact and can be easily fixed. Unlikely to have a meaningful impact on availability, integrity, and/or loss of funds.
+
+- Low impact, medium likelihood
+- Medium impact, low likelihood
+
+Examples:
+
+- Attacker can sometimes put a charon node in a state that causes it to drop one out of every one hundred attestations made by a validator
+
+### Medium: up to $1,000
+
+A Medium-level vulnerability is one that has a moderate impact and requires a more significant effort to fix. Possible to have an impact on validator availability, integrity, and/or loss of funds.
+
+- High impact, low likelihood
+- Medium impact, medium likelihood
+- Low impact, high likelihood
+
+Examples:
+
+- Attacker can successfully conduct eclipse attacks on the cluster nodes with peer-ids with 4 leading zero bytes.
+
+### High: up to $4,000
+
+A High-level vulnerability is one that has a significant impact on the security of the system and requires a significant effort to fix. Likely to have impact on availability, integrity, and/or loss of funds.
+
+- High impact, medium likelihood
+- Medium impact, high likelihood
+
+Examples:
+
+- Attacker can successfully partition the cluster and keep the cluster offline.
+
+### Critical: up to $10,000
+
+A Critical-level vulnerability is one that has a severe impact on the security of the in-production system and requires immediate attention to fix. Highly likely to have a material impact on availability, integrity, and/or loss of funds.
+
+- High impact, high likelihood
+
+Examples:
+
+- Attacker can successfully conduct remote code execution in charon client to exfiltrate BLS private key material.
+
+We may offer rewards in the form of cash, merchandise, or recognition. We will only award one reward per vulnerability discovered, and we reserve the right to deny a reward if we determine that the researcher has violated the terms and conditions of this policy.
+
+## Submission process
+
+Please email security@obol.tech
+
+Your report should include the following information:
+
+- Description of the vulnerability and its potential impact
+- Steps to reproduce the vulnerability
+- Proof of concept code, screenshots, or other supporting documentation
+- Your name, email address, and any contact information you would like to provide.
+
+Reports that do not include sufficient detail will not be eligible for rewards.
+
+## Disclosure Policy
+
+Obol Labs will disclose the details of the vulnerability and the researcher’s identity (with their consent) only after we have remediated the vulnerability and issued a fix. Researchers must keep the details of the vulnerability confidential until Obol Labs has acknowledged and remediated the issue.
+
+## Legal Compliance
+
+All participants in the bug bounty program must comply with all applicable laws, regulations, and policy terms and conditions. Obol will not be held liable for any unlawful or unauthorised activities performed by participants in the bug bounty program.
+
+We will not take any legal action against security researchers who discover and report security vulnerabilities in accordance with this bug bounty policy. We do, however, reserve the right to take legal action against anyone who violates the terms and conditions of this policy.
+
+## Non-Disclosure Agreement
+
+All participants in the bug bounty program will be required to sign a non-disclosure agreement (NDA) before they are given access to closed source software and services for testing purposes.
diff --git a/docs/versioned_docs/version-v0.19.0/sec/contact.md b/docs/versioned_docs/version-v0.19.0/sec/contact.md
new file mode 100644
index 0000000000..e66e1663e2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sec/contact.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 3
+description: Security details for the Obol Network
+---
+
+# Contacts
+
+Please email security@obol.tech to report a security incident, vulnerability, bug or inquire about Obol's security.
+
+Also, visit the [obol security repo](https://github.com/ObolNetwork/obol-security) for more details.
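+
+If you would like to encrypt your report before emailing it, a generic sketch using GnuPG (the key file name and recipient ID are placeholders; use the public keys published in the obol-security repo):
+
+```
+# Import one of the published Obol security GPG public keys (placeholder filename)
+gpg --import obol-security-pubkey.asc
+
+# Encrypt your report to the imported key (recipient ID here is an assumption; use the UID of the imported key)
+gpg --encrypt --armor --recipient security@obol.tech report.md
+```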
diff --git a/docs/versioned_docs/version-v0.19.0/sec/ev-assessment.md b/docs/versioned_docs/version-v0.19.0/sec/ev-assessment.md
new file mode 100644
index 0000000000..a8ce756359
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sec/ev-assessment.md
@@ -0,0 +1,295 @@
+---
+sidebar_position: 4
+description: Software Development Security Assessment
+---
+
+# ev-assessment
+
+## Software Development at Obol
+
+When hardening a project's technical security, team members' operational security and the security of the team's software development practices are some of the most critical areas to secure. Many hacks and compromises in the space to date have resulted from these attack vectors rather than from exploits of the software itself.
+
+With this in mind, in January 2023 the Obol team retained the expertise of Ethereal Ventures' security researcher Alex Wade to interview key stakeholders and produce a report on the team's Software Development Lifecycle.
+
+The page below is the result of that report. Some sensitive information has been redacted, and responses to the recommendations have been added, detailing the actions the Obol team has taken to mitigate the issues highlighted.
+
+## Obol Report
+
+**Prepared by: Alex Wade (Ethereal Ventures)** **Date: Jan 2023**
+
+Over the past month, I worked with Obol to review their software development practices in preparation for their upcoming security audits. My goals were to review and analyze:
+
+* Software development processes
+* Vulnerability disclosure and escalation procedures
+* Key personnel risk
+
+The information in this report was collected through a series of interviews with Obol’s project leads.
+
+### Contents:
+
+* Background Info
+* Analysis - Cluster Setup and DKG
+ * Key Risks
+ * Potential Attack Scenarios
+* Recommendations
+ * R1: Users should deploy cluster contracts through a known on-chain entry point
+ * R2: Users should deposit to the beacon chain through a pool contract
+ * R3: Raise the barrier to entry to push an update to the Launchpad
+* Additional Notes
+ * Vulnerability Disclosure
+ * Key Personnel Risk
+
+### Background Info
+
+**Each team lead was asked to describe Obol in terms of its goals, objectives, and key features.**
+
+#### What is Obol?
+
+Obol builds DVT (Distributed Validator Technology) for Ethereum.
+
+#### What is Obol’s goal?
+
+Obol’s goal is to solve a classic distributed systems problem: uptime.
+
+Rather than requiring Ethereum validators to stake on their own, Obol allows groups of operators to stake together. Using Obol, a single validator can be run cooperatively by multiple people across multiple machines.
+
+In theory, this architecture provides validators with some redundancy against common issues: server and power outages, client failures, and more.
+
+#### What are Obol’s objectives?
+
+Obol’s business objective is to provide base-layer infrastructure to support a distributed validator ecosystem. As Obol provides base layer technology, other companies and projects will build on top of Obol.
+
+Obol’s business model is to eventually capture a portion of the revenue generated by validators that use Obol infrastructure.
+
+#### What is Obol’s product?
+
+Obol’s product consists of three main components, each run by its own team: a webapp, a client, and smart contracts.
+
+* [DV Launchpad](../dvl/intro.md): A webapp to create and manage distributed validators.
+* [Charon](../charon/intro.md): A middleware client that enables operators to run distributed validators.
+* [Solidity](../sc/introducing-obol-splits.md): Withdrawal and fee recipient contracts for use with distributed validators.
+
+### Analysis - Cluster Setup and DKG
+
+The Launchpad guides users through the process of creating a cluster, which defines important parameters like the validator’s fee recipient and withdrawal addresses, as well as the identities of the operators in the cluster. In order to ensure their cluster configuration is correct, users need to rely on a few different factors.
+
+**First, users need to trust the Charon client** to perform the DKG correctly, and validate things like:
+
+* Config file is well-formed and is using the expected version
+* Signatures and ENRs from other operators are valid
+* Cluster config hash is correct
+* DKG succeeds in producing valid signatures
+* Deposit data is well-formed and is correctly generated from the cluster config and DKG.
+
+However, Charon’s validation is limited to the digital: signature checks, cluster file syntax, etc. It does NOT help would-be operators determine whether the other operators listed in their cluster definition are the real people with whom they intend to start a DVT cluster. So -
+
+**Second, users need to come to social consensus with fellow operators.** While the cluster is being set up, it’s important that each operator is an active participant. Each member of the group must validate and confirm that:
+
+* the cluster file correctly reflects their address and node identity, and reflects the information they received from fellow operators
+* the cluster parameters are expected – namely, the number of validators and signing threshold
+
+**Finally, users need to perform independent validation.** Each user should perform their own validation of the cluster definition:
+
+* Is my information correct? (address and ENR)
+* Does the information I received from the group match the cluster definition?
+* Is the ETH2 deposit data correct, and does it match the information in the cluster definition?
+* Are the withdrawal and fee recipient addresses correct?
+
+These final steps are potentially the most difficult, and may require significant technical knowledge.
+
+### Key Risks
+
+#### 1. Validation of Contract Deployment and Deposit Data Relies Heavily on Launchpad
+
+From my interviews, it seems that the user deploys both the withdrawal and fee recipient contracts through the Launchpad.
+
+What I’m picturing is that during the first parts of the cluster setup process, the user is prompted to sign one or more transactions deploying the withdrawal and fee recipient contracts to mainnet. The Launchpad apparently uses an npm package to deploy these contracts: `0xsplits/splits-sdk`, which I assume provides either JSON artifacts or a factory address on chain. The Launchpad then places the deployed contracts into the cluster config file, and the process moves on.
+
+If an attacker has published a malicious update to the Launchpad (or compromised an underlying dependency), the contracts deployed by the Launchpad may be malicious. The questions I’d like to pose are:
+
+* How does the group creator know the Launchpad deployed the correct contracts?
+* How does the rest of the group know the creator deployed the contracts through the Launchpad?
+
+My understanding is that this ultimately comes down to the independent verification that each of the group’s members performs during and after the cluster’s setup phase.
+
+At its worst, this verification might consist solely of the cluster creator confirming to the others that, yes, those addresses match the contracts I deployed through the Launchpad.
+
+A more sophisticated user might verify that not only do the addresses match, but the deployed source code looks roughly correct. However, this step is far out of the realm of many would-be validators. To be really certain that the source code is correct would require auditor-level knowledge.
+
+The risk is that:
+
+* the deployed contracts are NOT the correctly-configured 0xsplits waterfall/fee splitter contracts
+* most users are ill-equipped to make this determination themselves
+* we don’t want to trust the Launchpad as the single source of truth
+
+In the worst case, the cluster may end up depositing with malicious withdrawal or fee recipient credentials. If unnoticed, this may net an attacker the entire withdrawal amount, once the cluster exits.
+
+Note that the same (or similar) risks apply to validation of deposit data, which has the potential to be similarly difficult. I’m a little fuzzy on which part of the Obol stack actually generates the deposit data / deposit transaction, so I can’t speak to this as much. However, I think the mitigation for both of these is roughly the same - read on!
+
+**Mitigation:**
+
+It’s certainly a good idea to make it harder to deploy malicious updates to the Launchpad, but this may not be entirely possible. A higher-yield strategy may be to educate and empower users to perform independent validation of the DVT setup process - without relying on information fed to them by Charon and the Launchpad.
+
+I’ve outlined some ideas for this in #R1 and #R2.
+
+#### 2. Social Consensus, aka “Who sends the 32 ETH?”
+
+Depositing to the beacon chain requires a total of 32 ETH. Obol’s product allows multiple operators to act as a single validator together, which means would-be operators need to agree on how to fund the 32 ETH needed to initiate the deposit.
+
+It is my understanding that currently, this process comes down to trust and loose social consensus. Essentially, the group needs to decide who chips in what amount together, and then trust someone to take the 32 ETH and complete the deposit process correctly (without running away with the money).
+
+Granted, the initial launch of Obol will be open only to a small group of people as the kinks in the system get worked out - but in preparation for an eventual public release, the deposit process needs to be much simpler and far less reliant on trust.
+
+Mitigation: See #R2.
+
+**Potential Attack Scenarios**
+
+During the interview process, I learned that each of Obol’s core components has its own GitHub repo, and that each repo has roughly the same structure in terms of organization and security policies. For each repository:
+
+* There are two overall github organization administrators, and a number of people have administrative control over individual repositories.
+* In order to merge PRs, the submitter needs:
+ * CI/CD checks to pass
+ * Review from one person (anyone at Obol)
+
+Of course, admin access also means the ability to change these settings - so repo admins could theoretically merge PRs without needing checks to pass, and without review/approval, organization admins can control the full GitHub organization.
+
+The following scenarios describe the impact an attack may have.
+
+**1. Publishing a malicious version of the Launchpad, or compromising an underlying dependency**
+
+* Reward: High
+* Difficulty: Medium-Low
+
+As described in Key Risks, publishing a malicious version of the Launchpad has the potential to net the largest payout for an attacker. By tampering with the cluster’s deposit data or withdrawal/fee recipient contracts, an attacker stands to gain 32 ETH or more per compromised cluster.
+
+During the interviews, I learned that merging PRs to main in the Launchpad repo triggers an action that publishes to the site. Given that merges can be performed by an authorized Obol developer, this makes the developers prime targets for social engineering attacks.
+
+Additionally, the use of the `0xsplits/splits-sdk` NPM package to aid in contract deployment may represent a supply chain attack vector. It may be that this applies to other Launchpad dependencies as well.
+
+In any case, with a fairly large surface area and high potential reward, this scenario represents a credible risk to users during the cluster setup and DKG process.
+
+See #R1, #R2, and #R3 for some ideas to address this scenario.
+
+**2. Publishing a malicious version of Charon to new operators**
+
+* Reward: Medium
+* Difficulty: High
+
+During the cluster setup process, Charon is responsible both for validating the cluster configuration produced by the Launchpad, as well as performing a DKG ceremony between a group’s operators.
+
+If new operators use a malicious version of Charon to perform this process, it may be possible to tamper with both of these responsibilities, or even get access to part or all of the underlying validator private key created during DKG.
+
+However, the difficulty of this type of attack seems quite high. An attacker would first need to carry out the same type of social engineering attack described in scenario 1 to publish and tag a new version of Charon. Crucially, users would also need to install the malicious version - unlike the Launchpad, an update here is not pushed directly to users.
+
+As long as Obol is clear and consistent with communication around releases and versioning, it seems unlikely that a user would both install a brand-new, unannounced release, and finish the cluster setup process before being warned about the attack.
+
+**3. Publishing a malicious version of Charon to existing validators**
+
+* Reward: Low
+* Difficulty: High
+
+Once a distributed validator is up and running, much of the danger has passed. As a middleware client, Charon sits between a validator’s consensus and validator clients. As such, it shouldn’t have direct access to a validator’s withdrawal keys nor signing keys.
+
+If existing validators update to a malicious version of Charon, it's likely the worst thing an attacker could theoretically do is slash the validator. However, assuming charon has no access to any private keys, this would be predicated on one or more validator clients connected to charon also failing to prevent the signing of a slashable message. In practice, a compromised charon client is more likely to pose liveness risks than safety risks.
+
+This is not likely to be particularly motivating to potential attackers - and paired with the high difficulty described above, this scenario seems unlikely to cause significant issues.
+
+### Recommendations
+
+#### R1: Users should deploy cluster contracts through a known on-chain entry point
+
+During setup, users should only sign one transaction via the Launchpad - to a contract located at an Obol-held ENS (e.g. `launchpad.obol.eth`). This contract should deploy everything needed for the cluster to operate, like the withdrawal and fee recipient contracts. It should also initialize them with the provided reward split configuration (and any other config needed).
+
+Rather than using an NPM library to supply a factory address or JSON artifacts, this has the benefit of being both:
+
+* **Harder to compromise:** as long as the user knows launchpad.obol.eth, it’s pretty difficult to trick them into deploying the wrong contracts.
+* **Easier to validate** for non-technical users: the Obol contract can be queried for deployment information via etherscan. For example:
+
+
+
+Note that in order for this to be successful, Obol needs to provide detailed steps for users to perform manual validation of their cluster setups. Users should be able to treat this as a “checklist:”
+
+* Did I send a transaction to `launchpad.obol.eth`?
+* Can I use the ENS name to locate and query the deployment manager contract on etherscan?
+* If I input my address, does etherscan report the configuration I was expecting?
+ * withdrawal address matches
+ * fee recipient address matches
+ * reward split configuration matches
+
+As long as these steps are plastered all over the place (i.e. not just on the Launchpad) and Obol puts in effort to educate users about the process, this approach should allow users to validate cluster configurations themselves - regardless of Launchpad or NPM package compromise.
+
+**Obol’s response:**
+
+Roadmapped: add the ability for the OWR factory to claim and transfer its reverse resolution ownership.
+
+#### R2: Users should deposit to the beacon chain through a pool contract
+
+Once cluster setup and DKG is complete, a group of operators should deposit to the beacon chain by way of a pool contract. The pool contract should:
+
+* Accept Eth from any of the group’s operators
+* Stop accepting Eth when the contract’s balance hits (32 ETH \* number of validators)
+* Make it easy to pull the trigger and deposit to the beacon chain once the critical balance has been reached
+* Offer all of the group’s operators a “bail” option at any point before the deposit is triggered
+
+Ideally, this contract is deployed during the setup process described in #R1, as another step toward allowing users to perform independent validation of the process.
+
+Rather than relying on social consensus, this should:
+
+* Allow operators to fund the validator without needing to trust any single party
+* Make it harder to mess up the deposit or send funds to some malicious actor, as the pool contract should know what the beacon deposit contract address is
+
+**Obol’s response:**
+
+Roadmapped: give the operators a streamlined, secure way to deposit Ether (ETH) to the beacon chain collectively, satisfying specific conditions:
+
+* Pooling from multiple operators.
+* Ceasing to accept ETH once a critical balance is reached, defined by 32 ETH multiplied by the number of validators.
+* Facilitating an immediate deposit to the beacon chain once the target balance is reached.
+* Provide a 'bail-out' option for operators to withdraw their contribution before initiating the group's deposit to the beacon chain.
+
+#### R3: Raise the barrier to entry to push an update to the Launchpad
+
+Currently, any repo admin can publish an update to the Launchpad unchecked.
+
+Given the risks and scenarios outlined above, consider amending this process so that the sole compromise of either admin is not sufficient to publish to the Launchpad site. It may be worthwhile to require both admins to approve publishing to the site.
+
+Along with simply adding additional prerequisites to publish an update to the Launchpad, ensure that both admins have enabled some level of multi-factor authentication on their GitHub accounts.
+
+**Obol’s response:**
+
+We removed individuals' ability to merge changes without review, enforced MFA and signed commits, and employed the Bulldozer bot to make sure a PR gets merged automatically when all checks pass.
+
+### Additional Notes
+
+#### Vulnerability Disclosure
+
+During the interviews, I got some conflicting information when asking about Obol’s vulnerability disclosure process.
+
+Some interviewees directed me towards Obol’s security repo, which details security contacts: [ObolNetwork/obol-security](https://github.com/ObolNetwork/obol-security), while some answered that disclosure should happen primarily through Immunefi. While these may both be part of the correct answer, it seems that Obol’s disclosure process may not be as well-defined as it could be. Here are some notes:
+
+* I wasn’t able to find information about Obol on Immunefi. I also didn’t find any reference to a security contact or disclosure policy in Obol’s docs.
+* When looking into the obol security repo, I noticed broken links in a few of the sections in README.md and SECURITY.md:
+ * Security policy
+ * More Information
+* Some of the text and links in the Bug Bounty Program don’t seem to apply to Obol (see text referring to Vaults and Strategies).
+* The Receiving Disclosures section does not include a public key with which submitters can encrypt vulnerability information.
+
+It’s my understanding that these items are probably lower priority due to Obol’s initial closed launch - but these should be squared away soon! \[Obol response to latest vuln disclosure process goes here]
+
+**Obol’s response:**
+
+We addressed all of the concerns in the obol-security repository:
+
+1. The security policy link has been fixed
+2. The Bug Bounty program received an overhaul and clearly states rewards, eligibility, and scope
+3. We list two GPG public keys for which we accept encrypted vulnerabilities reports.
+
+We are actively working towards integrating Immunefi in our security pipeline.
+
+#### Key Personnel Risk
+
+A final section on the specifics of key personnel risk faced by Obol has been redacted from the original report. Particular areas of control highlighted were github org ownership and domain name control.
+
+**Obol’s response:**
+
+These risks have been mitigated by adding an extra admin to the GitHub org, and by setting up a second DNS stack in case the primary one fails, along with general OpSec improvements.
diff --git a/docs/versioned_docs/version-v0.19.0/sec/overview.md b/docs/versioned_docs/version-v0.19.0/sec/overview.md
new file mode 100644
index 0000000000..31e7835c06
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sec/overview.md
@@ -0,0 +1,33 @@
+---
+sidebar_position: 1
+description: Security Overview
+---
+
+# Overview
+
+This page serves as an overview of the Obol Network from a security point of view.
+
+This page is updated quarterly. The last update was on 2023-10-01.
+
+## Table of Contents
+
+1. [List of Security Audits and Assessments](overview.md#list-of-security-audits-and-assessments)
+2. [Security Focused Documents](overview.md#security-focused-documents)
+3. [Bug Bounty Details](bug-bounty.md)
+
+## List of Security Audits and Assessments
+
+The completed audit reports are linked [here](https://github.com/ObolNetwork/obol-security/tree/main/audits).
+
+* A review of Obol Labs [development processes](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/sec/ev-assessment/README.md) by Ethereal Ventures
+* A [security assessment](https://github.com/ObolNetwork/obol-security/blob/f9d7b0ad0bb8897f74ccb34cd4bd83012ad1d2b5/audits/Sigma_Prime_Obol_Network_Charon_Security_Assessment_Report_v2_1.pdf) of Charon by [Sigma Prime](https://sigmaprime.io/).
+* A [solidity audit](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/sec/smart_contract_audit/README.md) of the Obol Splits contracts by [Zach Obront](https://zachobront.com/).
+* A second audit of Charon is planned for Q4 2023.
+
+## Security focused documents
+
+* A [threat model](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.0/sec/threat_model/README.md) for a DV middleware client like charon.
+
+## Bug Bounty
+
+Information related to disclosing bugs and vulnerabilities to Obol can be found on [the next page](bug-bounty.md).
diff --git a/docs/versioned_docs/version-v0.19.0/sec/smart_contract_audit.md b/docs/versioned_docs/version-v0.19.0/sec/smart_contract_audit.md
new file mode 100644
index 0000000000..310f843be2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sec/smart_contract_audit.md
@@ -0,0 +1,477 @@
+---
+sidebar_position: 5
+description: Smart Contract Audit
+---
+
+# Smart Contract Audit
+
+| | |
+| ------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
+|  | Obol Audit Report
Obol Manager Contracts
Prepared by: Zach Obront, Independent Security Researcher
Date: Sept 18 to 22, 2023
|
+
+## About **Obol**
+
+The Obol Network is an ecosystem for trust minimized staking that enables people to create, test, run & co-ordinate distributed validators.
+
+The Obol Manager contracts are responsible for distributing validator rewards and withdrawals among the validator and node operators involved in a distributed validator.
+
+## About **zachobront**
+
+Zach Obront is an independent smart contract security researcher. He serves as a Lead Senior Watson at Sherlock, a Security Researcher at Spearbit, and has identified multiple critical severity bugs in the wild, including in a Top 5 Protocol on Immunefi. You can say hi on Twitter at [@zachobront](http://twitter.com/zachobront).
+
+## Summary & Scope
+
+The [ObolNetwork/obol-manager-contracts](https://github.com/ObolNetwork/obol-manager-contracts/) repository was audited at commit [50ce277919723c80b96f6353fa8d1f8facda6e0e](https://github.com/ObolNetwork/obol-manager-contracts/tree/50ce277919723c80b96f6353fa8d1f8facda6e0e).
+
+The following contracts were in scope:
+
+* src/controllers/ImmutableSplitController.sol
+* src/controllers/ImmutableSplitControllerFactory.sol
+* src/lido/LidoSplit.sol
+* src/lido/LidoSplitFactory.sol
+* src/owr/OptimisticWithdrawalReceiver.sol
+* src/owr/OptimisticWithdrawalReceiverFactory.sol
+
+After completion of the fixes, the [2f4f059bfd145f5f05d794948c918d65d222c3a9](https://github.com/ObolNetwork/obol-manager-contracts/tree/2f4f059bfd145f5f05d794948c918d65d222c3a9) commit was reviewed. After this review, the updated Lido fee share system in [PR #96](https://github.com/ObolNetwork/obol-manager-contracts/pull/96/files) was reviewed.
+
+## Summary of Findings
+
+| Identifier | Title | Severity | Fixed |
+| :-----------------------------------------------------------------------------------------------------------------------: | -------------------------------------------------------------------------------------- | :-----------: | :---: |
+| [M-01](smart_contract_audit.md#m-01-future-fees-may-be-skirted-by-setting-a-non-eth-reward-token) | Future fees may be skirted by setting a non-ETH reward token | Medium | ✓ |
+| [M-02](smart_contract_audit.md#m-02-splits-with-256-or-more-node-operators-will-not-be-able-to-switch-on-fees) | Splits with 256 or more node operators will not be able to switch on fees | Medium | ✓ |
+| [M-03](smart_contract_audit.md#m-03-in-a-mass-slashing-event-node-operators-are-incentivized-to-get-slashed) | In a mass slashing event, node operators are incentivized to get slashed | Medium | |
+| [L-01](smart_contract_audit.md#l-01-obol-fees-will-be-applied-retroactively-to-all-non-distributed-funds-in-the-splitter) | Obol fees will be applied retroactively to all non-distributed funds in the Splitter | Low | ✓ |
+| [L-02](smart_contract_audit.md#l-02-if-owr-is-used-with-rebase-tokens-and-theres-a-negative-rebase-principal-can-be-lost) | If OWR is used with rebase tokens and there's a negative rebase, principal can be lost | Low | ✓ |
+| [L-03](smart_contract_audit.md#l-03-lidosplit-can-receive-eth-which-will-be-locked-in-contract) | LidoSplit can receive ETH, which will be locked in contract | Low | ✓ |
+| [L-04](smart_contract_audit.md#l-04-upgrade-to-latest-version-of-solady-to-fix-libclone-bug) | Upgrade to latest version of Solady to fix LibClone bug | Low | ✓ |
+| [G-01](smart_contract_audit.md#g-01-steth-and-wsteth-addresses-can-be-saved-on-implementation-to-save-gas) | stETH and wstETH addresses can be saved on implementation to save gas | Gas | ✓ |
+| [G-02](smart_contract_audit.md#g-02-owr-can-be-simplified-and-save-gas-by-not-tracking-distributedfunds) | OWR can be simplified and save gas by not tracking distributedFunds | Gas | ✓ |
+| [I-01](smart_contract_audit.md#i-01-strong-trust-assumptions-between-validators-and-node-operators) | Strong trust assumptions between validators and node operators | Informational | |
+| [I-02](smart_contract_audit.md#i-02-provide-node-operator-checklist-to-validate-setup) | Provide node operator checklist to validate setup | Informational | |
+
+## Detailed Findings
+
+### \[M-01] Future fees may be skirted by setting a non-ETH reward token
+
+Fees are planned to be implemented on the `rewardRecipient` splitter by updating to a new fee structure using the `ImmutableSplitController`.
+
+It is assumed that all rewards will flow through the splitter, because (a) all distributed rewards less than 16 ETH are sent to the `rewardRecipient`, and (b) even if a team waited for rewards to be greater than 16 ETH, rewards sent to the `principalRecipient` are capped at the `amountOfPrincipalStake`.
+
+This creates a fairly strong guarantee that reward funds will flow to the `rewardRecipient`. Even if a user were to set their `amountOfPrincipalStake` high enough that the `principalRecipient` could receive unlimited funds, the Obol team could call `distributeFunds()` when the balance got near 16 ETH to ensure fees were paid.
+
+However, if the user selects a non-ETH token, all ETH will be withdrawable only through the `recoverFunds()` function. If they set up a split with their node operators as their `recoveryAddress`, all funds will be withdrawable via `recoverFunds()` without ever touching the `rewardRecipient` or paying a fee.
+
+#### Recommendation
+
+I would recommend removing the ability to use a non-ETH token from the `OptimisticWithdrawalRecipient`. Alternatively, if it feels like it may be a use case that is needed, it may make sense to always include ETH as a valid token, in addition to any `OWRToken` set.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[M-02] Splits with 256 or more node operators will not be able to switch on fees
+
+0xSplits is used to distribute rewards across node operators. All Splits are deployed with an ImmutableSplitController, which is given permissions to update the split one time to add a fee for Obol at a future date.
+
+The Factory deploys these controllers as Clones with Immutable Args, hard coding the `owner`, `accounts`, `percentAllocations`, and `distributorFee` for the future update. This data is packed as follows:
+
+```solidity
+ function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+ ) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
+ uint256[] memory recipients = new uint[](recipientsSize);
+
+ uint256 i = 0;
+ for (; i < recipientsSize;) {
+ recipients[i] = (uint256(percentAllocations[i]) << ADDRESS_BITS) | uint256(uint160(accounts[i]));
+
+ unchecked {
+ i++;
+ }
+ }
+
+ data = abi.encodePacked(splitMain, distributorFee, owner, uint8(recipientsSize), recipients);
+ }
+```
+
+In the process, `recipientsSize` is unsafely downcast into a `uint8`, which has a maximum value of `255`. As a result, any value of 256 or greater will overflow, and a lower value of `recipients.length % 256` will be passed as `recipientsSize`.
+
+When the Controller is deployed, the full list of `percentAllocations` is passed to the `validSplit` check, which will pass as expected. However, later, when `updateSplit()` is called, the `getNewSplitConfiguration()` function will only return the first `recipientsSize` accounts, ignoring the rest.
+
+```solidity
+ function getNewSplitConfiguration()
+ public
+ pure
+ returns (address[] memory accounts, uint32[] memory percentAllocations)
+ {
+ // fetch the size first
+ // then parse the data gradually
+ uint256 size = _recipientsSize();
+ accounts = new address[](size);
+ percentAllocations = new uint32[](size);
+
+ uint256 i = 0;
+ for (; i < size;) {
+ uint256 recipient = _getRecipient(i);
+ accounts[i] = address(uint160(recipient));
+ percentAllocations[i] = uint32(recipient >> ADDRESS_BITS);
+ unchecked {
+ i++;
+ }
+ }
+ }
+```
+
+When `updateSplit()` is eventually called on `splitsMain` to turn on fees, the `validSplit()` check on that contract will revert because the sum of the percent allocations will no longer sum to `1e6`, and the update will not be possible.
+
+#### Proof of Concept
+
+The following test can be dropped into a file in `src/test` to demonstrate that passing 400 accounts will result in a `recipientSize` of `400 - 256 = 144`:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+import { Test } from "forge-std/Test.sol";
+import { console } from "forge-std/console.sol";
+import { ImmutableSplitControllerFactory } from "src/controllers/ImmutableSplitControllerFactory.sol";
+import { ImmutableSplitController } from "src/controllers/ImmutableSplitController.sol";
+
+interface ISplitsMain {
+ function createSplit(address[] calldata accounts, uint32[] calldata percentAllocations, uint32 distributorFee, address controller) external returns (address);
+}
+
+contract ZachTest is Test {
+ function testZach_RecipientSizeCappedAt256Accounts() public {
+ vm.createSelectFork("https://mainnet.infura.io/v3/fb419f740b7e401bad5bec77d0d285a5");
+
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](400);
+ uint32[] memory bigPercentAllocations = new uint32[](400);
+
+ for (uint i = 0; i < 400; i++) {
+ bigAccounts[i] = address(uint160(i));
+ bigPercentAllocations[i] = 2500;
+ }
+
+ // confirmation that 0xSplits will allow creating a split with this many accounts
+ // dummy acct passed as controller, but doesn't matter for these purposes
+ address split = ISplitsMain(0x2ed6c4B5dA6378c7897AC67Ba9e43102Feb694EE).createSplit(bigAccounts, bigPercentAllocations, 0, address(8888));
+
+ ImmutableSplitController controller = factory.createController(split, owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+
+ // added a public function to controller to read recipient size directly
+ uint savedRecipientSize = controller.ZachTest__recipientSize();
+ assert(savedRecipientSize < 400);
+ console.log(savedRecipientSize); // 144
+ }
+}
+```
+
+#### Recommendation
+
+When packing the data in `_packSplitControllerData()`, check `recipientsSize` before downcasting to a uint8:
+
+```diff
+function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
++ if (recipientsSize > 255) revert InvalidSplit__TooManyAccounts(recipientsSize);
+ ...
+}
+```
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[M-03] In a mass slashing event, node operators are incentivized to get slashed
+
+When the `OptimisticWithdrawalRecipient` receives funds from the beacon chain, it uses the following rule to determine the allocation:
+
+> If the amount of funds to be distributed is greater than or equal to 16 ether, it is assumed that it is a withdrawal (to be returned to the principal, with a cap on principal withdrawals of the total amount they deposited).
+
+> Otherwise, it is assumed that the funds are rewards.
+
+This value being as low as 16 ether protects against any predictable attack the node operator could perform. For example, due to the effect of hysteresis in updating effective balances, it does not seem to be possible for node operators to predictably bleed a withdrawal down to be below 16 ether (even if they timed a slashing perfectly).
+
+However, in the event of a mass slashing event, slashing punishments can be much more severe than they otherwise would be. To calculate the size of a slash, we:
+
+* take the total percentage of validator stake slashed in the 18 days preceding and following a user's slash
+* multiply this percentage by 3 (capped at 100%)
+* the full slashing penalty for a given validator equals 1/32 of their stake, plus the resulting percentage above applied to the remaining 31/32 of their stake
+
+In order for such penalties to bring the withdrawal balance below 16 ether (assuming a full 32 ether to start), we would need the percentage taken to be greater than `15 / 31 = 48.3%`, which implies that `48.3 / 3 = 16.1%` of validators would need to be slashed.
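+
+Restating the arithmetic above explicitly (all figures from the report, assuming a full 32 ether starting balance):
+
+```
+starting balance                 = 32 ETH
+initial penalty                  = 32 / 32 = 1 ETH
+correlation penalty needed       > 32 - 1 - 16 = 15 ETH (of the remaining 31 ETH)
+correlation penalty percentage   > 15 / 31 ≈ 48.3%
+share of all validators slashed  > 48.3% / 3 ≈ 16.1%
+```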
+
+Because the measurement is taken from the 18 days before and after the incident, node operators would have the opportunity to see a mass slashing event unfold, and later decide that they would like to be slashed along with it.
+
+In the event that they observed that greater than 16.1% of validators were slashed, Obol node operators would be able to get themselves slashed, be exited with a withdrawal of less than 16 ether, and claim that withdrawal as rewards, effectively stealing from the principal recipient.
+
+#### Recommendations
+
+Find a solution that provides a higher level of guarantee that the funds withdrawn are actually rewards, and not a withdrawal.
+
+#### Review
+
+Acknowledged. We believe this is a black swan event. It would require a major ETH client to be compromised, and would be a betrayal of trust, so likely not EV+ for doxxed operators. Users of this contract with unknown operators should be wary of such a risk.
+
+### \[L-01] Obol fees will be applied retroactively to all non-distributed funds in the Splitter
+
+When Obol decides to turn on fees, a call will be made to `ImmutableSplitController::updateSplit()`, which will take the predefined split parameters (the original user specified split with Obol's fees added in) and call `updateSplit()` to implement the change.
+
+```solidity
+function updateSplit() external payable {
+ if (msg.sender != owner()) revert Unauthorized();
+
+ (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+ ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```
+
+If we look at the code on `SplitsMain`, we can see that this `updateSplit()` function is applied retroactively to all funds that are already in the split, because it updates the parameters without performing a distribution first:
+
+```solidity
+function updateSplit(
+ address split,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+)
+ external
+ override
+ onlySplitController(split)
+ validSplit(accounts, percentAllocations, distributorFee)
+{
+ _updateSplit(split, accounts, percentAllocations, distributorFee);
+}
+```
+
+This means that any funds that have been sent to the split but have not yet been distributed will be subject to the Obol fee. Since these splitters will be accumulating all execution layer fees, it is possible that some of them may have received large MEV bribes, where this after-the-fact fee could be quite expensive.
+
+#### Recommendation
+
+The most strict solution would be for the `ImmutableSplitController` to store both the old split parameters and the new parameters. The old parameters could first be used to call `distributeETH()` on the split, and then `updateSplit()` could be called with the new parameters.
+
+If storing both sets of values seems too complex, the alternative would be to require that `split.balance <= 1` to update the split. Then the Obol team could simply store the old parameters off chain to call `distributeETH()` on each split to "unlock" it to update the fees.
+
+(Note that for the second solution, the ETH balance should be less than or equal to 1, not 0, because 0xSplits stores empty balances as `1` for gas savings.)
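+
+As an illustrative sketch only (not necessarily the change that was merged), the stricter first option could modify `updateSplit()` roughly as follows, where `getOldSplitConfiguration()` is a hypothetical helper mirroring `getNewSplitConfiguration()` that returns the parameters the split is currently configured with, and `ISplitMain` is assumed to expose 0xSplits' `distributeETH()`:
+
+```diff
+function updateSplit() external payable {
+  if (msg.sender != owner()) revert Unauthorized();
+
++ // Flush the pending balance using the split's *current* parameters first,
++ // so funds already received are not subject to the new fee. These old values
++ // must match the split's stored configuration hash for distributeETH to succeed.
++ (address[] memory oldAccounts, uint32[] memory oldAllocations, uint32 oldFee) = getOldSplitConfiguration();
++ ISplitMain(splitMain()).distributeETH(split, oldAccounts, oldAllocations, oldFee, address(0));
+
+  (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+  ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```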
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[L-02] If OWR is used with rebase tokens and there's a negative rebase, principal can be lost
+
+The `OptimisticWithdrawalRecipient` is deployed with a specific token immutably set on the clone. It is presumed that that token will usually be ETH, but it can also be an ERC20 to account for future integrations with tokenized versions of ETH.
+
+In the event that one of these integrations used a rebasing version of ETH (like `stETH`), the architecture would need to be set up as follows:
+
+`OptimisticWithdrawalRecipient => rewards to something like LidoSplit.sol => Split Wallet`
+
+In this case, the OWR would need to be able to handle rebasing tokens.
+
+In the event that rebasing tokens are used, there is the risk that slashing or inactivity leads to a period with a negative rebase. In this case, the following chain of events could happen:
+
+* `distribute(PULL)` is called, setting `fundsPendingWithdrawal == balance`
+* rebasing causes the balance to decrease slightly
+* `distribute(PULL)` is called again, so when `fundsToBeDistributed = balance - fundsPendingWithdrawal` is calculated in an unchecked block, it ends up being near `type(uint256).max`
+* since this is more than `16 ether`, the first `amountOfPrincipalStake - _claimedPrincipalFunds` will be allocated to the principal recipient, and the rest to the reward recipient
+* we check that `endingDistributedFunds <= type(uint128).max`, but unfortunately this check misses the issue, because only `fundsToBeDistributed` underflows, not `endingDistributedFunds`
+* `_claimedPrincipalFunds` is set to `amountOfPrincipalStake`, so all future claims will go to the reward recipient
+* the `pullBalances` for both recipients will be set higher than the balance of the contract, and so will be unusable
+
+In this situation, the only way for the principal to get their funds back would be for the full `amountOfPrincipalStake` to hit the contract at once, and for them to call `withdraw()` before anyone called `distribute(PUSH)`. If anyone was to be able to call `distribute(PUSH)` before them, all principal would be sent to the reward recipient instead.
+
+#### Recommendation
+
+Similar to #74, I would recommend removing the ability for the `OptimisticWithdrawalRecipient` to accept non-ETH tokens.
+
+Otherwise, I would recommend two changes for redundant safety:
+
+1. Do not allow the OWR to be used with rebasing tokens.
+2. Move the `_fundsToBeDistributed = _endingDistributedFunds - _startingDistributedFunds;` out of the unchecked block. The case where `_endingDistributedFunds` underflows is already handled by a later check, so this one change should be sufficient to prevent any risk of this issue.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[L-03] LidoSplit can receive ETH, which will be locked in contract
+
+Each new `LidoSplit` is deployed as a clone, which comes with a `receive()` function for receiving ETH.
+
+However, the only function on `LidoSplit` is `distribute()`, which converts `stETH` to `wstETH` and transfers it to the `splitWallet`.
+
+While this contract should only be used for Lido to pay out rewards (which will come in `stETH`), it seems possible that users may accidentally use the same contract to receive other validator rewards (in ETH), or that Lido governance may introduce ETH payments in the future, which would cause the funds to be locked.
+
+#### Proof of Concept
+
+The following test can be dropped into `LidoSplit.t.sol` to confirm that the clones can currently receive ETH:
+
+```solidity
+function testZach_CanReceiveEth() public {
+ uint before = address(lidoSplit).balance;
+ payable(address(lidoSplit)).transfer(1 ether);
+ assertEq(address(lidoSplit).balance, before + 1 ether);
+}
+```
+
+#### Recommendation
+
+Introduce an additional function to `LidoSplit.sol` which wraps ETH into stETH before calling `distribute()`, in order to rescue any ETH accidentally sent to the contract.
+
+#### Review
+
+Fixed in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87/files) by adding a `rescueFunds()` function that can send ETH or any ERC20 (except `stETH` or `wstETH`) to the `splitWallet`.
+
+### \[L-04] Upgrade to latest version of Solady to fix LibClone bug
+
+In the recent [Solady audit](https://github.com/Vectorized/solady/blob/main/audits/cantina-solady-report.pdf), an issue was found that affects LibClone.
+
+In short, LibClone assumes that the length of the immutable arguments on the clone will fit in 2 bytes. If it's larger, it overlaps other op codes and can lead to strange behaviors, including causing the deployment to fail or causing the deployment to succeed with no resulting bytecode.
+
+Because the `ImmutableSplitControllerFactory` allows the user to input arrays of any length that will be encoded as immutable arguments on the Clone, we can manipulate the length to accomplish these goals.
+
+Fortunately, failed deployments or empty bytecode (which causes a revert when `init()` is called) are not problems in this case, as the transactions will fail, and it can only happen with unrealistically long arrays that would only be used by malicious users.
+
+However, it is difficult to be sure how else this risk might be exploited by using the overflow to jump to later op codes, and it is recommended to update to a newer version of Solady where the issue has been resolved.
+
+#### Proof of Concept
+
+If we comment out the `init()` call in the `createController()` call, we can see that the following test "successfully" deploys the controller, but the result is that there is no bytecode:
+
+```solidity
+function testZach__CreateControllerSoladyBug() public {
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](28672);
+ uint32[] memory bigPercentAllocations = new uint32[](28672);
+
+ for (uint i = 0; i < 28672; i++) {
+ bigAccounts[i] = address(uint160(i));
+ if (i < 32) bigPercentAllocations[i] = 820;
+ else bigPercentAllocations[i] = 34;
+ }
+
+ ImmutableSplitController controller = factory.createController(address(8888), owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+ assert(address(controller) != address(0));
+ assert(address(controller).code.length == 0);
+}
+```
+
+#### Recommendation
+
+Delete Solady and clone it from the most recent commit, or any commit after the fixes from [PR #548](https://github.com/Vectorized/solady/pull/548/files#diff-27a3ba4730de4b778ecba4697ab7dfb9b4f30f9e3666d1e5665b194fe6c9ae45) were merged.
+
+#### Review
+
+Solady has been updated to v.0.0.123 in [PR 88](https://github.com/ObolNetwork/obol-manager-contracts/pull/88).
+
+### \[G-01] stETH and wstETH addresses can be saved on implementation to save gas
+
+The `LidoSplitFactory` contract holds two immutable values for the addresses of the `stETH` and `wstETH` tokens.
+
+When new clones are deployed, these values are encoded as immutable args. This adds the values to the contract code of the clone, so that each time a call is made, they are passed as calldata along to the implementation, which reads the values from the calldata for use.
+
+Since these values will be consistent across all clones on the same chain, it would be more gas efficient to store them in the implementation directly, which can be done with `immutable` storage values, set in the constructor.
+
+This would save 40 bytes of calldata on each call to the clone, which leads to a savings of approximately 640 gas on each call.
+
+#### Recommendation
+
+1. Add the following to `LidoSplit.sol`:
+
+```solidity
+address immutable public stETH;
+address immutable public wstETH;
+```
+
+2. Add a constructor to `LidoSplit.sol` which sets these immutable values (see the sketch after this list). Solidity treats immutable values as constants and stores them directly in the contract bytecode, so they will be accessible from the clones.
+3. Remove `stETH` and `wstETH` from `LidoSplitFactory.sol` as storage values, arguments to the constructor, and arguments to `clone()`.
+4. Adjust the `distribute()` function in `LidoSplit.sol` to read the storage values for these two addresses, and remove the helper functions to read the clone's immutable arguments for these two values.
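+
+A minimal, self-contained sketch of steps 1, 2 and 4 (illustrative only; this is not the code that was merged in PR 87):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.17;
+
+interface IERC20 {
+    function balanceOf(address account) external view returns (uint256);
+}
+
+/// Immutable values are embedded in the implementation's bytecode, so every
+/// clone that delegatecalls into it reads the same addresses without passing
+/// them as clone immutable args (saving calldata on every call).
+contract LidoSplitImplSketch {
+    address public immutable stETH;
+    address public immutable wstETH;
+
+    // Step 2: set the immutables once, on the implementation.
+    constructor(address _stETH, address _wstETH) {
+        stETH = _stETH;
+        wstETH = _wstETH;
+    }
+
+    // Step 4: read the immutables directly instead of clone calldata args.
+    function stETHBalance() public view returns (uint256) {
+        return IERC20(stETH).balanceOf(address(this));
+    }
+}
+```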
+
+#### Review
+
+Fixed as recommended in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87).
+
+### \[G-02] OWR can be simplified and save gas by not tracking distributedFunds
+
+Currently, the `OptimisticWithdrawalRecipient` contract tracks four variables:
+
+* distributedFunds: total amount of the token distributed via push or pull
+* fundsPendingWithdrawal: total balance distributed via pull that haven't been claimed yet
+* claimedPrincipalFunds: total amount of funds claimed by the principal recipient
+* pullBalances: individual pull balances that haven't been claimed yet
+
+When `_distributeFunds()` is called, we perform the following math (simplified to only include relevant updates):
+
+```solidity
+endingDistributedFunds = distributedFunds - fundsPendingWithdrawal + currentBalance;
+fundsToBeDistributed = endingDistributedFunds - distributedFunds;
+distributedFunds = endingDistributedFunds;
+```
+
+As we can see, `distributedFunds` is added to the `endingDistributedFunds` variable and then removed when calculating `fundsToBeDistributed`, having no impact on the resulting `fundsToBeDistributed` value.
+
+The `distributedFunds` variable is not read or used anywhere else on the contract.
+
+#### Recommendation
+
+We can simplify the math and save substantial gas (a storage write plus additional operations) by not tracking this value at all.
+
+This would allow us to calculate `fundsToBeDistributed` directly, as follows:
+
+```solidity
+fundsToBeDistributed = currentBalance - fundsPendingWithdrawal;
+```
+
+#### Review
+
+Fixed as recommended in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85).
+
+### \[I-01] Strong trust assumptions between validators and node operators
+
+It is assumed that validators and node operators will always act in the best interest of the group, rather than in their selfish best interest.
+
+It is important to make clear to users that there are strong trust assumptions between the various parties involved in the DVT.
+
+Here are a select few examples of attacks that a malicious set of node operators could perform:
+
+1. Since there is currently no mechanism for withdrawals besides the consensus of the node operators, a minority of them sufficient to withhold consensus could blackmail the principal for a payment of up to 16 ether in order to allow them to withdraw. Otherwise, they could turn off their nodes and force the principal to bleed down to a final withdrawn balance of just over 16 ether.
+2. Node operators are all able to propose blocks within the P2P network, which are then propagated out to the rest of the network. Node software is accustomed to signing for blocks built by block builders based on metadata including the quantity of fees and the address they'll be sent to. This is enforced by social consensus, with block builders not wanting to harm validators in order to have their blocks accepted in the future. However, node operators in a DVT are not concerned with the social consensus of the network, and could therefore build blocks that include large MEV payments to their personal address (instead of the DVT's 0xSplit), add fictitious metadata to the block header, have their fellow node operators accept the block, and take the MEV for themselves.
+3. While the withdrawal address is immutably set on the beacon chain to the OWR, the fee address is added by the nodes to each block. Any majority of node operators sufficient to reach consensus could create a new 0xSplit with only themselves on it, and use that for all execution layer fees. The principal (and other node operators) would not be able to stop them or withdraw their principal, and would be stuck with staked funds paying fees to the malicious node operators.
+
+Note that there are likely many other possible attacks that malicious node operators could perform. This report is intended to demonstrate some examples of the trust level that is needed between validators and node operators, and to emphasize the importance of making these assumptions clear to users.
+
+#### Review
+
+Acknowledged. We believe EIP 7002 will reduce this trust assumption as it would enable the validator exit via the execution layer withdrawal key.
+
+### \[I-02] Provide node operator checklist to validate setup
+
+There are a number of ways that the user setting up the DVT could plant backdoors to harm the other users involved in the DVT.
+
+Each of these risks is possible to check before signing off on the setup, but some are rather hidden, so it would be useful for the protocol to provide a list of checks that node operators should do before signing off on the setup parameters (or, even better, provide these checks for them through the front end).
+
+1. Confirm that `SplitsMain.getHash(split)` matches the hash of the parameters that the user is expecting to be used.
+2. Confirm that the controller clone delegates to the correct implementation. If not, it could be pointed to delegate to `SplitMain` and then called to `transferControl()` to a user's own address, allowing them to update the split arbitrarily.
+3. `OptimisticWithdrawalRecipient.getTranches()` should be called to check that `amountOfPrincipalStake` is equal to the amount that they will actually be providing.
+4. The controller's `owner` and future split including Obol fees should be provided to the user. They should be able to check that `ImmutableSplitControllerFactory.predictSplitControllerAddress()`, with those parameters inputted, results in the controller that is actually listed on `SplitsMain.getController(split)`.
+
+#### Review
+
+Acknowledged. We do some of these already (will add the remainder) automatically in the launchpad UI during the cluster confirmation phase by the node operator. We will also add it in markdown to the repo.
diff --git a/docs/versioned_docs/version-v0.19.0/sec/threat_model.md b/docs/versioned_docs/version-v0.19.0/sec/threat_model.md
new file mode 100644
index 0000000000..fbca3c7ce8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.0/sec/threat_model.md
@@ -0,0 +1,155 @@
+---
+sidebar_position: 6
+description: Threat model for a Distributed Validator
+---
+
+# Charon threat model
+
+This page outlines a threat model for Charon, in the context of it being a Distributed Validator middleware for Ethereum validator clients.
+
+## Actors
+
+- Node owner (NO)
+- Cluster node operators (CNO)
+- Rogue node operator (RNO)
+- Outside attacker (OA)
+
+## General observations
+
+This page describes some considerations the Obol core team made about the security of a distributed validator in the context of its deployment and interaction with outside actors.
+
+The goal of this threat model is to provide transparency, but it is by no means a comprehensive audit or complete security reference. It’s a sharing of the experiences and thoughts we gained during the last few years building distributed validator technologies.
+
+To the Beacon Chain, a distributed validator appears much the same as a regular validator, and thus retains some of the same security considerations; however, Charon's threat model differs from a validator client's threat model because of its general design.
+
+While a validator client owns and operates on a set of validator private keys, Charon's design means its node operators rarely (if ever) see the complete validator private keys, relying instead on modern cryptography to generate partial private key shares.
+
+An Ethereum distributed validator employs advanced signature primitives such that no operator ever handles the full validator private key in any standard lifecycle step: the [BLS digital signature scheme](https://en.wikipedia.org/wiki/BLS_digital_signature) employed by the Ethereum network allows distributed validators to individually sign a blob of data and then aggregate the resulting signatures in a transparent manner, never requiring any of the participating parties to know the full private key to do so.
+
+If the number of available Charon nodes falls below a given threshold, the cluster is not able to continue with its duties.
+
+Given the collaborative nature of a Distributed Validator cluster, every operator must prioritize the liveness and well-being of the cluster. Charon, at the time of writing, cannot reward or penalize operators within a cluster independently.
+
+This implies that Charon’s threat model can’t quite be equated to that of a single validator client, since they work on a different - albeit similar - set of security concepts.
+
+## Identity private key
+
+A distributed validator cluster is made up of a number of nodes, often run by a number of independent operators. Each DV cluster has a set of Ethereum validator private keys on whose behalf it validates.
+
+Alongside those, each node (henceforth 'operator') holds a secp256k1 identity private key, used to produce its ENR (Ethereum Node Record), which identifies their node to the other cluster operators' nodes.
+
+Exfiltration of this identity private key could allow an outside attacker to impersonate the node, possibly leading to intra-cluster peering issues, eclipse attack risks, and degraded validator performance.
+
+Charon client communication is handled via BFT consensus, which is able to tolerate a given number of misbehaving nodes up to a certain threshold: once this threshold is reached, the cluster is not able to continue with its lifecycle and loses liveness guarantees (the validator goes offline). If more than two-thirds of nodes in a cluster are malicious, a cluster also loses safety guarantees (enough bad actors could collude to come to consensus on something slashable).
+
+Identity private key theft and the subsequent execution of a rogue cluster node is equivalent in the context of BFT consensus to a misbehaving node, hence the cluster can survive and continue with its duties up to what’s specified by the cluster’s BFT protocol’s parameters.
+
+The likelihood of this happening is low: an OA with enough knowledge of the topology of the operator’s network must steal `fault tolerance of the cluster + 1` identity private keys and run Charon nodes to subvert the distributed validator BFT consensus to push the validator offline.
+
+## Ethereum validator private key access
+
+A distributed validator cluster executes Ethereum validator duties by acting as a middleman between the beacon chain and a validator client.
+
+To do so, the cluster must have knowledge of the Ethereum validator’s private key.
+
+The design and implementation of Charon minimizes the chances of this by splitting the Ethereum validator private keys into parts, which are then assigned to each node operator.
+A [distributed key generation](https://en.wikipedia.org/wiki/Distributed_key_generation) (DKG) process is used in order to evenly and safely create the private key shares without any central party having access to the full private key.
+
+The cryptography primitives employed in Charon allow a threshold of the node operators' private key shares to be reconstructed into the whole validator private key if needed.
+
+While the facilities to do this are present in the form of CLI commands, as stated before, Charon never reconstructs the key in normal operations, since the BLS digital signature scheme allows for signature aggregation.
+
+A distributed validator cluster can be started in two ways:
+
+1. An existing Ethereum validator private key is split by the private key holder, and distributed in a trusted manner among the operators.
+2. The operators participate in a distributed key generation (DKG) process, to create private key shares that collectively can be used to sign validation duties as an Ethereum distributed validator. The full private key for the cluster never exists in one place during or after the DKG.
+
+In case 1, one of the node operators, K, has direct access to the Ethereum validator key and is tasked with generating the other operators' identity keys and key shares.
+
+It is clear that in this case the entirety of the sensitive material set is as secure as K’s environment; if K is compromised or malicious, the distributed validator could be slashed.
+
+Case 2 is different, because there’s no pre-existing Ethereum validator key in a single operator's hands: it will be generated using the FROST DKG algorithm.
+
+Assuming a successful DKG process, each operator will only ever handle its own key shares instead of the full Ethereum validator private key.
+
+A set of rogue operators composed of enough members to reconstruct the original Ethereum private keys might pose the risk of slashing for a distributed validator by colluding to produce slashable messages together.
+
+We deem this scenario's likelihood as low, as it would mean that node operators willfully chose to slash the very stake they are being rewarded for staking.
+
+Still, in the context of an outside attack, purposefully slashing a validator would mean stealing multiple operator key shares, which in turn means violating many cluster operator’s security almost at the same time. This scenario may occur if there is a 0-day vulnerability in a piece of software they all run or in case of node misconfiguration.
+
+## Rogue node operator
+
+Nodes are connected by means of either relay nodes, or directly to one another.
+
+Each node operator is at risk of being impeded by other nodes or by the relay operator in the execution of their duties.
+
+Nodes need to expose a set of TCP ports to be able to work, and the mere fact of doing that opens up the opportunity for rogue parties to execute DDoS attacks.
+
+Another attack surface for the cluster exists in rogue nodes purposefully filling the various internal state databases with meaningless data, or more generally submitting bogus information to the other parties to slow down processing or, in the case of a Sybil attack, bring the cluster to a halt.
+
+The likelihood of this scenario is medium, because no active intrusion is required: a rogue node operator does not need to penetrate and compromise other nodes to disturb the cluster's lifecycle.
+
+## Outside attackers interfering with a cluster
+
+There are two levels of sophistication in an OA:
+
+1. No knowledge of the topology of the cluster: the attacker doesn't know where each cluster node is located, and so can't force `fault tolerance + 1` nodes offline if it can't find them.
+2. Knowledge of the topology of the network (or part of it): the OA can mount DDoS attacks or try breaking into nodes' servers - at that point, the "rogue node operator" scenario applies.
+
+The likelihood of this scenario is low: an OA needs extensive capabilities and sufficient incentive to be able to carry out an attack of this size.
+
+An outside attacker could also find and use vulnerabilities in the underlying cryptosystems and cryptography libraries used by Charon and other Ethereum clients. Forging signatures that fool Charon's cryptographic library or other dependencies may be feasible, but we deem forging signatures or otherwise finding a vulnerability in either the secp256k1+ECDSA or BLS12-381+BLS cryptosystems to be a low-likelihood risk.
+
+## Malicious beacon nodes
+
+A malicious beacon node (BN) could prevent the distributed validator from operating its validation duties, and could plausibly increase the likelihood of slashing by serving charon illegitimate information.
+
+If the number of nodes configured with the malicious BN reaches the Byzantine threshold of the Charon BFT consensus protocol, the validation process can halt; and if most of the nodes are Byzantine, the system could even reach consensus on a set of data that isn't valid.
+
+We deem the likelihood of this scenario to be medium, depending on the trust model associated with the beacon node deployment (cloud, self-hosted, SaaS product): node operators should always host, or at least trust, their own beacon nodes.
+
+## Malicious charon relays
+
+A Charon relay is used as a communication bridge between nodes that aren’t directly exposed on the Internet. It also acts as the peer discovery mechanism for a cluster.
+
+Once a peer’s IP address has been discovered via the relay, a direct connection can be attempted. Nodes can either communicate by exchanging data through a relay, or by using the relay as a means to establish a direct TCP connection to one another.
+
+A malicious relay owned by an OA could lead to:
+
+- Network topology discovery, facilitating the "outside attackers interfering with a cluster" scenario
+- Impeding node communication, potentially impacting the BFT consensus protocol liveness (not security) and distributed validator duties
+- DKG process disruption, leading to frustration and potential abandonment by node operators: this could push them towards a standard Ethereum validator setup, which implies weaker security overall
+
+We note that BFT consensus liveness disruption can only happen if the number of nodes using the malicious relay for communication is equal to the number of Byzantine nodes defined in the consensus parameters.
+
+This risk can be mitigated by configuring nodes with multiple relay URLs from only [trusted entities](../int/quickstart/advanced/self-relay.md).
+
+The likelihood of this scenario is medium: Charon nodes are configured with a default set of relay nodes, so if an OA were to compromise those, it would lead to many cluster topologies getting discovered and potentially attacked and disrupted.
+
+## Compromised runtime files
+
+Charon operates with two runtime files:
+
+- A lock file, used to address operators' nodes and to define the Ethereum validator public keys and the public key shares associated with them
+- A cluster definition file, used to define the operators' addresses and identities during the DKG process
+
+The lock file is signed and validated by all the nodes participating in the cluster: assuming good security practices on the node operator side, and no bugs in Charon or its dependencies’ implementations, this scenario is unlikely.
+
+If one or more node operators are using less-than-ideal security practices, an OA could rewire the Charon CLI flags to include the `--no-verify` flag, which disables lock file signature and hash verification (usually intended only for development purposes).
+
+By doing that, the OA can edit the lock file as it sees fit, leading to the “rogue node operator” scenario. An OA or RNO might also manage to social engineer their way into convincing other operators into running their malicious lock file with verification disabled.
+
+The likelihood of this scenario is low: an OA would need to compromise every node operator through social engineering to both use a different set of files, and to run its cluster with `--no-verify`.
+
+## Conclusions
+
+Distributed Validator Technology (DVT) helps maintain a high-assurance environment for Ethereum validators by leveraging modern cryptography to ensure no single point of failure is easily found in the system.
+
+As with any computing system, security considerations are to be expected in order to keep the environment safe.
+
+From the point of view of an Ethereum validator entity, running their services with a DV client can help greatly with availability, minimizing slashing risks, and maximizing participation in the network.
+
+On the other hand, one must take into consideration the risks involved with dishonest cluster operators, as well as rogue third-party beacon nodes or relay providers.
+
+In the end, we believe the benefits of DVT greatly outweigh the potential threats described in this overview.
diff --git a/docs/versioned_docs/version-v0.19.1/README.md b/docs/versioned_docs/version-v0.19.1/README.md
new file mode 100644
index 0000000000..21c115c204
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/README.md
@@ -0,0 +1,2 @@
+# version-v0.19.1
+
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/README.md b/docs/versioned_docs/version-v0.19.1/advanced/README.md
new file mode 100644
index 0000000000..965416d689
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/README.md
@@ -0,0 +1,2 @@
+# advanced
+
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/adv-docker-configs.md b/docs/versioned_docs/version-v0.19.1/advanced/adv-docker-configs.md
new file mode 100644
index 0000000000..d14de53e8b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/adv-docker-configs.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 8
+description: Use advanced docker-compose features to have more flexibility and power to change the default configuration.
+---
+
+# Advanced Docker Configs
+
+:::info
+This section is intended for *docker power users*, i.e., for those who are familiar with working with `docker-compose` and want to have more flexibility and power to change the default configuration.
+:::
+
+We use the "Multiple Compose File" feature, which provides a very powerful way to override any configuration in `docker-compose.yml` without needing to modify git-checked-in files, since modifying those results in conflicts when upgrading this repo.
+See [this](https://docs.docker.com/compose/extends/#multiple-compose-files) for more details.
+
+There are some additional compose files in [this repository](https://github.com/ObolNetwork/charon-distributed-validator-node/), `compose-debug.yml` and `docker-compose.override.yml.sample`, along with the default `docker-compose.yml` file, that you can use for this purpose.
+
+- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To use it, you can run:
+
+```
+docker compose -f docker-compose.yml -f compose-debug.yml up
+```
+
+- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
+
+- To use it, just copy the sample file to `docker-compose.override.yml` and customise it to your liking. Please create this file ONLY when you want to tweak something. This is because the default override file is empty and docker errors if you provide an empty compose file.
+
+```
+cp docker-compose.override.yml.sample docker-compose.override.yml
+
+# Tweak docker-compose.override.yml and then run docker compose up
+docker compose up
+```
+
+- You can also run all these compose files together. This is desirable when you want to use both features. For example, you may want to have some debugging containers AND also want to override some defaults. To achieve this, you can run:
+
+```
+docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
+```
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/monitoring.md b/docs/versioned_docs/version-v0.19.1/advanced/monitoring.md
new file mode 100644
index 0000000000..fdbec169b9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/monitoring.md
@@ -0,0 +1,100 @@
+---
+sidebar_position: 4
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+# Getting Started Monitoring your Node
+
+Welcome to this comprehensive guide, designed to assist you in effectively monitoring your Charon cluster and nodes, and setting up alerts based on specified parameters.
+
+## Pre-requisites
+
+Ensure the following software is installed:
+
+- Docker: Find the installation guide for Ubuntu **[here](https://docs.docker.com/engine/install/ubuntu/)**
+- Prometheus: You can install it using the guide available **[here](https://prometheus.io/docs/prometheus/latest/installation/)**
+- Grafana: Follow this **[link](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)** to install Grafana
+
+## Import Pre-Configured Charon Dashboards
+
+- Navigate to the **[repository](https://github.com/ObolNetwork/monitoring/tree/main/dashboards)** that contains a variety of Grafana dashboards. For this demonstration, we will utilize the Charon Dashboard json.
+
+- In your Grafana interface, create a new dashboard and select the import option.
+
+- Copy the content of the Charon Dashboard json from the repository and paste it into the import box in Grafana. Click "Load" to proceed.
+
+- Finalize the import by clicking on the "Import" button. At this point, your dashboard should begin displaying metrics. Ensure your Charon client and Prometheus are operational for this to occur.
+
+## Example Alerting Rules
+
+To create alerts for Node-Exporter, follow these steps based on the sample rules provided on the "Awesome Prometheus alerts" page:
+
+1. Visit the **[Awesome Prometheus alerts](https://samber.github.io/awesome-prometheus-alerts/rules.html#host-and-hardware)** page. Here, you will find lists of Prometheus alerting rules categorized by hardware, system, and services.
+
+2. Depending on your need, select the category of alerts. For example, if you want to set up alerts for your system's CPU usage, click on the 'CPU' under the 'Host & Hardware' category.
+
+3. On the selected page, you'll find specific alert rules like 'High CPU Usage'. Each rule will provide the PromQL expression, alert name, and a brief description of what the alert does. You can copy these rules.
+
+4. Paste the copied rules into an alerting rules file that your Prometheus configuration loads via the `rule_files` section (see the example after this list). Make sure you understand each rule before adding it to avoid unnecessary alerts.
+
+5. Finally, save and apply the configuration file. Prometheus should now trigger alerts based on these rules.
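+
+For illustration, a minimal rule file in that style might look like the following (the file name `alert-rules.yml` and the 80% / 10m threshold are arbitrary choices for this example):
+
+```yaml
+# alert-rules.yml -- referenced from prometheus.yml via:
+#   rule_files:
+#     - "alert-rules.yml"
+groups:
+  - name: host-alerts
+    rules:
+      - alert: HostHighCpuLoad
+        # Average CPU utilisation across all cores above 80% for 10 minutes.
+        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) > 80
+        for: 10m
+        labels:
+          severity: warning
+        annotations:
+          summary: "High CPU load on {{ $labels.instance }}"
+```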
+
+
+For alerts specific to Charon/Alpha, refer to the alerting rules available in the [ObolNetwork/monitoring](https://github.com/ObolNetwork/monitoring/tree/main/alerting-rules) repository.
+
+## Understanding Alert Rules
+
+1. `ClusterBeaconNodeDown`: This alert is activated when the beacon node in a specified Alpha cluster is offline. The beacon node is crucial for validating transactions and producing new blocks. Its unavailability could disrupt the overall functionality of the cluster.
+2. `ClusterBeaconNodeSyncing`: This alert indicates that the beacon node in a specified Alpha cluster is synchronizing, i.e., catching up with the latest blocks in the cluster.
+3. `ClusterNodeDown`: This alert is activated when a node in a specified Alpha cluster is offline.
+4. `ClusterMissedAttestations`: This alert indicates that there have been missed attestations in a specified Alpha cluster. Missed attestations may suggest that validators are not operating correctly, compromising the security and efficiency of the cluster.
+5. `ClusterInUnknownStatus`: This alert is designed to activate when a node within the cluster is detected to be in an unknown state. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` metric is 0.
+6. `ClusterInsufficientPeers`: This alert is set to activate when the number of peers for a node in the Alpha M1 Cluster #1 is insufficient. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` metric equals 4.
+7. `ClusterFailureRate`: This alert is activated when the failure rate of the Alpha M1 Cluster #1 exceeds a certain threshold.
+8. `ClusterVCMissingValidators`: This alert is activated if any validators in the Alpha M1 Cluster #1 are missing.
+9. `ClusterHighPctFailedSyncMsgDuty`: This alert is activated if a high percentage of sync message duties failed in the cluster. The alert is activated if the sum of the increase in failed duties tagged with "sync_message" in the last hour, divided by the sum of the increase in total duties tagged with "sync_message" in the last hour, is greater than 0.1.
+10. `ClusterNumConnectedRelays`: This alert is activated if the number of connected relays in the cluster falls to 0.
+11. `PeerPingLatency`: This alert is activated if the 90th percentile of the ping latency to the peers in a cluster exceeds 500ms within 2 minutes.
+
+## Best Practices for Monitoring Charon Nodes & Cluster
+
+- **Establish Baselines**: Familiarize yourself with the normal operation metrics like CPU, memory, and network usage. This will help you detect anomalies.
+- **Define Key Metrics**: Set up alerts for essential metrics, encompassing both system-level and Charon-specific ones.
+- **Configure Alerts**: Based on these metrics, set up actionable alerts.
+- **Monitor Network**: Regularly assess the connectivity between nodes and the network.
+- **Perform Regular Health Checks**: Consistently evaluate the status of your nodes and clusters.
+- **Monitor System Logs**: Keep an eye on logs for error messages or unusual activities.
+- **Assess Resource Usage**: Ensure your nodes are neither over- nor under-utilized.
+- **Automate Monitoring**: Use automation to ensure no issues go undetected.
+- **Conduct Drills**: Regularly simulate failure scenarios to fine-tune your setup.
+- **Update Regularly**: Keep your nodes and clusters updated with the latest software versions.
+
+## Third-Party Services for Uptime Testing
+
+- [updown.io](https://updown.io/)
+- [Grafana synthetic Monitoring](https://grafana.com/grafana/plugins/grafana-synthetic-monitoring-app/)
+
+## Key metrics to watch to verify node health based on jobs
+
+- Node Exporter:
+  - **CPU Usage**: High or spiking CPU usage can be a sign of a process demanding more resources than it should.
+  - **Memory Usage**: If a node is consistently running out of memory, it could be due to a memory leak or simply under-provisioning.
+  - **Disk I/O**: Slow disk operations can cause applications to hang or delay responses. High disk I/O can indicate storage performance issues or be a sign of high load on the system.
+  - **Network Usage**: High network traffic or packet loss can signal network configuration issues, or that a service is being overwhelmed by requests.
+  - **Disk Space**: Running out of disk space can lead to application errors and data loss.
+  - **Uptime**: The amount of time a system has been up without any restarts. Frequent restarts can indicate instability in the system.
+  - **Error Rates**: The number of errors encountered by your application. This could be 4xx/5xx HTTP errors, exceptions, or any other kind of error your application may log.
+  - **Latency**: The delay before a transfer of data begins following an instruction for its transfer.
+
+It is also important to check:
+
+- NTP clock skew.
+- Process restarts and failures (e.g. through `node_systemd`).
+- Alert on high error and panic log counts.
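+
+For example, a minimal sketch of a spot check for NTP clock skew using node exporter's `node_timex_offset_seconds` metric, assuming node-exporter's port `9100` is reachable from where you run the command:
+
+```shell
+# An offset far from 0 (e.g. more than a few hundred milliseconds) suggests clock skew worth investigating
+curl -s http://localhost:9100/metrics | grep '^node_timex_offset_seconds'
+```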
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/obol-monitoring.md b/docs/versioned_docs/version-v0.19.1/advanced/obol-monitoring.md
new file mode 100644
index 0000000000..8d9e0ceca1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/obol-monitoring.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 5
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+
+# Push Metrics to Obol Monitoring
+
+:::info
+This is **optional** and does not confer any special privileges within the Obol Network.
+:::
+
+You may have been provided with **Monitoring Credentials** used to push distributed validator metrics to Obol's central Prometheus cluster to monitor, analyze, and improve your Distributed Validator Cluster's performance.
+
+The provided credentials need to be added to `prometheus/prometheus.yml`, replacing `$PROM_REMOTE_WRITE_TOKEN`, and will look something like:
+```
+obol20!tnt8U!C...
+```
+
+The updated `prometheus/prometheus.yml` file should look like:
+```
+global:
+ scrape_interval: 30s # Set the scrape interval to every 30 seconds.
+ evaluation_interval: 30s # Evaluate rules every 30 seconds.
+
+remote_write:
+ - url: https://vm.monitoring.gcp.obol.tech/write
+ authorization:
+ credentials: obol20!tnt8U!C...
+
+scrape_configs:
+ - job_name: 'charon'
+ static_configs:
+ - targets: ['charon:3620']
+ - job_name: "lodestar"
+ static_configs:
+ - targets: [ "lodestar:5064" ]
+ - job_name: 'node-exporter'
+ static_configs:
+ - targets: ['node-exporter:9100']
+```
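+
+After saving the updated file, recreate the Prometheus container so it picks up the new configuration; a minimal sketch, assuming you run the standard `charon-distributed-validator-node` compose setup with a service named `prometheus`:
+
+```shell
+# Recreate only the prometheus service with the new prometheus.yml
+docker compose up -d --force-recreate prometheus
+```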
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/quickstart-builder-api.md b/docs/versioned_docs/version-v0.19.1/advanced/quickstart-builder-api.md
new file mode 100644
index 0000000000..569062ca65
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/quickstart-builder-api.md
@@ -0,0 +1,163 @@
+---
+sidebar_position: 2
+description: Run a distributed validator cluster with the builder API (MEV-Boost)
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Enable MEV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+This quickstart guide focuses on configuring the builder API for Charon and supported validator and consensus clients.
+
+## Getting started with Charon & the Builder API
+
+Running a distributed validator cluster with the builder API enabled will give the validators in the cluster access to the builder network. This builder network is a network of "Block Builders"
+who work with MEV searchers to produce the most valuable blocks a validator can propose.
+
+[MEV-Boost](https://boost.flashbots.net/) is one such product from flashbots that enables you to ask multiple
+block relays (who communicate with the "Block Builders") for blocks to propose. The block that pays the largest reward to the validator will be signed and returned to the relay for broadcasting to the wider
+network. The end result for the validator is generally an increased APR as they receive some share of the MEV.
+
+:::info
+Before completing this guide, please check your cluster version, which can be found inside the `cluster-lock.json` file. If you are using cluster-lock version `1.7.0` or a later release, Obol seamlessly accommodates all validator client implementations within an MEV-enabled distributed validator cluster.
+
+For clusters with a cluster-lock version `1.6.0` and below, charon is compatible only with [Teku](https://github.com/ConsenSys/teku). Use the version history feature of this documentation to see the instructions for configuring a cluster in that manner (`v0.16.0`).
+:::
+
+## Client configuration
+
+:::note
+You need to add CLI flags to your consensus client, charon client, and validator client, to enable the builder API.
+
+You need all operators in the cluster to have their nodes properly configured to use the builder API, or you risk missing a proposal.
+:::
+
+### Charon
+
+Charon supports the builder API via the `--builder-api` flag. To use the builder API, simply add this flag to the `charon run` command:
+
+```
+charon run --builder-api
+```
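+
+If you run charon via Docker Compose, the same setting can be applied through charon's environment variable convention instead of editing the command; a minimal sketch, assuming your compose setup passes `CHARON_*` variables from a `.env` file to the charon service (as the `charon-distributed-validator-node` repository does):
+
+```shell
+# .env (assumption: consumed by docker compose and passed to the charon container)
+CHARON_BUILDER_API=true
+```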
+
+### Consensus Clients
+
+The following flags need to be configured on your chosen consensus client. A Flashbots relay URL is provided for example purposes; you should choose a relay that suits your preferences from [this list](https://github.com/eth-educators/ethstaker-guides/blob/main/MEV-relay-list.md#mev-relay-list-for-mainnet).
+
+
+
+ Teku can communicate with a single relay directly:
+
+
+ {String.raw`--builder-endpoint="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`--builder-endpoint=http://mev-boost:18550`}
+
+
+
+
+ Lighthouse can communicate with a single relay directly:
+
+
+ {String.raw`lighthouse bn --builder "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`lighthouse bn --builder "http://mev-boost:18550"`}
+
+
+
+
+
+
+ {String.raw`prysm beacon-chain --http-mev-relay "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
+
+
+ {String.raw`--payload-builder=true --payload-builder-url="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ You should also consider adding --local-block-value-boost 3
as a flag, to favour locally built blocks if they are withing 3% in value of the relay block, to improve the chances of a successful proposal.
+
+
+
+
+ {String.raw`--builder --builder.urls "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
+
+### Validator Clients
+
+The following flags need to be configured on your chosen validator client:
+
+
+
+
+
+ {String.raw`teku validator-client --validators-builder-registration-default-enabled=true`}
+
+
+
+
+
+
+
+ {String.raw`lighthouse vc --builder-proposals`}
+
+
+
+
+
+
+ {String.raw`prysm validator --enable-builder`}
+
+
+
+
+
+
+ {String.raw`--payload-builder=true`}
+
+
+
+
+
+
+ {String.raw`--builder="true" --builder.selection="builderonly"`}
+
+
+
+
+
+## Verify your cluster is correctly configured
+
+It can be difficult to confirm everything is configured correctly with your cluster until a proposal opportunity arrives, but here are some things you can check.
+
+When your cluster is running, check whether charon logs something like this each epoch:
+```
+13:10:47.094 INFO bcast Successfully submitted validator registration to beacon node {"delay": "24913h10m12.094667699s", "pubkey": "84b_713", "duty": "1/builder_registration"}
+```
+
+This indicates that your charon node is successfully registering with the relay for a blinded block when the time comes.
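+
+To spot these registrations quickly, you can grep your charon container's logs; a minimal sketch, assuming the default `charon-distributed-validator-node` compose setup where the charon container is named `charon-distributed-validator-node-charon-1` (adjust the name to your setup):
+
+```shell
+# Show builder registration submissions logged by charon
+docker logs charon-distributed-validator-node-charon-1 2>&1 | grep builder_registration
+```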
+
+If you are using the [ultrasound relay](https://relay.ultrasound.money), you can enter your cluster's distributed validator public key(s) into their website, to confirm they also see the validator as correctly registered.
+
+You should check that your validator client's logs look healthy, and ensure that you haven't added a `fee-recipient` address that conflicts with what has been selected by your cluster in your cluster-lock file, as that may prevent your validator from producing a signature for the block when the opportunity arises. You should also confirm the same for all of the other peers in your cluster.
+
+Once a proposal has been made, you should look at the `Block Extra Data` field under `Execution Payload` for the block on [Beaconcha.in](https://beaconcha.in/block/18450364), and confirm there is text present; this generally suggests the block came from a builder rather than being a locally constructed block.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/quickstart-combine.md b/docs/versioned_docs/version-v0.19.1/advanced/quickstart-combine.md
new file mode 100644
index 0000000000..c02d654d84
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/quickstart-combine.md
@@ -0,0 +1,112 @@
+---
+sidebar_position: 9
+description: Combine distributed validator private key shares to recover the validator private key.
+---
+
+# Combine DV private key shares
+
+:::warning
+Reconstituting Distributed Validator private key shares into a standard validator private key is a security risk, and can potentially cause your validator to be slashed.
+
+Only combine private keys as a last resort and do so with extreme caution.
+:::
+
+Combine distributed validator private key shares into an Ethereum validator private key.
+
+## Pre-requisites
+
+- Ensure you have the `.charon` directories of at least a threshold of the cluster's node operators.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Set up the key combination directory tree
+
+Rename each cluster node operator's `.charon` directory to a unique name to avoid folder name conflicts.
+
+We suggest naming them clearly and distinctly, to avoid confusion.
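+
+For example, a minimal sketch of assembling the tree, assuming you have collected the operators' `.charon` directories at hypothetical paths such as `/tmp/op0/.charon`:
+
+```shell
+mkdir cluster
+# Copy each operator's .charon directory into the parent folder under a unique name
+cp -r /tmp/op0/.charon cluster/node0
+cp -r /tmp/op1/.charon cluster/node1
+cp -r /tmp/op2/.charon cluster/node2
+```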
+
+At the end of this process, you should have a tree like this:
+
+```shell
+$ tree ./cluster
+
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+...
+└── node*
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+:::warning
+Make sure to never mix the various `.charon` directories with one another.
+
+Doing so can potentially cause the combination process to fail.
+:::
+
+## Step 2. Combine the key shares
+
+Run the following command:
+
+```sh
+# Combine a cluster's private keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.1 combine --cluster-dir /opt/charon/cluster --output-dir /opt/charon/combined
+```
+
+This command will store the combined keys in the `output-dir`, in this case a folder named `combined`.
+
+```shell
+$ tree combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+
+You can cross-check the combined keys against the distributed validator public keys listed in the lock file:
+
+```shell
+$ jq .distributed_validators[].distributed_public_key cluster/node0/cluster-lock.json
+"0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd"
+"0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106"
+```
+
+:::info
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+Ensure your distributed validator cluster is completely shut down before starting a replacement validator or you are likely to be slashed.
+:::
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/quickstart-sdk.md b/docs/versioned_docs/version-v0.19.1/advanced/quickstart-sdk.md
new file mode 100644
index 0000000000..96213cbc65
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/quickstart-sdk.md
@@ -0,0 +1,133 @@
+---
+sidebar_position: 1
+description: Create a DV cluster using the Obol Typescript SDK
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Create a DV using the SDK
+
+:::warning
+The Obol-SDK is in a beta state and should be used with caution on testnets only.
+:::
+
+This is a walkthrough of using the [Obol-SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk) to propose a four-node distributed validator cluster for creation using the [DV Launchpad](../dvl/intro.md).
+
+## Pre-requisites
+
+- You have [node.js](https://nodejs.org/en) installed.
+
+## Install the package
+
+Install the Obol-SDK package into your development environment
+
+
+
+
+ npm install --save @obolnetwork/obol-sdk
+
+
+
+
+ yarn add @obolnetwork/obol-sdk
+
+
+
+
+## Instantiate the client
+
+The first thing you need to do is create an instance of the Obol SDK client. The client takes two constructor parameters:
+
+- The `chainID` for the chain you intend to use.
+- An ethers.js [signer](https://docs.ethers.org/v6/api/providers/#Signer-signTypedData) object.
+
+```ts
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// Create a dummy ethers signer object with a throwaway private key
+const mnemonic = ethers.Wallet.createRandom().mnemonic?.phrase || "";
+const privateKey = ethers.Wallet.fromPhrase(mnemonic).privateKey;
+const wallet = new ethers.Wallet(privateKey);
+const signer = wallet.connect(null);
+
+// Instantiate the Obol Client for goerli
+const obol = new Client({ chainId: 5 }, signer);
+```
+
+## Propose the cluster
+
+List the Ethereum addresses of participating operators, along with withdrawal and fee recipient address data for each validator you intend for the operators to create.
+
+```ts
+// A config hash is a deterministic hash of the proposed DV cluster configuration
+const configHash = await obol.createClusterDefinition({
+ name: "SDK Demo Cluster",
+ operators: [
+ { address: "0xC35CfCd67b9C27345a54EDEcC1033F2284148c81" },
+ { address: "0x33807D6F1DCe44b9C599fFE03640762A6F08C496" },
+ { address: "0xc6e76F72Ea672FAe05C357157CfC37720F0aF26f" },
+ { address: "0x86B8145c98e5BD25BA722645b15eD65f024a87EC" },
+ ],
+ validators: [
+ {
+ fee_recipient_address: "0x3CD4958e76C317abcEA19faDd076348808424F99",
+ withdrawal_address: "0xE0C5ceA4D3869F156717C66E188Ae81C80914a6e",
+ },
+ ],
+});
+
+console.log(
+ `Direct the operators to https://goerli.launchpad.obol.tech/dv?configHash=${configHash} to complete the key generation process`
+);
+```
+
+## Invite the Operators to complete the DKG
+
+Once the Obol-API returns a `configHash` string from the `createClusterDefinition` method, you can use this identifier to invite the operators to the [Launchpad](../dvl/intro.md) to complete the process.
+
+1. Operators navigate to `https://<network>.launchpad.obol.tech/dv?configHash=<configHash>` and complete the [run a DV with others](../start/quickstart_group.md) flow.
+1. Once the DKG is complete, if the operators used the `--publish` flag, the created cluster details will be posted to the Obol API.
+1. The creator will be able to retrieve this data with `obol.getClusterLock(configHash)`, to use for activating the newly created validator.
+
+## Retrieve the created Distributed Validators using the SDK
+
+Once the DKG is complete, the proposer of the cluster can retrieve key data such as the validator public keys and their associated deposit data messages.
+
+```js
+const clusterLock = await obol.getClusterLock(configHash);
+```
+
+Reference lock files can be found [here](https://github.com/ObolNetwork/charon/tree/main/cluster/testdata).
+
+## Activate the DVs using the deposit contract
+
+In order to activate the distributed validators, the cluster operator can retrieve the validators' associated deposit data from the lock file and use it to craft transactions to the `deposit()` method on the deposit contract.
+
+```js
+const validatorDepositData =
+ clusterLock.distributed_validators[validatorIndex].deposit_data;
+
+const depositContract = new ethers.Contract(
+ DEPOSIT_CONTRACT_ADDRESS, // 0x00000000219ab540356cBB839Cbe05303d7705Fa for Mainnet, 0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b for Goerli
+ depositContractABI, // https://etherscan.io/address/0x00000000219ab540356cBB839Cbe05303d7705Fa#code for Mainnet, and replace the address for Goerli
+ signer
+);
+
+const TX_VALUE = ethers.parseEther("32");
+
+const tx = await depositContract.deposit(
+ validatorDepositData.pubkey,
+ validatorDepositData.withdrawal_credentials,
+ validatorDepositData.signature,
+ validatorDepositData.deposit_data_root,
+ { value: TX_VALUE }
+);
+
+const txResult = await tx.wait();
+```
+
+## Usage Examples
+
+Examples of how our SDK can be used are found [here](https://github.com/ObolNetwork/obol-sdk-examples).
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/quickstart-split.md b/docs/versioned_docs/version-v0.19.1/advanced/quickstart-split.md
new file mode 100644
index 0000000000..3b0d530f3a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/quickstart-split.md
@@ -0,0 +1,93 @@
+---
+sidebar_position: 3
+description: Split existing validator keys
+---
+
+# Split existing validator private keys
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+
+This process should only be used if you want to split an *existing validator private key* into multiple private key shares for use in a Distributed Validator Cluster. If your existing validator is not properly shut down before the Distributed Validator starts, your validator may be slashed.
+
+If you are starting a new validator, you should follow a [quickstart guide](../start/quickstart_overview.md) instead.
+
+If you use MEV-Boost, make sure you turn off your MEV-Boost service while splitting the keys, otherwise you may hit [this issue](https://github.com/ObolNetwork/charon/issues/2770).
+:::
+
+Split an existing Ethereum validator key into multiple key shares for use in an [Obol Distributed Validator Cluster](../int/key-concepts.md#distributed-validator-cluster).
+
+
+## Pre-requisites
+
+- Ensure you have the existing validator keystores (the ones to split) and passwords.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Clone the charon repo and copy existing keystore files
+
+Clone the [charon](https://github.com/ObolNetwork/charon) repo.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon.git
+
+ # Change directory
+ cd charon/
+
+ # Create a folder within this checked out repo
+ mkdir split_keys
+ ```
+
+Copy the existing validator `keystore.json` files into this new folder. Alongside each keystore, add a file with a matching filename but a `.txt` extension containing the keystore's password, e.g. `keystore-0.json` and `keystore-0.txt`.
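+
+For example, a minimal sketch, assuming your existing keystore lives at a hypothetical path such as `~/existing_validator_keys` and you know its password:
+
+```shell
+# Copy an existing keystore into the split_keys folder
+cp ~/existing_validator_keys/keystore-0.json split_keys/keystore-0.json
+
+# Store its password in a matching .txt file
+echo -n 'your-keystore-password' > split_keys/keystore-0.txt
+```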
+
+At the end of this process, you should have a tree like this:
+```shell
+├── split_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ ├── keystore-1.txt
+│ ...
+│ ├── keystore-*.json
+│ ├── keystore-*.txt
+```
+
+## Step 2. Split the keys using the charon docker command
+
+Run the following docker command to split the keys:
+
+```shell
+CHARON_VERSION= # E.g. v0.19.1
+CLUSTER_NAME= # The name of the cluster you want to create.
+WITHDRAWAL_ADDRESS= # The address you want to use for withdrawals.
+FEE_RECIPIENT_ADDRESS= # The address you want to use for fee payments.
+NODES= # The number of nodes in the cluster.
+
+docker run --rm -v $(pwd):/opt/charon obolnetwork/charon:${CHARON_VERSION} create cluster --name="${CLUSTER_NAME}" --withdrawal-addresses="${WITHDRAWAL_ADDRESS}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDRESS}" --split-existing-keys --split-keys-dir=/opt/charon/split_keys --nodes ${NODES} --network goerli
+```
+
+The above command will create `validator_keys` along with `cluster-lock.json` in `./cluster` for each node.
+
+Command output:
+
+```shell
+***************** WARNING: Splitting keys **********************
+ Please make sure any existing validator has been shut down for
+ at least 2 finalised epochs before starting the charon cluster,
+ otherwise slashing could occur.
+****************************************************************
+
+Created charon cluster:
+ --split-existing-keys=true
+
+./cluster/
+├─ node[0-*]/ Directory for each node
+│ ├─ charon-enr-private-key Charon networking private key for node authentication
+│ ├─ cluster-lock.json Cluster lock defines the cluster lock file which is signed by all nodes
+│ ├─ validator_keys Validator keystores and password
+│ │ ├─ keystore-*.json Validator private share key for duty signing
+│ │ ├─ keystore-*.txt Keystore password files for keystore-*.json
+```
+
+These split keys can now be used to start a charon cluster.
diff --git a/docs/versioned_docs/version-v0.19.1/advanced/self-relay.md b/docs/versioned_docs/version-v0.19.1/advanced/self-relay.md
new file mode 100644
index 0000000000..49cc480243
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/advanced/self-relay.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 7
+description: Self-host a relay
+---
+
+# Self-Host a Relay
+
+If you are experiencing connectivity issues with the Obol hosted relays, or you want to improve your cluster's latency and decentralization, you can opt to host your own relay on a separate open and static internet port.
+
+```
+# Figure out your public IP
+curl v4.ident.me
+
+# Clone the repo and cd into it.
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+cd charon-distributed-validator-node
+
+# Replace 'replace.with.public.ip.or.hostname' in relay/docker-compose.yml with your public IPv4 or DNS hostname
+
+nano relay/docker-compose.yml
+
+docker compose -f relay/docker-compose.yml up
+```
+
+Test whether the relay is publicly accessible. This should return an ENR:
+`curl http://replace.with.public.ip.or.hostname:3640/enr`
+
+Ensure the ENR returned by the relay contains the correct public IP and port by decoding it with https://enr-viewer.com/.
+
+Configure **ALL** charon nodes in your cluster to use this relay:
+
+- Either by adding a flag: `--p2p-relays=http://replace.with.public.ip.or.hostname:3640/enr`
+- Or by setting the environment variable: `CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr`
+
+Note that a local `relay/.charon/charon-enr-private-key` file will be created next to `relay/docker-compose.yml` to ensure a persisted relay ENR across restarts.
+
+A list of publicly available relays that can be used is maintained [here](../int/faq/risks.md).
diff --git a/docs/versioned_docs/version-v0.19.1/cf/README.md b/docs/versioned_docs/version-v0.19.1/cf/README.md
new file mode 100644
index 0000000000..5e4947f1b9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/cf/README.md
@@ -0,0 +1,2 @@
+# cf
+
diff --git a/docs/versioned_docs/version-v0.19.1/cf/bug-report.md b/docs/versioned_docs/version-v0.19.1/cf/bug-report.md
new file mode 100644
index 0000000000..9a10b3b553
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/cf/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been reported.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualize the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behavior
+
+
+## Current Behavior
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+ ## What is Charon?
+
+
+
+ ## Charon explained
+ ```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.19.1/cf/feedback.md b/docs/versioned_docs/version-v0.19.1/cf/feedback.md
new file mode 100644
index 0000000000..76042e28aa
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/cf/feedback.md
@@ -0,0 +1,5 @@
+# Feedback
+
+If you have followed our quickstart guides, whether you succeeded or failed at running the distributed validator, we would like to hear your feedback on the process and where you encountered difficulties.
+- Please let us know by joining and posting on our [Discord](https://discord.gg/n6ebKsX46w).
+- Also, feel free to add issues to our [GitHub repos](https://github.com/ObolNetwork).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.1/charon/README.md b/docs/versioned_docs/version-v0.19.1/charon/README.md
new file mode 100644
index 0000000000..44b46f1797
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/charon/README.md
@@ -0,0 +1,2 @@
+# charon
+
diff --git a/docs/versioned_docs/version-v0.19.1/charon/charon-cli-reference.md b/docs/versioned_docs/version-v0.19.1/charon/charon-cli-reference.md
new file mode 100644
index 0000000000..57c0c6ecfe
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/charon/charon-cli-reference.md
@@ -0,0 +1,361 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+sidebar_position: 5
+---
+
+# CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.19.1`](https://github.com/ObolNetwork/charon/releases/tag/v0.19.1). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+The following are the top-level commands available to use.
+
+```markdown
+charon --help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ alpha Alpha subcommands provide early access to in-development features
+ combine Combines the private key shares of a distributed validator cluster into a set of standard validator private keys.
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Prints a new ENR for this node
+ help Help about any command
+ relay Start a libp2p relay server
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+## The `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+```
+
+### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+```
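+
+For example, a minimal sketch of generating an ENR with the docker image, following the same volume-mount pattern used elsewhere in these docs; the private key is written to `.charon/charon-enr-private-key` in your working directory and the ENR is printed to the console:
+
+```shell
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.1 create enr
+```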
+
+### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster-lock.json` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --cluster-dir string The target folder to create the cluster in. (default "./")
+ --definition-file string Optional path to a cluster definition file or an HTTP URL. This overrides all other configuration flags.
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for cluster
+ --insecure-keys Generates insecure keystore files. This should never be used. It is not supported on mainnet.
+ --keymanager-addresses strings Comma separated list of keymanager URLs to import validator key shares to. Note that multiple addresses are required, one for each node in the cluster, with node0's keyshares being imported to the first address, node1's keyshares to the second, and so on.
+ --keymanager-auth-tokens strings Authentication bearer tokens to interact with the keymanager URLs. Don't include the "Bearer" symbol, only include the api-token.
+ --name string The cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky.
+ --nodes int The number of charon nodes in the cluster. Minimum is 3.
+ --num-validators int The number of distributed validators needed in the cluster.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version of the custom test network (in hex).
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+ --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
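+
+For example, a minimal sketch of creating a small solo test cluster with the docker image; the addresses below are placeholders and should be replaced with your own:
+
+```shell
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.1 create cluster \
+  --name="local-test-cluster" \
+  --nodes=4 \
+  --num-validators=1 \
+  --network=goerli \
+  --withdrawal-addresses="0x000000000000000000000000000000000000dead" \
+  --fee-recipient-addresses="0x000000000000000000000000000000000000dead"
+```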
+
+### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file used as input to the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, frost (default "default")
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky. (default "mainnet")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings [REQUIRED] Comma-separated list of each operator's Charon ENR address.
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
+
+## The `dkg` subcommand
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster_definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit data for each new distributed validator. The command outputs the `cluster-lock.json` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file or an HTTP URL. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --keymanager-address string The keymanager URL to import validator keyshares.
+ --keymanager-auth-token string Authentication bearer token to interact with keymanager API. Don't include the "Bearer" symbol, only include the api-token.
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --shutdown-delay duration Graceful shutdown delay. (default 1s)
+```
+
+## The `run` subcommand
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster-lock.json` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs.
+ --builder-api Enables the builder api. Will only produce builder blocks. Builder API must also be enabled on the validator client. Beacon node must be connected to a builder-relay to access the builder network.
+ --debug-address string Listening address (ip and port) for the pprof and QBFT debug API.
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing.
+ --jaeger-service string Service name used for jaeger tracing. (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --manifest-file string The path to the cluster manifest file. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-manifest.pb")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus). (default "127.0.0.1:3620")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --private-key-file-lock Enables private key locking to prevent multiple instances using the same key.
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-beacon-mock-fuzz Configures simnet beaconmock to return fuzzed responses.
+ --simnet-slot-duration duration Configures slot duration in simnet beacon mock. (default 1s)
+ --simnet-validator-keys-dir string The directory containing the simnet validator key shares. (default ".charon/validator_keys")
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --synthetic-block-proposals Enables additional synthetic block proposal duties. Used for testing of rare duties.
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version in hex of the custom test network.
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API. (default "127.0.0.1:3600")
+```
+
+## The `combine` subcommand
+
+### Combine distributed validator key shares into a single Validator key
+
+The `combine` command combines many validator key shares into a single Ethereum validator key.
+
+```markdown
+charon combine --help
+Combines the private key shares from a threshold of operators in a distributed validator cluster into a set of validator private keys that can be imported into a standard Ethereum validator client.
+
+Warning: running the resulting private keys in a validator alongside the original distributed validator cluster *will* result in slashing.
+
+Usage:
+ charon combine [flags]
+
+Flags:
+ --cluster-dir string Parent directory containing a number of .charon subdirectories from the required threshold of nodes in the cluster. (default ".charon/cluster")
+ --force Overwrites private keys with the same name if present.
+ -h, --help Help for combine
+ --no-verify Disables cluster definition and lock file verification.
+ --output-dir string Directory to output the combined private keys to. (default "./validator_keys")
+```
+
+To run this command, one needs at least a threshold number of node operators' `.charon` directories, which need to be organized into a single folder:
+
+```shell
+tree ./cluster
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+That is, each operator's `.charon` directory must be placed in a parent directory and renamed to avoid conflicts.
+
+If, for example, the lock file defines 2 validators, each `validator_keys` directory must contain exactly 4 files: a JSON and a TXT file for each validator.
+
+Those files must be named with an increasing index associated with the validator in the lock file, starting from 0.
+
+The chosen folder name does not matter, as long as it's different from `.charon`.
+
+At the end of the process `combine` will create a new set of directories containing one validator key each, named after its public key:
+
+```shell
+charon combine --cluster-dir="./cluster" --output-dir="./combined"
+tree ./combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+By default, the `combine` command will refuse to overwrite any private key that is already present in the destination directory.
+
+To force the process, use the `--force` flag.
+
+:::warning
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+**Ensure your distributed validator cluster is completely shut down for at least two epochs before starting a replacement validator or you are likely to be slashed.**
+:::
+
+## Host a relay
+
+Relays run a libp2p [circuit relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) server that allows charon clusters to perform peer discovery and enables charon clients behind strict NAT gateways to be reached. If you want to self-host a relay for your cluster(s), the following command will start one.
+
+```markdown
+charon relay --help
+Starts a libp2p relay that charon nodes can use to bootstrap their p2p cluster
+
+Usage:
+ charon relay [flags]
+
+Flags:
+ --auto-p2pkey Automatically create a p2pkey (secp256k1 private key used for p2p authentication and ENR) if none found in data directory. (default true)
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for relay
+ --http-address string Listening address (ip and port) for the relay http server serving runtime ENR. (default "127.0.0.1:3640")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --monitoring-address string Listening address (ip and port) for the prometheus and pprof monitoring http server. (default "127.0.0.1:3620")
+ --p2p-advertise-private-addresses Enable advertising of libp2p auto-detected private addresses. This doesn't affect manually provided p2p-external-ip/hostname.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-max-connections int Libp2p maximum number of peers that can connect to this relay. (default 16384)
+ --p2p-max-reservations int Updates max circuit reservations per peer (each valid for 30min) (default 512)
+ --p2p-relay-loglevel string Libp2p circuit relay log level. E.g., debug, info, warn, error.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+```
+You can also consider adding [alternative public relays](../int/faq/risks.md) to your cluster by specifying a list of `p2p-relays` in [`charon run`](#run-the-charon-middleware).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.1/charon/cluster-configuration.md b/docs/versioned_docs/version-v0.19.1/charon/cluster-configuration.md
new file mode 100644
index 0000000000..5160727369
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/charon/cluster-configuration.md
@@ -0,0 +1,161 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+sidebar_position: 3
+---
+
+# Cluster configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client or cluster.
+
+A charon cluster is configured in two steps:
+
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+In the case of a solo operator running a cluster, the [`charon create cluster`](./charon-cli-reference.md#create-a-full-cluster-locally) command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
+## Cluster Definition File
+
+The `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+### Using the CLI
+
+The [`charon create dkg`](./charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) command is used to create the `cluster-definition.json` file which is used as input to `charon dkg`.
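+
+For example, a minimal sketch of creating a DKG configuration with the docker image; the ENRs and addresses below are placeholders and should be replaced with your operators' real values:
+
+```shell
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.1 create dkg \
+  --name="my-first-dkg" \
+  --network=goerli \
+  --num-validators=1 \
+  --withdrawal-addresses="0x000000000000000000000000000000000000dead" \
+  --fee-recipient-addresses="0x000000000000000000000000000000000000dead" \
+  --operator-enrs="enr:-abc...,enr:-def...,enr:-ghi...,enr:-jkl..."
+```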
+
+The schema of the `cluster-definition.json` is defined as:
+
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "creator": {
+ "address": "0x123..abfc", //ETH1 address of the creator
+ "config_signature": "0x123654...abcedf" // EIP712 Signature of config_hash using creator privkey
+ },
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "config_signature": "0x123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "0x123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.2.0", // Schema version
+ "timestamp": "2022-01-01T12:00:00+00:00", // Creation timestamp
+ "num_validators": 2, // Number of distributed validators to be created in cluster-lock.json
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "validators": [
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ },
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ }
+ ],
+ "dkg_algorithm": "foo_dkg_v1", // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "0xabcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "0xabcdef...abcedef" // Final hash of all fields
+}
+```
+
+### Using the DV Launchpad
+
+- A `leader/creator`, that wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster"
+- The `leader/creator` uses the user interface to configure all of the important details about the cluster including:
+ - The `Withdrawal Address` for the created validators
+ - The `Fee Recipient Address` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like DKG algorithm to use) are serialized and merklized to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the `leader/creator` is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralized backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralization of the launchpad.)
+
+## Cluster Lock File
+
+The `cluster-lock.json` has the following schema:
+
+```json
+{
+ "cluster_definition": {...}, // Cluster definiition json, identical schema to above,
+ "distributed_validators": [ // Length equal to cluster_definition.num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "abc...fed", "cfd...bfe"], // Length equal to cluster_definition.operators
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
+
+## Cluster Size and Resilience
+
+The cluster size (the number of nodes/operators in the cluster) determines the resilience of the cluster, i.e. its ability to remain operational under diverse failure scenarios.
+Larger clusters can tolerate more faulty nodes.
+However, increased cluster size implies higher operational costs and potentially higher network latency, which may negatively affect performance.
+
+Optimal cluster size is therefore a trade-off between resilience (larger is better) on the one hand, and cost-efficiency and performance (smaller is better) on the other.
+
+Cluster resilience can be broadly classified into two categories:
+ - **[Byzantine Fault Tolerance (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault)** - the ability to tolerate nodes that are actively trying to disrupt the cluster.
+ - **[Crash Fault Tolerance (CFT)](https://en.wikipedia.org/wiki/Fault_tolerance)** - the ability to tolerate nodes that have crashed or are otherwise unavailable.
+
+Different cluster sizes tolerate different counts of byzantine vs crash nodes.
+In practice, hardware and software crash relatively frequently, while byzantine behaviour is relatively uncommon.
+However, Byzantine Fault Tolerance is crucial for trust minimised systems like distributed validators.
+Thus, cluster size can be chosen to optimise for either BFT or CFT.
+
+The table below lists different cluster sizes and their characteristics:
+ - `Cluster Size` - the number of nodes in the cluster.
+ - `Threshold` - the minimum number of nodes that must collaborate to reach consensus quorum and to create signatures.
+ - `BFT #` - the maximum number of byzantine nodes that can be tolerated.
+ - `CFT #` - the maximum number of crashed nodes that can be tolerated.
+
+| Cluster Size | Threshold | BFT # | CFT # | Note |
+|--------------|-----------|-------|-------|------------------------------------|
+| 1 | 1 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 2 | 2 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 3 | 2 | 0 | 1 | ⚠️ Warning: CFT but not BFT! |
+| 4 | 3 | 1 | 1 | ✅ CFT and BFT optimal for 1 faulty |
+| 5 | 4 | 1 | 1 | |
+| 6 | 4 | 1 | 2 | ✅ CFT optimal for 2 crashed |
+| 7 | 5 | 2 | 2 | ✅ BFT optimal for 2 byzantine |
+| 8 | 6 | 2 | 2 | |
+| 9 | 6 | 2 | 3 | ✅ CFT optimal for 3 crashed |
+| 10 | 7 | 3 | 3 | ✅ BFT optimal for 3 byzantine |
+| 11 | 8 | 3 | 3 | |
+| 12 | 8 | 3 | 4 | ✅ CFT optimal for 4 crashed |
+| 13 | 9 | 4 | 4 | ✅ BFT optimal for 4 byzantine |
+| 14 | 10 | 4 | 4 | |
+| 15 | 10 | 4 | 5 | ✅ CFT optimal for 5 crashed |
+| 16 | 11 | 5 | 5 | ✅ BFT optimal for 5 byzantine |
+| 17 | 12 | 5 | 5 | |
+| 18 | 12 | 5 | 6 | ✅ CFT optimal for 6 crashed |
+| 19 | 13 | 6 | 6 | ✅ BFT optimal for 6 byzantine |
+| 20 | 14 | 6 | 6 | |
+| 21 | 14 | 6 | 7 | ✅ CFT optimal for 7 crashed |
+| 22 | 15 | 7 | 7 | ✅ BFT optimal for 7 byzantine |
+
+The table above is determined by the QBFT consensus algorithm with the
+following formulas from [this](https://arxiv.org/pdf/1909.10194.pdf) paper:
+
+```
+n = cluster size
+
+Threshold: min number of honest nodes required to reach quorum given size n
+Quorum(n) = ceiling(2n/3)
+
+BFT #: max number of faulty (byzantine) nodes given size n
+f(n) = floor((n-1)/3)
+
+CFT #: max number of unavailable (crashed) nodes given size n
+crashed(n) = n - Quorum(n)
+```
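+
+For example, with `n = 4`: `Quorum(4) = ceiling(8/3) = 3` (the threshold), `f(4) = floor(3/3) = 1` byzantine node tolerated, and `crashed(4) = 4 - 3 = 1` crashed node tolerated, matching the cluster size 4 row in the table above.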
diff --git a/docs/versioned_docs/version-v0.19.1/charon/dkg.md b/docs/versioned_docs/version-v0.19.1/charon/dkg.md
new file mode 100644
index 0000000000..bcea7c64b0
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/charon/dkg.md
@@ -0,0 +1,73 @@
+---
+description: Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+sidebar_position: 2
+---
+
+# Distributed Key Generation
+
+## Overview
+
+A [**distributed validator key**](../int/key-concepts.md#distributed-validator-key) is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+To make a distributed validator with no fault tolerance (i.e. all nodes need to be online to sign every message), each key share could be chosen by the operators independently, thanks to the BLS signature scheme used by Proof of Stake Ethereum. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together (independently chosen key shares will not, in general, lie on the same low-degree polynomial required for threshold reconstruction). Doing this securely, with no single party trusted to distribute the keys, requires what is known as a [**distributed key generation ceremony**](../int/key-concepts.md#distributed-validator-key-generation-ceremony).
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](../charon/cluster-configuration.md).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign a message with this address to authorize their charon client to take part in the DKG ceremony.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p/tree/master/p2p/security/noise). These keys need to be created (and backed up) by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This cluster definition specifies the intended cluster configuration before keys have been created in a distributed key generation ceremony. The `cluster-definition.json` file can be created with the help of the [Distributed Validator Launchpad](./cluster-configuration.md#using-the-dv-launchpad) or via the [CLI](./cluster-configuration.md#using-the-cli).
+
+## Carrying out the DKG ceremony
+
+Once all participants have signed the cluster definition, they can load the `cluster-definition` file into their charon client, and the client will attempt to complete the DKG.
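+
+As an illustrative sketch (the exact invocation depends on your setup and charon version; the quickstart guides wrap this in docker compose, and `charon dkg --help` lists the flags your version supports), running the ceremony directly with the charon binary might look like:
+
+```
+charon dkg --definition-file=".charon/cluster-definition.json"
+```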
+
+Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to deployed relays to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which will update the IP address, increment the ENR's nonce, and re-sign it with the client's private key. If a charon client sees an ENR with a higher nonce, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required; charon does the work, outputs the following files to each machine, and then exits.
+
+## Backing up the ceremony artifacts
+
+At the end of a DKG ceremony, each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participants old keys out of a distributed validator in favor of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However for now, without a backup, the safest thing to do would be to exit the validator.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+ - This can be checked on chain as it does not require a pairing operation
+  - This can give confidence that a BLS pubkey represents a Distributed Validator, but it does not say anything about the custody of the keys (e.g. was the ceremony Sybil attacked, did the operators collude to reconstitute the group private key, etc.).
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../charon/cluster-configuration.md).
diff --git a/docs/versioned_docs/version-v0.19.1/charon/intro.md b/docs/versioned_docs/version-v0.19.1/charon/intro.md
new file mode 100644
index 0000000000..e53940651d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/charon/intro.md
@@ -0,0 +1,69 @@
+---
+sidebar_position: 1
+description: Charon - The Distributed Validator Client
+---
+
+# Introduction
+
+This section introduces and outlines the Charon _\[kharon]_ middleware, Obol's implementation of DVT. Please see the [key concepts](../int/key-concepts.md) section as background and context.
+
+## What is Charon?
+
+Charon is a GoLang-based, HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator together. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+
+
+## Charon Architecture
+
+Charon is an Ethereum proof of stake distributed validator (DV) client. Like any validator client, its main purpose is to perform validation duties for the Beacon Chain, primarily attestations and block proposals. The beacon client handles a lot of the heavy lifting, leaving the validator client to focus on fetching duty data, signing that data, and submitting it back to the beacon client.
+
+Charon is designed as a generic event-driven workflow with different components coordinating to perform validation duties. All duties follow the same flow, the only difference being the signed data. The workflow can be divided into phases consisting of one or more components:
+
+
+
+### Determine **when** duties need to be performed
+
+The beacon chain is divided into [slots](https://eth2book.info/bellatrix/part3/config/types/#slot) and [epochs](https://eth2book.info/bellatrix/part3/config/types/#epoch): deterministic, fixed-size chunks of time. The first step is to determine when (i.e. in which slot/epoch) duties need to be performed. This is done by the `scheduler` component. It queries the beacon node to detect which validators defined in the cluster lock are active, and what duties they need to perform for the upcoming epoch and slots. When such a slot starts, the `scheduler` emits an event indicating which validator needs to perform what duty.
+
+### Fetch and come to consensus on **what** data to sign
+
+A DV cluster consists of multiple operators each provided with one of the M-of-N threshold BLS private key shares per validator. The key shares are imported into the validator clients which produce partial signatures. Charon threshold aggregates these partial signatures before broadcasting them to the Beacon Chain. _But to threshold aggregate partial signatures, each validator must sign the same data._ The cluster must therefore coordinate and come to a consensus on what data to sign.
+
+`Fetcher` fetches the unsigned duty data from the beacon node upon receiving an event from `Scheduler`.
+For attestations this is the unsigned attestation; for block proposals, it is the unsigned block.
+
+The `Consensus` component listens to events from Fetcher and starts a [QBFT](https://docs.goquorum.consensys.net/configure-and-manage/configure/consensus-protocols/qbft/) consensus game with the other Charon nodes in the cluster for that specific duty and slot. When consensus is reached, the resulting unsigned duty data is stored in the `DutyDB`.
+
+### **Wait** for the VC to sign
+
+Charon is a **middleware** distributed validator client. That means Charon doesn’t have access to the validator private key shares and cannot sign anything on demand. Instead, operators import the key shares into industry-standard validator clients (VC) that are configured to connect to their local Charon client instead of their local Beacon node directly.
+
+Charon, therefore, serves the [Ethereum Beacon Node API](https://ethereum.github.io/beacon-APIs/#/) from the `ValidatorAPI` component and intercepts some endpoints while proxying other endpoints directly to the upstream Beacon node.
+
+The VC queries the `ValidatorAPI` for unsigned data which is retrieved from the `DutyDB`. It then signs it and submits it back to the `ValidatorAPI` which stores it in the `PartialSignatureDB`.
+
+### **Share** partial signatures
+
+The `PartialSignatureDB` stores the partially signed data submitted by the local Charon client’s VC. But it also stores all the partial signatures submitted by the VCs of other peers in the cluster. This is achieved by the `PartialSignatureExchange` component that exchanges partial signatures between all peers in the cluster. All charon clients, therefore, store all partial signatures the cluster generates.
+
+### **Threshold Aggregate** partial signatures
+
+The `SignatureAggregator` is invoked as soon as sufficient (any M of N) partial signatures are stored in the `PartialSignatureDB`. It performs BLS threshold aggregation of the partial signatures resulting in a final signature that is valid for the beacon chain.
+
+### **Broadcast** final signature
+
+Finally, the `Broadcaster` component broadcasts the final threshold aggregated signature to the Beacon client, thereby completing the duty.
+
+### Ports
+
+The following is an outline of the services that can be exposed by charon.
+
+* **:3600** - The validator REST API. This port serves the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/); it is the port validator clients should talk to instead of their standard consensus client REST API port. Charon subsequently proxies these requests to the upstream consensus client specified by `--beacon-node-endpoints`.
+* **:3610** - Charon P2P port. This is the port that charon clients use to communicate with one another via TCP. This endpoint should be port-forwarded on your router and exposed publicly, preferably on a static IP address. This IP address should then be set on the charon run command with `--p2p-external-ip` or `CHARON_P2P_EXTERNAL_IP`.
+* **:3620** - Monitoring port. This port hosts a webserver that serves prometheus metrics on `/metrics`, a readiness endpoint on `/readyz` and a liveness endpoint on `/livez`, and a pprof server on `/debug/pprof`. This port should not be exposed publicly.
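+
+As a rough sketch tying these ports to their flags (values and endpoints are illustrative; see the CLI reference for the full flag list), a `charon run` invocation might look like:
+
+```
+charon run \
+  --beacon-node-endpoints="http://my-beacon-node:5052" \
+  --validator-api-address="0.0.0.0:3600" \
+  --p2p-tcp-addresses="0.0.0.0:3610" \
+  --p2p-external-ip="203.0.113.10"
+```
+
+You could then check liveness locally with e.g. `curl http://localhost:3620/livez`, assuming the monitoring port is reachable from the same host.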
+
+## Getting started
+
+For more information on running charon, take a look at our [Quickstart Guides](../start/quickstart_overview.md).
diff --git a/docs/versioned_docs/version-v0.19.1/charon/networking.md b/docs/versioned_docs/version-v0.19.1/charon/networking.md
new file mode 100644
index 0000000000..076981a5c4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/charon/networking.md
@@ -0,0 +1,84 @@
+---
+sidebar_position: 4
+description: Networking
+---
+
+# Charon networking
+
+## Overview
+
+This document describes Charon's networking model which can be divided into two parts: the [_internal validator stack_](networking.md#internal-validator-stack) and the [_external p2p network_](networking.md#external-p2p-network).
+
+## Internal Validator Stack
+
+Charon is a middleware DVT client: it connects to an upstream beacon node, and a downstream validator client connects to it. Each operator should run the whole validator stack (all 4 client software types), either on the same machine or on different machines. The networking between these nodes should be private and not exposed to the public internet.
+
+Related Charon configuration flags:
+
+* `--beacon-node-endpoints`: Connects Charon to one or more beacon nodes.
+* `--validator-api-address`: Address for Charon to listen on and serve requests from the validator client.
+
+## External P2P Network
+
+The Charon clients in a DV cluster are connected to each other via a small p2p network consisting only of the clients in the cluster. Peer IP addresses are discovered via an external "relay" server. The p2p connections are made over the public internet, so the charon p2p port must be publicly accessible. Charon leverages the popular [libp2p](https://libp2p.io/) protocol.
+
+Related [Charon configuration flags](charon-cli-reference.md):
+
+* `--p2p-tcp-addresses`: Addresses for Charon to listen on and serve p2p requests.
+* `--p2p-relays`: Connect charon to one or more relay servers.
+* `--private-key-file`: Private key identifying the charon client.
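+
+Put together, a minimal sketch of the p2p-related configuration might look like this (paths and the relay URL are illustrative; check `charon run --help` for your version):
+
+```
+charon run \
+  --private-key-file=".charon/charon-enr-private-key" \
+  --p2p-tcp-addresses="0.0.0.0:3610" \
+  --p2p-relays="https://0.relay.obol.tech"
+```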
+
+### LibP2P Authentication and Security
+
+Each charon client has a secp256k1 private key. The associated public key is encoded into the [cluster lock file](cluster-configuration.md#Cluster-Lock-File) to identify the nodes in the cluster. For ease of use and to align with the Ethereum ecosystem, Charon encodes these public keys in the [ENR format](https://eips.ethereum.org/EIPS/eip-778), not in [libp2p’s Peer ID format](https://docs.libp2p.io/concepts/fundamentals/peers/).
+
+:::warning
+
+Each Charon node's secp256k1 private key is critical for authentication and must be kept secure to prevent cluster compromise.
+
+Do not use the same key across multiple clusters, as this can lead to security issues.
+
+For more on p2p security, refer to [libp2p's article](https://docs.libp2p.io/concepts/security/security-considerations).
+
+:::
+
+Charon currently only supports libp2p tcp connections with [noise](https://noiseprotocol.org/) security and only accepts incoming libp2p connections from peers defined in the cluster lock.
+
+### LibP2P Relays and Peer Discovery
+
+Relays are simple libp2p servers that are publicly accessible supporting the [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) protocol. Circuit-relay is a libp2p transport protocol that routes traffic between two peers over a third-party “relay” peer.
+
+Obol hosts a publicly accessible relay at https://0.relay.obol.tech and will work with other organisations in the community to host alternatives. Anyone can host their own relay server for their DV cluster.
+
+Each charon node knows which peers are in the cluster from the ENRs in the cluster lock file, but their IP addresses are unknown. By connecting to the same relay, nodes establish “relay connections” to each other. Once connected via relay they exchange their known public addresses via libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol. The relay connection is then upgraded to a direct connection. If a node’s public IP changes, nodes once again connect via relay, exchange the new IP, and then connect directly once again.
+
+Note that in order for two peers to discover each other, they must connect to the same relay. Cluster operators should therefore coordinate which relays to use.
+
+Libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol attempts to automatically detect the public IP address of a charon client without the need to explicitly configure it. If this however fails, the following two configuration flags can be used to explicitly set the publicly advertised address:
+
+* `--p2p-external-ip`: Explicitly sets the external IP address.
+* `--p2p-external-hostname`: Explicitly sets the external DNS host name.
+
+:::warning
+
+If a pair of charon clients is not publicly accessible, e.g. due to being behind a NAT, they will not be able to upgrade their relay connection to a direct connection. Even though this is supported, it isn't recommended: relay connections introduce additional latency and reduced throughput, which results in decreased validator effectiveness and possible missed block proposals and attestations.
+
+:::
+
+Libp2p’s circuit-relay connections are end-to-end encrypted. Even though relay servers accept connections from nodes in multiple different clusters, they merely route opaque connections. And since charon only accepts incoming connections from other peers in its cluster, using a relay doesn’t allow connections between clusters.
+
+Only the following three libp2p protocols are established between a charon node and a relay itself:
+
+* [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/): To establish relay e2e encrypted connections between two peers in a cluster.
+* [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify): Auto-detection of public IP addresses to share with other peers in the cluster.
+* [peerinfo](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfo.go): Exchanges basic application [metadata](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfopb/v1/peerinfo.proto) for improved operational metrics and observability.
+
+All other charon protocols are only established between nodes in the same cluster.
+
+### Scalable Relay Clusters
+
+In order for a charon client to connect to a relay, it needs the relay's [multiaddr](https://docs.libp2p.io/concepts/fundamentals/addressing/) (containing its public key and IP address). But a single multiaddr can only point to a single relay server which can easily be overloaded if too many clusters connect to it. Charon therefore supports resolving a relay’s multiaddr via HTTP GET request. Since charon also includes the unique `cluster-hash` header in this request, the relay provider can use [consistent header-based load-balancing](https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_steering_header-based_routing) to map clusters to one of many relays using a single HTTP address.
+
+The relay supports serving its runtime public multiaddrs via its `--http-address` flag.
+
+E.g., https://0.relay.obol.tech is actually a load balancer that routes HTTP requests to one of many relays based on the `cluster-hash` header, returning the target relay’s multiaddr, which the charon client then uses to connect to that relay.
+
+The charon `--p2p-relays` flag therefore supports both multiaddrs and HTTP URLs.
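+
+As a quick illustrative check (the exact response format may vary by relay version), you can see what a relay URL resolves to with a plain HTTP GET; charon performs the equivalent lookup automatically, including the `cluster-hash` header:
+
+```
+curl https://0.relay.obol.tech
+```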
diff --git a/docs/versioned_docs/version-v0.19.1/dvl/README.md b/docs/versioned_docs/version-v0.19.1/dvl/README.md
new file mode 100644
index 0000000000..1b694a8473
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/dvl/README.md
@@ -0,0 +1,2 @@
+# dvl
+
diff --git a/docs/versioned_docs/version-v0.19.1/dvl/intro.md b/docs/versioned_docs/version-v0.19.1/dvl/intro.md
new file mode 100644
index 0000000000..99959aff23
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/dvl/intro.md
@@ -0,0 +1,27 @@
+---
+sidebar_position: 6
+description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# DV Launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~**Eth2**~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: **The DV Launchpad**.
+
+## Getting started
+
+For more information on running charon in a UI friendly way through the DV Launchpad, take a look at our [Quickstart Guides](../start/quickstart_overview.md).
+
+## DV Launchpad Links
+
+| Ethereum Network | Launchpad |
+| ---------------- | ----------------------------------- |
+| Mainnet | https://beta.launchpad.obol.tech |
+| Holesky | https://holesky.launchpad.obol.tech |
+| Sepolia | https://sepolia.launchpad.obol.tech |
+| Goerli | https://goerli.launchpad.obol.tech |
diff --git a/docs/versioned_docs/version-v0.19.1/fr/README.md b/docs/versioned_docs/version-v0.19.1/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.19.1/fr/ethereum_and_dvt.md b/docs/versioned_docs/version-v0.19.1/fr/ethereum_and_dvt.md
new file mode 100644
index 0000000000..8e7857696c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/fr/ethereum_and_dvt.md
@@ -0,0 +1,54 @@
+---
+sidebar_position: 4
+description: Ethereum and its relationship with DVT
+---
+
+# Ethereum and its Relationship with DVT
+
+Our goal for this page is to equip you with the foundational knowledge needed to actively contribute to the advancement of Obol while also directing you to valuable Ethereum and DVT related resources. Additionally, we will shed light on the intersection of DVT and Ethereum, offering curated articles and blog posts to enhance your understanding.
+
+## **Understanding Ethereum**
+
+To grasp the current landscape of Ethereum's PoS development, we encourage you to delve into the wealth of information available on the [Official Ethereum Website.](https://ethereum.org/en/learn/) The Ethereum website serves as a hub for all things Ethereum, catering to individuals at various levels of expertise, whether you're just starting your journey or are an Ethereum veteran. Here, you'll find a trove of resources that cater to diverse learning needs and preferences, ensuring that there's something valuable for everyone in the Ethereum community to discover.
+
+## **DVT & Ethereum**
+
+### Distributed Validator Technology
+
+> "Distributed validator technology (DVT) is an approach to validator security that spreads out key management and signing responsibilities across multiple parties, to reduce single points of failure, and increase validator resiliency.
+>
+> It does this by splitting the private key used to secure a validator across many computers organized into a "cluster". The benefit of this is that it makes it very difficult for attackers to gain access to the key, because it is not stored in full on any single machine. It also allows for some nodes to go offline, as the necessary signing can be done by a subset of the machines in each cluster. This reduces single points of failure from the network and makes the whole validator set more robust." _(ethereum.org, 2023)_
+
+#### Learn More About Distributed Validator technology from [The Official Ethereum Website](https://ethereum.org/en/staking/dvt/)
+
+### How Does DVT Improve Staking on Ethereum?
+
+If you haven’t yet heard, Distributed Validator Technology, or DVT, is the next big thing on The Merge section of the Ethereum roadmap. Learn more about this in our blog post: [What is DVT and How Does It Improve Staking on Ethereum?](https://blog.obol.tech/what-is-dvt-and-how-does-it-improve-staking-on-ethereum/)
+
+
+
+_**Vitalik's Ethereum Roadmap**_
+
+### Deep Dive Into DVT and Charon’s Architecture
+
+Minimizing correlation is vital when designing DVT as Ethereum Proof of Stake is designed to heavily punish correlated behavior. In designing Obol, we’ve made careful choices to create a trust-minimized and non-correlated architecture.
+
+[**Read more about Designing Non-Correlation Here**](https://blog.obol.tech/deep-dive-into-dvt-and-charons-architecture/)
+
+### Performance Testing Distributed Validators
+
+In our mission to help make Ethereum consensus more resilient and decentralised with distributed validators (DVs), it’s critical that we do not compromise on the performance and effectiveness of validators. Earlier this year, we worked with MigaLabs, the blockchain ecosystem observatory located in Barcelona, to perform an independent test to validate the performance of Obol DVs under different configurations and conditions. After taking a few weeks to fully analyse the results together with MigaLabs, we’re happy to share the results of these performance tests.
+
+[**Read More About The Performance Test Results Here**](https://blog.obol.tech/performance-testing-distributed-validators/)
+
+
+
+### More Resources
+
+* [Sorting out Distributed Validator Technology](https://medium.com/nethermind-eth/sorting-out-distributed-validator-technology-a6f8ca1bbce3)
+* [A tour of Verifiable Secret Sharing schemes and Distributed Key Generation protocols](https://medium.com/nethermind-eth/a-tour-of-verifiable-secret-sharing-schemes-and-distributed-key-generation-protocols-3c814e0d47e1)
+* [Threshold Signature Schemes](https://medium.com/nethermind-eth/threshold-signature-schemes-36f40bc42aca)
+
+#### References
+
+* ethereum.org. (2023). Distributed Validator Technology. \[online] Available at: https://ethereum.org/en/staking/dvt/ \[Accessed 25 Sep. 2023].
diff --git a/docs/versioned_docs/version-v0.19.1/fr/testnet.md b/docs/versioned_docs/version-v0.19.1/fr/testnet.md
new file mode 100644
index 0000000000..d3c0ea558f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/fr/testnet.md
@@ -0,0 +1,122 @@
+---
+sidebar_position: 5
+description: Community testing efforts
+---
+
+# Community Testing
+
+:::tip
+
+This page looks at the community testing efforts organised by Obol to test Distributed Validators at scale. If you are looking for guides to run a Distributed Validator on testnet you can do so [here](../start/quickstart_overview.md).
+
+:::
+
+Over the past few years, Obol Labs has coordinated and hosted a number of progressively larger testnets to help harden the Charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the testnet roadmap, the features that were to be completed by each testnet, and their completion date and duration.
+
+## Testnets
+
+- [x] [Dev Net 1](#devnet-1)
+- [x] [Dev Net 2](#devnet-2)
+- [x] [Athena Public Testnet 1](#athena-public-testnet-1)
+- [x] [Bia Public Testnet 2](#bia-public-testnet-2)
+
+## Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using `docker compose` to spin up 4 Charon clients and 4 different validator clients on a single machine, with a remote consensus client. The keys were created locally in Charon and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+- A single user completes a first tutorial alone, using `docker compose` to spin up 4 Charon clients on a single machine, with a remote consensus client. The keys are created locally in Charon and activated with the existing launchpad.
+- Prove that the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works.
+- Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to Charon running across a network.
+
+## Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows **together** for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+
+This devnet was the first time `charon dkg` was tested with users. A core focus of this devnet was to collect network performance data.
+
+This was also the first time Charon was run in variable, non-virtual networks (i.e. the real internet).
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+- Groups of 4 testers complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients, each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+- Operators avoid exposing Charon to the public internet on a static IP address through the use of Obol-hosted relay nodes.
+- Users test `charon dkg`. The launchpad is not used, and this dkg is triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+- Effective collection of network performance data, to enable gathering even higher signal performance data at scale during public testnets.
+- Block proposals are in place.
+
+## Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 was to get distributed validators into the hands of the wider Obol Community for the first time. The core focus of this testnet was the onboarding experience.
+
+The core output from this testnet was a significant number of public clusters running and public feedback collected.
+
+This was an unincentivized testnet and formed the basis for us to figure out a Sybil resistance mechanism.
+
+**Participants:** Obol Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Completed date:** October 2022
+
+**Duration:** 2 weeks cluster setup, 8 weeks operation
+
+**Goals:**
+
+- Get distributed validators into the hands of the Obol Early Community for the first time.
+- Create the first public onboarding experience and gather feedback. This is the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) as possible.
+- Make deploying Ethereum validator nodes accessible using the CLI.
+- Generate a backlog of bugs, feature requests, platform requests and integration requests.
+
+## Bia Public Testnet 2
+
+This second public testnet intends to take the learning from Athena and scale the network by engaging both the wider at-home validator community and professional operators. This is the first time users are setting up DVs using the DV launchpad.
+
+This testnet is also important for learning the conditions Charon will be subjected to in production. A core output of this testnet is a large number of autonomous public DV clusters running, and an Obol community strengthened with technical ambassadors.
+
+**Participants:** Obol Community, Ethereum staking community
+
+**State:** MVP
+
+**Network:** Görli
+
+**Completed date:** March 2023
+
+**Duration:** 2 weeks cluster setup, 4-8 weeks operation
+
+**Goals:**
+
+- Engage the wider Solo and Professional Ethereum Staking Community.
+- Get integration feedback.
+- Build confidence in Charon after running DVs on an Ethereum testnet.
+- Learn about the conditions Charon will be subjected to in production.
+- Distributed Validator returns are competitive versus single validator clients.
+- Make deploying Ethereum validator nodes accessible using the DV Launchpad.
+- Build comprehensive guides for various profiles to spin up DVs with minimal supervision from the core team.
diff --git a/docs/versioned_docs/version-v0.19.1/int/README.md b/docs/versioned_docs/version-v0.19.1/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.19.1/int/faq/README.md b/docs/versioned_docs/version-v0.19.1/int/faq/README.md
new file mode 100644
index 0000000000..456ad9139a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/int/faq/README.md
@@ -0,0 +1,2 @@
+# faq
+
diff --git a/docs/versioned_docs/version-v0.19.1/int/faq/dkg_failure.md b/docs/versioned_docs/version-v0.19.1/int/faq/dkg_failure.md
new file mode 100644
index 0000000000..33ffe9c496
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/int/faq/dkg_failure.md
@@ -0,0 +1,82 @@
+---
+sidebar_position: 4
+description: Handling DKG failure
+---
+
+# Handling DKG failure
+
+While the DKG process has been tested and validated against many different configuration instances, it can still encounter issues which might result in failure.
+
+Our DKG is designed in a way that doesn't allow for inconsistent results: either it finishes correctly for every peer, or it fails.
+
+This is a **safety** feature: you don't want to deposit an Ethereum distributed validator that not every operator is able to participate in, or that cannot even reach its signing threshold.
+
+The most common source of issues lies in the network stack: if any peer's internet connection glitches substantially, the DKG will fail.
+
+Charon's DKG doesn't allow peer reconnection once the process is started, but it does allow for re-connections before that.
+
+When you see the following message:
+
+```
+14:08:34.505 INFO dkg Waiting to connect to all peers...
+```
+
+this means your Charon instance is waiting for all the other cluster peers to start their DKG process: at this stage, peers can disconnect and reconnect at will, and the DKG process will still continue.
+
+A log line will confirm the connection of a new peer:
+
+```
+14:08:34.523 INFO dkg Connected to peer 1 of 3 {"peer": "fantastic-adult"}
+14:08:34.529 INFO dkg Connected to peer 2 of 3 {"peer": "crazy-bunch"}
+14:08:34.673 INFO dkg Connected to peer 3 of 3 {"peer": "considerate-park"}
+```
+
+As soon as all the peers are connected, this message will be shown:
+
+```
+14:08:34.924 INFO dkg All peers connected, starting DKG ceremony
+```
+
+Past this stage **no disconnections are allowed**, and _all peers must leave their terminals open_ in order for the DKG process to complete: this is a synchronous phase, and every peer is required in order to reach completion.
+
+If for some reason the DKG process fails, you would see error logs that resemble this:
+
+```
+14:28:46.691 ERRO cmd Fatal error: sync step: p2p connection failed, please retry DKG: context canceled
+```
+
+As the error message suggests, the DKG process needs to be retried.
+
+## Cleaning up the `.charon` directory
+
+One cannot simply retry the DKG process: Charon refuses to overwrite any runtime file in order to avoid inconsistencies and private key loss.
+
+When attempting to re-run a DKG with an unclean data directory -- which is either `.charon` or what was specified with the `--data-dir` CLI parameter -- this is the error that will be shown:
+
+```
+14:44:13.448 ERRO cmd Fatal error: data directory not clean, cannot continue {"disallowed_entity": "cluster-lock.json", "data-dir": "/compose/node0"}
+```
+
+The `disallowed_entity` field lists all the files that Charon refuses to overwrite, while `data-dir` is the full path of the runtime directory the DKG process is using.
+
+In order to retry the DKG process one must delete the following entities, if present:
+
+ - `validator_keys` directory
+ - `cluster-lock.json` file
+ - `deposit-data.json` file
+
+:::warning
+The `charon-enr-private-key` file **must be preserved**; failure to do so requires the DKG process to be restarted from the beginning by creating a new cluster definition.
+:::
+
+If you're doing a DKG with a custom cluster definition - for example, one created with `charon create dkg` rather than the Obol Launchpad - you can re-use the same file.
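+
+As a minimal sketch, assuming the default `.charon` data directory (adjust the paths if you used `--data-dir`, and double-check each path before deleting anything):
+
+```
+# Run from the directory that contains your .charon folder.
+rm -r .charon/validator_keys
+rm -f .charon/cluster-lock.json .charon/deposit-data.json
+# Do NOT delete .charon/charon-enr-private-key.
+```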
+
+Once this process has been completed, the cluster operators can retry a DKG.
+
+## Further debugging
+
+If for some reason the DKG process fails again, node operators are advised to reach out to the Obol team by opening an [issue](https://github.com/ObolNetwork/charon/issues), detailing what troubleshooting steps were taken and providing **debug logs**.
+
+To enable debug logs first clean up the Charon data directory as explained in [the previous paragraph](#cleaning-up-the-charon-directory), then run your DKG command by appending `--log-level=debug` at the end.
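+
+For example, assuming a definition file in the default location, an illustrative retry with debug logging enabled would be:
+
+```
+charon dkg --definition-file=".charon/cluster-definition.json" --log-level=debug
+```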
+
+In order for the Obol team to debug your issue as quickly and precisely as possible, please provide full logs in textual form, not as screenshots or photos of a display.
+
+Providing complete logs is particularly important, since it allows the team to reconstruct precisely what happened.
diff --git a/docs/versioned_docs/version-v0.19.1/int/faq/general.md b/docs/versioned_docs/version-v0.19.1/int/faq/general.md
new file mode 100644
index 0000000000..fef689e31d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/int/faq/general.md
@@ -0,0 +1,108 @@
+---
+sidebar_position: 1
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+## General
+
+### Does Obol have a token?
+
+No. Distributed validators use only Ether.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/n6ebKsX46w) too.
+
+### Where does the name Charon come from?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) \[kharon] is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
+
+### What are the hardware requirements for running a Charon node?
+
+Charon alone uses negligible disk space of not more than a few MBs. However, if you are running your consensus client and execution client on the same server as charon, then you will typically need the same hardware as running a full Ethereum node:
+
+At minimum:
+
+* A CPU with 2+ physical cores (or 4 vCPUs)
+* 8GB RAM
+* 1.5TB+ free SSD disk space (for mainnet)
+* 10 Mb/s internet bandwidth
+
+Recommended specifications:
+
+* A CPU with 4+ physical cores
+* 16GB+ RAM
+* 2TB+ free disk on a high performance SSD (e.g. NVMe)
+* 25 Mb/s internet bandwidth
+
+For more hardware considerations, check out the [ethereum.org guides](https://ethereum.org/en/developers/docs/nodes-and-clients/run-a-node/#environment-and-hardware) which explores various setups and trade-offs, such as running the node locally or in the cloud.
+
+For now, Geth, Teku & Lighthouse clients are packaged within the docker compose file provided in the [quickstart guides](../../start/quickstart_overview.md), so you don't have to install anything else to run a cluster. Just make sure you give them some time to sync once you start running your node.
+
+### What is the difference between a node, a validator and a cluster?
+
+A node is a single instance of Ethereum EL+CL clients that can communicate with other nodes to maintain the Ethereum blockchain.
+
+A validator is a node that participates in the consensus process by verifying transactions and creating new blocks. Multiple validators can run from the same node.
+
+A cluster is a group of nodes that act together as one or several validators, which allows for a more efficient use of resources, reduces operational costs, and provides better reliability and fault tolerance.
+
+### Can I migrate an existing Charon node to a new machine?
+
+It is possible to migrate your Charon node to another machine running the same config by moving the `.charon` folder with its contents to your new machine. Make sure the EL and CL on the new machine are synced before proceeding to the move to minimize downtime.
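+
+A minimal sketch of such a migration, assuming the docker compose quickstart setup (hostnames and paths are illustrative):
+
+```
+# On the old machine: stop the node and copy the .charon folder.
+docker compose down
+rsync -av .charon/ user@new-machine:~/charon-distributed-validator-node/.charon/
+
+# On the new machine (with EL and CL already synced): start the node again.
+docker compose up -d
+```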
+
+## Distributed Key Generation
+
+### What are the min and max numbers of operators for a Distributed Validator?
+
+Currently, the minimum is 4 operators with a threshold of 3.
+
+The threshold (aka quorum) corresponds to the minimum numbers of operators that need to be active for the validator(s) to be able to perform its duties. It is defined by the following formula `n-(ceil(n/3)-1)`. We strongly recommend using this default threshold in your DKG as it maximises liveness while maintaining BFT safety. Setting a 4 out of 4 cluster for example, would make your validator more vulnerable to going offline instead of less vulnerable. You can check the recommended threshold values for a cluster [here](../key-concepts.md#distributed-validator-threshold).
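+
+For example, a 4-node cluster has a threshold of `4-(ceil(4/3)-1) = 3`, a 7-node cluster a threshold of `5`, and a 10-node cluster a threshold of `7`.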
+
+## Obol Splits
+
+### What are Obol Splits?
+
+Obol Splits refers to a collection of composable smart contracts that enable the splitting of validator rewards and/or principal in a non-custodial, trust-minimised manner. Obol Splits contains integrations to enable DVs within Lido, Eigenlayer, and in future a number of other LSPs.
+
+### Are Obol Splits non-custodial?
+
+Yes. Unless you were to decide to [deploy an editable splitter contract](general.md#can-i-change-the-percentages-in-a-split), Obol Splits are immutable, non-upgradeable, non-custodial, and oracle-free.
+
+### Can I change the percentages in a split?
+
+Generally Obol Splits are deployed in an immutable fashion, meaning you cannot edit the percentages after deployment. However, if you were to choose to deploy a _controllable_ splitter contract when creating your Split, then yes, the address you select as controller can update the split percentages arbitrarily. A common pattern for this use case is to use a Gnosis SAFE as the controller address for the split, giving a group of entities (usually the operators and principal provider) the ability to update the percentages if need be. A well known example of this pattern is the [Protocol Guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html).
+
+### How do Obol Splits work?
+
+You can read more about how Obol Splits work [here](../../sc/introducing-obol-splits.md).
+
+### Are Obol Splits open source?
+
+Yes, Obol Splits are licensed under GPLv3 and the source code is available [here](https://github.com/ObolNetwork/obol-splits).
+
+### Are Obol Splits audited?
+
+The Obol Splits contracts have been audited, though further development has continued on the contracts since. Consult the audit results [here](../../sec/smart_contract_audit.md).
+
+### Are the Obol Splits contracts verified on Etherscan?
+
+Yes, you can view the verified contracts on Etherscan. A list of the contract deployments can be found [here](https://github.com/ObolNetwork/obol-splits?#deployment).
+
+### Does my cold wallet have to call the Obol Splits contracts?
+
+No. Any address can trigger the contracts to move the funds, they do not need to be a member of the Split either. You can set your cold wallet/custodian address as the recipient of the principal and rewards, and use any hot wallet to pay the gas fees to push the ether into the recipient address.
+
+### Are there any edge cases I should be aware of when using Obol Splits?
+
+The most important decision is to be aware of whether or not the Split contract you are using has been set up with editability. If a splitter is editable, you should understand what the address that can edit the split can do. Is the editor an EOA? Who controls that address? How secure is their seed phrase? Is it a smart contract? What can that contract do? Can the controller contract be upgraded? etc. Generally, the safest thing from Obol's perspective is not to have an editable splitter, and if in future you are unhappy with the configuration, to exit the validator and create a fresh cluster with new settings that fit your needs.
+
+Another aspect to be aware of is how the splitting of principal from rewards works using the Optimistic Withdrawal Recipient contract. There are edge cases relating to not calling the contracts periodically or ahead of a withdrawal, activating more validators than the contract was configured for, and a worst case mass slashing on the network. Consult the documentation on the contract [here](../../sc/introducing-obol-splits.md#optimistic-withdrawal-recipient), its audit [here](../../sec/smart_contract_audit.md), and follow up with the core team if you have further questions.
+
+## Debugging Errors in Logs
+
+You can check if the containers on your node are outputting errors by running `docker compose logs` on a machine with a running cluster.
+
+Diagnose some common errors and view their resolutions [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/int/faq/errors.mdx).
diff --git a/docs/versioned_docs/version-v0.19.1/int/faq/risks.md b/docs/versioned_docs/version-v0.19.1/int/faq/risks.md
new file mode 100644
index 0000000000..bfc1e2980c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/int/faq/risks.md
@@ -0,0 +1,40 @@
+---
+sidebar_position: 3
+description: Centralization Risks and mitigation
+---
+
+# Centralization risks and mitigation
+
+## Risk: Obol hosting the relay infrastructure
+**Mitigation**: Self-host a relay
+
+One of the risks associated with Obol hosting the [LibP2P relays](../../charon/networking.md) infrastructure allowing peer discovery is that if Obol-hosted relays go down, peers won't be able to discover each other and perform the DKG. To mitigate this risk, external organizations and node operators can consider self-hosting a relay. This way, if Obol's relays go down, the clusters can still operate through other relays in the network. Ensure that all nodes in the cluster use the same relays; otherwise, nodes connected to different relays will not be able to find each other.
+
+The following non-Obol entities run relays that you can consider adding to your cluster (you can have more than one per cluster, see the `--p2p-relays` flag of [`charon run`](../../charon/charon-cli-reference.md#the-run-command)):
+
+| Entity | Relay URL |
+|-----------|---------------------------------------|
+| [DSRV](https://www.dsrvlabs.com/) | https://charon-relay.dsrvlabs.dev |
+| [Infstones](https://infstones.com/) | https://obol-relay.infstones.com:3640/ |
+| [Hashquark](https://www.hashquark.io/) | https://relay-2.prod-relay.721.land/ |
+| [Figment](https://figment.io/) | https://relay-1.obol.figment.io/ |
+
+## Risk: Obol being able to update Charon code
+**Mitigation**: Pin specific docker versions or compile from source on a trusted commit
+
+Another risk associated with Obol is having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) running on the network which could introduce vulnerabilities or malicious code. To mitigate this risk, operators can consider pinning specific versions of the code that have been thoroughly tested and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community.
+
+## Risk: Obol hosting the DV Launchpad
+**Mitigation**: Use [`create cluster`](../../charon/charon-cli-reference.md) or [`create dkg`](../../charon/charon-cli-reference.md) locally and distribute the files manually
+
+Hosting the first Charon frontend, the [DV Launchpad](../../dvl/intro.md), on a centralized server could create a single point of failure, as users would have to rely on Obol's server to access the protocol. This could limit the decentralization of the protocol and could make it vulnerable to attacks or downtime. Obol hosting the launchpad on a decentralized network, such as IPFS, is a first step but not enough. This is why the Charon code is open-source and contains a CLI interface to interact with the protocol locally.
+
+To mitigate the risk of launchpad failure, consider using the `create cluster` or `create dkg` commands locally and distributing the key shares files manually.
+
+
+## Risk: Obol going bust/rogue
+**Mitigation**: Use key recovery
+
+The final centralization risk associated with Obol is the possibility of the company going bankrupt or acting maliciously, which would lead to a loss of control over the network and potentially cause damage to the ecosystem. To mitigate this risk, Obol has implemented a key recovery mechanism. This would allow the clusters to continue operating and to retrieve full private keys even if Obol is no longer able to provide support.
+
+A guide to recombine key shares into a single private key can be accessed [here](../../advanced/quickstart-combine.md).
diff --git a/docs/versioned_docs/version-v0.19.1/int/key-concepts.md b/docs/versioned_docs/version-v0.19.1/int/key-concepts.md
new file mode 100644
index 0000000000..7e96175216
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/int/key-concepts.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 2
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is possible with the use of **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes some of the single points of failure in validation. Should <33% of the participating nodes in a DV cluster go offline, the remaining active nodes can still come to consensus on what to sign and can produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes Geth, Lighthouse, Charon and Teku.
+
+### Execution Client
+
+
+
+An execution client (formerly known as an Eth1 client) specializes in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/charon/intro/README.md).
+
+### Validator Client
+
+
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof of stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Threshold
+
+The number of nodes in a cluster that need to be online and honest for its distributed validators to remain online is outlined in the following table.
+
+| Cluster Size | Threshold | Note |
+| :----------: | :-------: | --------------------------------------------- |
+| 4 | 3/4 | Minimum threshold |
+| 5 | 4/5 | |
+| 6 | 4/6 | Minimum to tolerate two offline nodes |
+| 7 | 5/7 | Minimum to tolerate two **malicious** nodes |
+| 8 | 6/8 | |
+| 9 | 6/9 | Minimum to tolerate three offline nodes |
+| 10 | 7/10 | Minimum to tolerate three **malicious** nodes |
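+
+As a rule of thumb, the thresholds above are the smallest integer that is at least two-thirds of the cluster size. The following snippet is an illustration only (derived from the table, not taken from charon) showing how the column can be reproduced:
+
+```typescript
+// Illustration: reproduces the "Threshold" column above as ceil(2n/3).
+function threshold(clusterSize: number): number {
+  return Math.ceil((clusterSize * 2) / 3);
+}
+
+for (let n = 4; n <= 10; n++) {
+  console.log(`Cluster size ${n} -> threshold ${threshold(n)}/${n}`);
+}
+```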
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata. Read more about these ceremonies [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/charon/dkg/README.md).
diff --git a/docs/versioned_docs/version-v0.19.1/int/overview.md b/docs/versioned_docs/version-v0.19.1/int/overview.md
new file mode 100644
index 0000000000..0fad2b0a79
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/int/overview.md
@@ -0,0 +1,55 @@
+---
+sidebar_position: 1
+description: An overview of the Obol network
+---
+
+# Overview of Obol
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 35 members that are spread across the world.
+
+The core team is building the Distributed Validator Protocol, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that distributed validators will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that preserve validators' current client and remote signing infrastructure.
+
+Similar to how roll-up technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking middlewares that can be adopted at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvl/intro.md), a user interface for bootstrapping Distributed Validators
+* [Charon](../charon/intro.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Splits](../sc/introducing-obol-splits.md), a set of solidity smart contracts for the distribution of rewards from Distributed Validators
+* [Obol Testnets](../fr/testnet.md), distributed validator infrastructure for Ethereum public test networks, to enable any sized operator to test their deployment before running Distributed Validators on mainnet.
+
+### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimentation with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+## The Vision
+
+The road to decentralizing stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+### V1 - Trusted Distributed Validators
+
+
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivization is still extremely valuable. Individuals and staking-as-a-service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust within the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group of operators that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators they trust sufficiently to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivization scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivization alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivization layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus-layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.19.1/sc/README.md b/docs/versioned_docs/version-v0.19.1/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.19.1/sc/introducing-obol-splits.md b/docs/versioned_docs/version-v0.19.1/sc/introducing-obol-splits.md
new file mode 100644
index 0000000000..2f14d01024
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sc/introducing-obol-splits.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 1
+description: Smart contracts for managing Distributed Validators
+---
+
+# Obol Splits
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators. These contracts include:
+
+- Withdrawal Recipients: Contracts used for a validator's withdrawal address.
+- Split contracts: Contracts to split ether across multiple entities. Developed by [Splits.org](https://splits.org)
+- Split controllers: Contracts that can mutate a splitter's configuration.
+
+Two key goals of validator reward management are:
+
+1. To be able to differentiate reward ether from principal ether, such that node operators can be paid a percentage of the _reward_ they accrue for the principal provider rather than a percentage of _principal+reward_.
+2. To be able to withdraw the rewards in an ongoing manner without exiting the validator.
+
+Without access to the consensus layer state in the EVM to check a validator's status or balance, and because the incoming ether arrives via an irregular state transition rather than a transaction, neither of these requirements is easily satisfiable.
+
+The following sections outline different contracts that can be composed to form a solution for one or both goals.
+
+## Withdrawal Recipients
+
+Validators have two streams of revenue, the consensus layer rewards and the execution layer rewards. Withdrawal Recipients focus on the former, receiving the balance skimming from a validator with >32 ether in an ongoing manner, and receiving the principal of the validator upon exit.
+
+### Optimistic Withdrawal Recipient
+
+This is the primary withdrawal recipient Obol uses, as it allows for the separation of reward from principal, as well as permitting the ongoing withdrawal of accruing rewards.
+
+An Optimistic Withdrawal Recipient [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipient.sol) takes three inputs when deployed:
+
+- A _principal_ address: The address that controls where the principal ether will be transferred post-exit.
+- A _reward_ address: The address where the accruing reward ether is transferred to.
+- The amount of ether that makes up the principal.
+
+This contract **assumes that any ether that has appeared in its address since it was last able to do balance accounting is reward skimmed from an ongoing validator** (or number of validators), unless the change is > 16 ether. This means balance skimming is immediately claimable as reward, while an inflow of e.g. 31 ether is tracked as a return of principal (despite the validator having been slashed in this example).
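+
+As an illustration of the classification rule just described (a simplified model only, not the contract's actual Solidity implementation), the decision comes down to comparing an observed inflow against a 16 ether threshold:
+
+```typescript
+// Simplified model of the rule described above (not the deployed contract).
+// Inflows of 16 ether or less are treated as claimable reward; larger inflows
+// are treated as returned principal. Capping at the remaining principal is an
+// assumption of this sketch.
+const CLASSIFICATION_THRESHOLD = 16n * 10n ** 18n; // 16 ether, in wei
+
+function classifyInflow(
+  inflowWei: bigint,
+  remainingPrincipalWei: bigint,
+): { rewardWei: bigint; principalWei: bigint } {
+  if (inflowWei <= CLASSIFICATION_THRESHOLD) {
+    return { rewardWei: inflowWei, principalWei: 0n };
+  }
+  const principalWei =
+    inflowWei < remainingPrincipalWei ? inflowWei : remainingPrincipalWei;
+  return { rewardWei: inflowWei - principalWei, principalWei };
+}
+```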
+
+:::warning
+
+Worst-case mass slashing penalties can theoretically exceed 16 ether. If this were to occur, the returned principal would be misclassified as reward and distributed to the wrong address. This risk is the drawback that makes this contract variant 'optimistic'. If you intend to use this contract type, **it is important you understand and accept this risk**, however minute.
+
+The alternative is to use a splits.org [waterfall contract](https://docs.splits.org/core/waterfall), which won't allow the claiming of rewards until all principal ether has been returned, meaning validators need to be exited before operators can claim their CL rewards.
+
+:::
+
+This contract fits both design goals and can be used with thousands of validators. It is safe to deploy an Optimistic Withdrawal Recipient with a principal higher than you actually end up using. However, you should process the accrued rewards before exiting a validator, otherwise the reward recipients will be short-changed, as that balance may be counted as principal instead of reward the next time the contract is updated. If you activate more validators than you specified in your contract deployment, you will record too much ether as reward and will overpay your reward address with ether that was principal, not earned rewards. Current iterations of this contract are not designed for editing the amount of principal after deployment.
+
+#### OWR Factory Deployment
+
+The OptimisticWithdrawalRecipient contract is deployed via a [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipientFactory.sol). The factory is deployed at the following addresses on the following chains.
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | [0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522](https://etherscan.io/address/0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522) |
+| Goerli | [0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26](https://goerli.etherscan.io/address/0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26) |
+| Holesky | |
+| Sepolia | [0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a](https://sepolia.etherscan.io/address/0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a) |
+
+### Exitable Withdrawal Recipient
+
+A much awaited feature for proof of stake Ethereum is the ability to trigger the exit of a validator with only the withdrawal address. This is tracked in [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002). Support for this feature will be inheritable in all other withdrawal recipient contracts. This will mitigate the risk to a principal provider of funds being stuck, or a validator being irrecoverably offline.
+
+## Split Contracts
+
+A split, or splitter, is a set of contracts that can divide ether or an ERC20 across a number of addresses. Splits are often used in conjunction with withdrawal recipients. Execution Layer rewards for a DV are directed to a split address through the use of a `fee recipient` address. Splits can be either immutable, or mutable by way of an admin address capable of updating them.
+
+Further information about splits can be found on the splits.org team's [docs site](https://docs.splits.org/). The addresses of their deployments can be found [here](https://docs.splits.org/core/split#addresses).
+
+## Split Controllers
+
+Splits can be completely edited through the use of the `controller` address; however, total editability of a split is not always wanted. A permissive controller and a restrictive controller are given as examples below.
+
+### (Gnosis) SAFE wallet
+
+A [SAFE](https://safe.global/) is a common method to administrate a mutable split. The most well-known deployment of this pattern is the [protocol guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html). The SAFE can arbitrarily update the split to any set of addresses with any valid set of percentages.
+
+### Immutable Split Controller
+
+This is a [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitController.sol) that updates one split configuration with another, exactly once. Only a permissioned address can trigger the change. This contract is suitable for changing a split at an unknown point in future to a configuration pre-defined at deployment.
+
+The Immutable Split Controller [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitControllerFactory.sol) can be found at the following addresses:
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | |
+| Goerli | [0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f](https://goerli.etherscan.io/address/0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f) |
+| Holesky | |
+| Sepolia | |
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/README.md b/docs/versioned_docs/version-v0.19.1/sdk/README.md
new file mode 100644
index 0000000000..1bcfa0dc3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/README.md
@@ -0,0 +1,2 @@
+# sdk
+
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/classes/README.md b/docs/versioned_docs/version-v0.19.1/sdk/classes/README.md
new file mode 100644
index 0000000000..46d80f843a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/classes/README.md
@@ -0,0 +1,2 @@
+# classes
+
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/classes/client.md b/docs/versioned_docs/version-v0.19.1/sdk/classes/client.md
new file mode 100644
index 0000000000..d24a05f143
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/classes/client.md
@@ -0,0 +1,201 @@
+---
+sidebar_position: 6
+sidebar_label: Client
+description: The client object of the Obol SDK
+---
+
+# Client
+
+The Obol SDK `Client` can be used for creating, managing, and activating distributed validators.
+
+### Extends
+
+* `Base`
+
+### Constructors
+
+#### new Client(config, signer)
+
+> **new Client**(`config`, `signer`?): [`Client`](client.md)
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ----------------- | -------- | --------------- |
+| `config` | `Object` | |
+| `config.baseUrl`? | `string` | - |
+| `config.chainId`? | `number` | - |
+| `signer`? | `Signer` | ethersJS Signer |
+
+**Returns**
+
+[`Client`](client.md)
+
+Obol-SDK Client instance
+
+An example of how to instantiate obol-sdk Client: [obolClient](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L29)
+
+**Overrides**
+
+`Base.constructor`
+
+**Source**
+
+index.ts:27
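+
+As a rough orientation, instantiation might look like the sketch below. It assumes `Client` is exported from the `@obolnetwork/obol-sdk` package and uses an ethers v6 `Wallet` as the Signer; see the linked `obolClient` example for canonical usage.
+
+```typescript
+// Sketch only: import paths and network choice are assumptions.
+import { Client } from "@obolnetwork/obol-sdk";
+import { Wallet } from "ethers";
+
+// Any ethers.js Signer works; a throwaway wallet is used here for illustration.
+const signer = Wallet.createRandom();
+
+// chainId selects the target network (see the FORK_MAPPING enumeration).
+const client = new Client({ chainId: 17000 }, signer);
+```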
+
+### Properties
+
+| Property | Modifier | Type | Inherited from |
+| -------------- | --------- | ----------------------- | ------------------- |
+| `baseUrl` | `public` | `string` | `Base.baseUrl` |
+| `chainId` | `public` | `number` | `Base.chainId` |
+| `fork_version` | `public` | `string` | `Base.fork_version` |
+| `signer` | `private` | `undefined` \| `Signer` | - |
+
+### Methods
+
+#### createClusterDefinition()
+
+> **createClusterDefinition**(`newCluster`): `Promise`< `string` >
+
+Creates a cluster definition which contains cluster configuration.
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ------------ | --------------------------------------------------- | ----------------------- |
+| `newCluster` | [`ClusterPayload`](../interfaces/clusterpayload.md) | The new unique cluster. |
+
+**Returns**
+
+`Promise`< `string` >
+
+config\_hash.
+
+**Throws**
+
+On duplicate entries, missing or wrong cluster keys.
+
+An example of how to use createClusterDefinition: [createObolCluster](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+**Source**
+
+index.ts:42
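+
+A hedged usage sketch follows, assuming a `client` instance like the one shown in the constructor section. All addresses are placeholders, and the exact set of required operator fields is defined by [`ClusterPayload`](../interfaces/clusterpayload.md); see the linked `createObolCluster` example for canonical usage.
+
+```typescript
+// Sketch only: placeholder values, one operator entry per node operator.
+const configHash = await client.createClusterDefinition({
+  name: "example-cluster",
+  operators: [
+    { address: "0x..." },
+    { address: "0x..." },
+    { address: "0x..." },
+    { address: "0x..." },
+  ],
+  validators: [
+    {
+      fee_recipient_address: "0x...",
+      withdrawal_address: "0x...",
+    },
+  ],
+});
+console.log("config_hash:", configHash);
+```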
+
+***
+
+#### getClusterDefinition()
+
+> **getClusterDefinition**(`configHash`): `Promise`< [`ClusterDefintion`](../interfaces/clusterdefintion.md) >
+
+**Parameters**
+
+| Parameter | Type |
+| ------------ | -------- |
+| `configHash` | `string` |
+
+**Returns**
+
+`Promise`< [`ClusterDefintion`](../interfaces/clusterdefintion.md) >
+
+The cluster definition for config hash
+
+**Throws**
+
+On not found config hash.
+
+An example of how to use getClusterDefinition: [getObolClusterDefinition](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+**Source**
+
+index.ts:132
+
+***
+
+#### getClusterLock()
+
+> **getClusterLock**(`configHash`): `Promise`< [`ClusterLock`](../interfaces/clusterlock.md) >
+
+**Parameters**
+
+| Parameter | Type |
+| ------------ | -------- |
+| `configHash` | `string` |
+
+**Returns**
+
+`Promise`< [`ClusterLock`](../interfaces/clusterlock.md) >
+
+The matched cluster details (lock) from DB
+
+**Throws**
+
+On not found cluster definition or lock.
+
+An example of how to use getClusterLock: [getObolClusterLock](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+**Source**
+
+index.ts:148
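+
+A short retrieval sketch covering both `getClusterDefinition` and `getClusterLock` (the `configHash` value is a placeholder):
+
+```typescript
+// Sketch only: fetch the definition, then the lock once the DKG has completed.
+const definition = await client.getClusterDefinition(configHash);
+console.log("operators:", definition.operators.length);
+
+const lock = await client.getClusterLock(configHash);
+console.log("lock_hash:", lock.lock_hash);
+```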
+
+***
+
+#### request()
+
+> **`protected`** **request**< `T`>(`endpoint`, `options`?): `Promise`< `T` >
+
+**Type parameters**
+
+| Type parameter |
+| -------------- |
+| `T` |
+
+**Parameters**
+
+| Parameter | Type |
+| ---------- | ------------- |
+| `endpoint` | `string` |
+| `options`? | `RequestInit` |
+
+**Returns**
+
+`Promise`< `T` >
+
+**Inherited from**
+
+`Base.request`
+
+**Source**
+
+base.ts:23
+
+***
+
+#### updateClusterDefinition()
+
+> **updateClusterDefinition**(`operatorPayload`, `configHash`): `Promise`< [`ClusterDefintion`](../interfaces/clusterdefintion.md) >
+
+Approves joining a cluster with specific configuration.
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ----------------- | ------------------------------------------------------- | ---------------------------------------------------------------------- |
+| `operatorPayload` | [`OperatorPayload`](../type-aliases/operatorpayload.md) | The operator data including signatures. |
+| `configHash`      | `string`                                                | The config hash of the cluster that the operator confirms joining.     |
+
+**Returns**
+
+`Promise`< [`ClusterDefintion`](../interfaces/clusterdefintion.md) >
+
+The cluster definition.
+
+**Throws**
+
+On unauthorized, duplicate entries, missing keys, not found cluster or invalid data.
+
+An example of how to use updateClusterDefinition: [updateClusterDefinition](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+**Source**
+
+index.ts:93
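+
+A hedged sketch of an operator approving a cluster configuration (the ENR, version, and `configHash` values are placeholders):
+
+```typescript
+// Sketch only: approve joining the cluster identified by configHash.
+const updatedDefinition = await client.updateClusterDefinition(
+  { enr: "enr:-...", version: "v1.7.0" },
+  configHash,
+);
+console.log("operators:", updatedDefinition.operators.length);
+```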
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/enumerations/README.md b/docs/versioned_docs/version-v0.19.1/sdk/enumerations/README.md
new file mode 100644
index 0000000000..ec74a1ba13
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/enumerations/README.md
@@ -0,0 +1,2 @@
+# enumerations
+
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/enumerations/fork_mapping.md b/docs/versioned_docs/version-v0.19.1/sdk/enumerations/fork_mapping.md
new file mode 100644
index 0000000000..0cb899ceb4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/enumerations/fork_mapping.md
@@ -0,0 +1,12 @@
+# FORK\_MAPPING
+
+Permitted `chainId`s for the [Client](../classes/client.md) `config` constructor parameter.
+
+### Enumeration Members
+
+| Enumeration Member | Value | Description |
+| ------------------ | ------- | -------------- |
+| `0x00000000` | `1` | Mainnet. |
+| `0x00000064` | `100` | Gnosis Chain. |
+| `0x00001020` | `5` | Goerli/Prater. |
+| `0x01017000` | `17000` | Holesky. |
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/index.md b/docs/versioned_docs/version-v0.19.1/sdk/index.md
new file mode 100644
index 0000000000..528f0700d2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/index.md
@@ -0,0 +1,41 @@
+---
+sidebar_position: 1
+title: Obol SDK Reference
+sidebar_label: Intro
+description: Obol SDK
+---
+
+# Obol SDK Reference
+
+
+
+This is the reference for the Obol Software Development Kit, for creating Distributed Validators with the help of the [Obol API](https://github.com/ObolNetwork/obol-docs/blob/main/api/README.md).
+
+### Getting Started
+
+Check out our [docs](https://docs.obol.tech/docs/int/quickstart/advanced/quickstart-sdk), [examples](https://github.com/ObolNetwork/obol-sdk-examples/), and SDK [reference](https://obolnetwork.github.io/obol-packages). Further guides and walkthroughs coming soon.
+
+### Enumerations
+
+* [FORK\_MAPPING](enumerations/fork_mapping.md)
+
+### Classes
+
+* [Client](classes/client.md)
+
+### Interfaces
+
+* [ClusterDefintion](interfaces/clusterdefintion.md)
+* [ClusterLock](interfaces/clusterlock.md)
+* [ClusterPayload](interfaces/clusterpayload.md)
+
+### Type Aliases
+
+* [BuilderRegistration](type-aliases/builderregistration.md)
+* [BuilderRegistrationMessage](type-aliases/builderregistrationmessage.md)
+* [ClusterCreator](type-aliases/clustercreator.md)
+* [ClusterOperator](type-aliases/clusteroperator.md)
+* [ClusterValidator](type-aliases/clustervalidator.md)
+* [DepositData](type-aliases/depositdata.md)
+* [DistributedValidator](type-aliases/distributedvalidator.md)
+* [OperatorPayload](type-aliases/operatorpayload.md)
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/interfaces/README.md b/docs/versioned_docs/version-v0.19.1/sdk/interfaces/README.md
new file mode 100644
index 0000000000..95109455d3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/interfaces/README.md
@@ -0,0 +1,2 @@
+# interfaces
+
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterdefintion.md b/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterdefintion.md
new file mode 100644
index 0000000000..7b027064fa
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterdefintion.md
@@ -0,0 +1,25 @@
+# ClusterDefintion
+
+Cluster Definition
+
+### Extends
+
+* [`ClusterPayload`](clusterpayload.md)
+
+### Properties
+
+| Property | Type | Description | Inherited from |
+| ------------------ | ------------------------------------------------------------ | ---------------------------------------------------- | -------------------------------------------------- |
+| `config_hash` | `string` | The cluster configuration hash. | - |
+| `creator` | [`ClusterCreator`](../type-aliases/clustercreator.md) | The creator of the cluster. | - |
+| `definition_hash?` | `string` | The hash of the cluster definition. | - |
+| `dkg_algorithm` | `string` | The cluster dkg algorithm. | - |
+| `fork_version` | `string` | The cluster fork version. | - |
+| `name` | `string` | The cluster name. | [`ClusterPayload`](clusterpayload.md).`name` |
+| `num_validators` | `number` | The number of distributed validators in the cluster. | - |
+| `operators` | [`ClusterOperator`](../type-aliases/clusteroperator.md)\[] | The cluster nodes operators addresses. | [`ClusterPayload`](clusterpayload.md).`operators` |
+| `threshold` | `number` | The distributed validator threshold. | - |
+| `timestamp` | `string` | The cluster creation timestamp. | - |
+| `uuid` | `string` | The cluster uuid. | - |
+| `validators` | [`ClusterValidator`](../type-aliases/clustervalidator.md)\[] | The clusters validators information. | [`ClusterPayload`](clusterpayload.md).`validators` |
+| `version` | `string` | The cluster configuration version. | - |
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterlock.md b/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterlock.md
new file mode 100644
index 0000000000..5dcfc6c3d5
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterlock.md
@@ -0,0 +1,13 @@
+# ClusterLock
+
+Cluster Lock (Cluster Details after DKG is complete)
+
+### Properties
+
+| Property | Type | Description |
+| ------------------------ | -------------------------------------------------------------------- | ----------------------------------------------------------- |
+| `cluster_definition` | [`ClusterDefintion`](clusterdefintion.md) | The cluster definition. |
+| `distributed_validators` | [`DistributedValidator`](../type-aliases/distributedvalidator.md)\[] | The cluster distributed validators. |
+| `lock_hash` | `string` | The hash of the cluster lock. |
+| `node_signatures` | `string`\[] | Node Signature for the lock hash by the node secp256k1 key. |
+| `signature_aggregate` | `string` | The cluster bls signature aggregate. |
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterpayload.md b/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterpayload.md
new file mode 100644
index 0000000000..2444d6d33c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/interfaces/clusterpayload.md
@@ -0,0 +1,15 @@
+# ClusterPayload
+
+Cluster Required Configuration
+
+### Extended by
+
+* [`ClusterDefintion`](clusterdefintion.md)
+
+### Properties
+
+| Property | Type | Description |
+| ------------ | ------------------------------------------------------------ | -------------------------------------- |
+| `name` | `string` | The cluster name. |
+| `operators` | [`ClusterOperator`](../type-aliases/clusteroperator.md)\[] | The cluster nodes operators addresses. |
+| `validators` | [`ClusterValidator`](../type-aliases/clustervalidator.md)\[] | The clusters validators information. |
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/README.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/README.md
new file mode 100644
index 0000000000..ef07201c1b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/README.md
@@ -0,0 +1,2 @@
+# type-aliases
+
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/builderregistration.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/builderregistration.md
new file mode 100644
index 0000000000..5451854308
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/builderregistration.md
@@ -0,0 +1,16 @@
+# BuilderRegistration
+
+> **BuilderRegistration**: `Object`
+
+Pre-generated Signed Validator Builder Registration
+
+### Type declaration
+
+| Member | Type | Description |
+| ----------- | ------------------------------------------------------------- | -------------------------------------------------- |
+| `message` | [`BuilderRegistrationMessage`](builderregistrationmessage.md) | Builder registration message. |
+| `signature` | `string` | BLS signature of the builder registration message. |
+
+### Source
+
+types.ts:143
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/builderregistrationmessage.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/builderregistrationmessage.md
new file mode 100644
index 0000000000..088c590b06
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/builderregistrationmessage.md
@@ -0,0 +1,16 @@
+> **BuilderRegistrationMessage**: `Object`
+
+Unsigned DV Builder Registration Message
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `fee_recipient` | `string` | The DV fee recipient. |
+| `gas_limit` | `number` | Default is 30000000. |
+| `pubkey` | `string` | The public key of the DV. |
+| `timestamp` | `number` | Timestamp when generating cluster lock file. |
+
+## Source
+
+types.ts:125
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clustercreator.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clustercreator.md
new file mode 100644
index 0000000000..5684c5f8e4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clustercreator.md
@@ -0,0 +1,14 @@
+> **ClusterCreator**: `Object`
+
+Cluster Creator
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `address` | `string` | The creator address. |
+| `config_signature` | `string` | The cluster configuration signature. |
+
+## Source
+
+types.ts:51
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clusteroperator.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clusteroperator.md
new file mode 100644
index 0000000000..1182bb1289
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clusteroperator.md
@@ -0,0 +1,18 @@
+> **ClusterOperator**: `Object`
+
+Cluster Node Operator
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `address` | `string` | The operator address. |
+| `config_signature` | `string` | The operator configuration signature. |
+| `enr` | `string` | The operator ethereum node record. |
+| `enr_signature` | `string` | The operator enr signature. |
+| `fork_version` | `string` | The cluster fork_version. |
+| `version` | `string` | The cluster version. |
+
+## Source
+
+types.ts:22
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clustervalidator.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clustervalidator.md
new file mode 100644
index 0000000000..9a939f39e3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/clustervalidator.md
@@ -0,0 +1,14 @@
+> **ClusterValidator**: `Object`
+
+Cluster Validator
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `fee_recipient_address` | `string` | The validator fee recipient address. |
+| `withdrawal_address` | `string` | The validator reward address. |
+
+## Source
+
+types.ts:62
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/depositdata.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/depositdata.md
new file mode 100644
index 0000000000..382eaab253
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/depositdata.md
@@ -0,0 +1,17 @@
+> **DepositData**: `Object`
+
+Deposit Data
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `amount` | `string` | 32 ether. |
+| `deposit_data_root` | `string` | A checksum for DepositData fields. |
+| `pubkey` | `string` | The public key of the distributed validator. |
+| `signature` | `string` | BLS signature of the deposit message. |
+| `withdrawal_credentials` | `string` | The 0x01 withdrawal address of the DV. |
+
+## Source
+
+types.ts:155
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/distributedvalidator.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/distributedvalidator.md
new file mode 100644
index 0000000000..da90f7b3fc
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/distributedvalidator.md
@@ -0,0 +1,18 @@
+# DistributedValidator
+
+> **DistributedValidator**: `Object`
+
+Distributed Validator
+
+### Type declaration
+
+| Member | Type | Description |
+| ------------------------ | ----------------------------------------------- | ---------------------------------------------------------------------------------- |
+| `builder_registration` | [`BuilderRegistration`](builderregistration.md) | pre-generated signed validator builder registration to be sent to builder network. |
+| `deposit_data` | `Partial`< [`DepositData`](depositdata.md) > | The required deposit data for activating the DV. |
+| `distributed_public_key` | `string` | The public key of the distributed validator. |
+| `public_shares`          | `string`\[]                                     | The public keys of each node's distributed validator key share.                    |
+
+### Source
+
+types.ts:176
diff --git a/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/operatorpayload.md b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/operatorpayload.md
new file mode 100644
index 0000000000..b922105da2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sdk/type-aliases/operatorpayload.md
@@ -0,0 +1,9 @@
+# OperatorPayload
+
+> **OperatorPayload**: `Partial`< [`ClusterOperator`](clusteroperator.md) > & `Required`< `Pick`< [`ClusterOperator`](clusteroperator.md), `"enr"` | `"version"` > >
+
+A partial view of `ClusterOperator` with `enr` and `version` as required properties.
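+
+For illustration (placeholder values, assuming `OperatorPayload` is imported from the SDK): any `ClusterOperator` field may be supplied, but `enr` and `version` must be present.
+
+```typescript
+// Placeholder values only.
+const payload: OperatorPayload = {
+  enr: "enr:-...",
+  version: "v1.7.0",
+  address: "0x...",
+};
+```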
+
+### Source
+
+types.ts:46
diff --git a/docs/versioned_docs/version-v0.19.1/sec/README.md b/docs/versioned_docs/version-v0.19.1/sec/README.md
new file mode 100644
index 0000000000..aeb3b02cce
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sec/README.md
@@ -0,0 +1,2 @@
+# sec
+
diff --git a/docs/versioned_docs/version-v0.19.1/sec/bug-bounty.md b/docs/versioned_docs/version-v0.19.1/sec/bug-bounty.md
new file mode 100644
index 0000000000..48c52d89b4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sec/bug-bounty.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 2
+description: Bug Bounty Policy
+---
+
+# Obol Bug Bounty
+
+## Overview
+
+Obol Labs is committed to ensuring the security of our distributed validator software and services. As part of our commitment to security, we have established a bug bounty program to encourage security researchers to report vulnerabilities in our software and services to us so that we can quickly address them.
+
+## Eligibility
+
+To participate in the Bug Bounty Program you must:
+
+- Not be a resident of any country that does not allow participation in these types of programs
+- Be at least 14 years old and have legal capacity to agree to these terms and participate in the Bug Bounty Program
+- Have permission from your employer to participate
+- Not be (for the previous 12 months) an Obol Labs employee, an immediate family member of an Obol employee, an Obol contractor, or an Obol service provider.
+
+## Scope
+
+The bug bounty program applies to software and services that are built by Obol. Only submissions under the following domains are eligible for rewards:
+
+- Charon DVT Middleware
+- DV Launchpad
+- Obol’s Public API
+- Obol’s Smart Contracts and the contracts they depend on.
+- Obol’s Public Relay
+
+Additionally, all vulnerabilities that require or are related to the following are out of scope:
+
+- Social engineering
+- Rate Limiting (Non-critical issues)
+- Physical security
+- Non-security-impacting UX issues
+- Vulnerabilities or weaknesses in third party applications that integrate with Obol
+- The Obol website or the Obol infrastructure in general is NOT part of this bug bounty program.
+
+## Rules
+
+- Bug has not been publicly disclosed
+- Vulnerabilities that have been previously submitted by another contributor or already known by the Obol development team are not eligible for rewards
+- The size of the bounty payout depends on the assessment of the severity of the exploit. Please refer to the rewards section below for additional details
+- Bugs must be reproducible in order for us to verify the vulnerability. A working proof of concept must be included in the submission
+- Rewards and the validity of bugs are determined by the Obol security team and any payouts are made at their sole discretion
+- Terms and conditions of the Bug Bounty program can be changed at any time at the discretion of Obol
+- Details of any valid bugs may be shared with complementary protocols utilised in the Obol ecosystem in order to promote ecosystem cohesion and safety.
+
+## Rewards
+
+The rewards for participating in our bug bounty program will be based on the severity and impact of the vulnerability discovered. We will evaluate each submission on a case-by-case basis, and the rewards will be at Obol’s sole discretion.
+
+### Low: up to $500
+
+A Low-level vulnerability is one that has a limited impact and can be easily fixed. Unlikely to have a meaningful impact on availability, integrity, and/or loss of funds.
+
+- Low impact, medium likelihood
+- Medium impact, low likelihood
+
+Examples:
+
+- Attacker can sometimes put a charon node in a state that causes it to drop one out of every one hundred attestations made by a validator
+
+### Medium: up to $1,000
+
+A Medium-level vulnerability is one that has a moderate impact and requires a more significant effort to fix. Possible to have an impact on validator availability, integrity, and/or loss of funds.
+
+- High impact, low likelihood
+- Medium impact, medium likelihood
+- Low impact, high likelihood
+
+Examples:
+
+- Attacker can successfully conduct eclipse attacks on the cluster nodes with peer-ids with 4 leading zero bytes.
+
+### High: up to $4,000
+
+A High-level vulnerability is one that has a significant impact on the security of the system and requires a significant effort to fix. Likely to have impact on availability, integrity, and/or loss of funds.
+
+- High impact, medium likelihood
+- Medium impact, high likelihood
+
+Examples:
+
+- Attacker can successfully partition the cluster and keep the cluster offline.
+
+### Critical: up to $10,000
+
+A Critical-level vulnerability is one that has a severe impact on the security of the in-production system and requires immediate attention to fix. Highly likely to have a material impact on availability, integrity, and/or loss of funds.
+
+- High impact, high likelihood
+
+Examples:
+
+- Attacker can successfully conduct remote code execution in charon client to exfiltrate BLS private key material.
+
+We may offer rewards in the form of cash, merchandise, or recognition. We will only award one reward per vulnerability discovered, and we reserve the right to deny a reward if we determine that the researcher has violated the terms and conditions of this policy.
+
+## Submission process
+
+Please email security@obol.tech
+
+Your report should include the following information:
+
+- Description of the vulnerability and its potential impact
+- Steps to reproduce the vulnerability
+- Proof of concept code, screenshots, or other supporting documentation
+- Your name, email address, and any contact information you would like to provide.
+
+Reports that do not include sufficient detail will not be eligible for rewards.
+
+## Disclosure Policy
+
+Obol Labs will disclose the details of the vulnerability and the researcher’s identity (with their consent) only after we have remediated the vulnerability and issued a fix. Researchers must keep the details of the vulnerability confidential until Obol Labs has acknowledged and remediated the issue.
+
+## Legal Compliance
+
+All participants in the bug bounty program must comply with all applicable laws, regulations, and policy terms and conditions. Obol will not be held liable for any unlawful or unauthorised activities performed by participants in the bug bounty program.
+
+We will not take any legal action against security researchers who discover and report security vulnerabilities in accordance with this bug bounty policy. We do, however, reserve the right to take legal action against anyone who violates the terms and conditions of this policy.
+
+## Non-Disclosure Agreement
+
+All participants in the bug bounty program will be required to sign a non-disclosure agreement (NDA) before they are given access to closed source software and services for testing purposes.
diff --git a/docs/versioned_docs/version-v0.19.1/sec/contact.md b/docs/versioned_docs/version-v0.19.1/sec/contact.md
new file mode 100644
index 0000000000..e66e1663e2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sec/contact.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 3
+description: Security details for the Obol Network
+---
+
+# Contacts
+
+Please email security@obol.tech to report a security incident, vulnerability, bug or inquire about Obol's security.
+
+Also, visit the [obol security repo](https://github.com/ObolNetwork/obol-security) for more details.
diff --git a/docs/versioned_docs/version-v0.19.1/sec/ev-assessment.md b/docs/versioned_docs/version-v0.19.1/sec/ev-assessment.md
new file mode 100644
index 0000000000..a8ce756359
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sec/ev-assessment.md
@@ -0,0 +1,295 @@
+---
+sidebar_position: 4
+description: Software Development Security Assessment
+---
+
+# ev-assessment
+
+## Software Development at Obol
+
+When hardening a project's technical security, team members' operational security and the security of the software development practices in use by the team are some of the most critical areas to secure. Many hacks and compromises in the space to date have been the result of these attack vectors rather than exploits of the software itself.
+
+With this in mind, in January 2023 the Obol team retained Ethereal Ventures security researcher Alex Wade to interview key stakeholders and produce a report on the team's Software Development Lifecycle.
+
+The page below is the result of that report. Some sensitive information has been redacted, and responses to the recommendations have been added, detailing the actions the Obol team has taken to mitigate the issues highlighted.
+
+## Obol Report
+
+**Prepared by: Alex Wade (Ethereal Ventures)** **Date: Jan 2023**
+
+Over the past month, I worked with Obol to review their software development practices in preparation for their upcoming security audits. My goals were to review and analyze:
+
+* Software development processes
+* Vulnerability disclosure and escalation procedures
+* Key personnel risk
+
+The information in this report was collected through a series of interviews with Obol’s project leads.
+
+### Contents:
+
+* Background Info
+* Analysis - Cluster Setup and DKG
+ * Key Risks
+ * Potential Attack Scenarios
+* Recommendations
+ * R1: Users should deploy cluster contracts through a known on-chain entry point
+ * R2: Users should deposit to the beacon chain through a pool contract
+ * R3: Raise the barrier to entry to push an update to the Launchpad
+* Additional Notes
+ * Vulnerability Disclosure
+ * Key Personnel Risk
+
+### Background Info
+
+**Each team lead was asked to describe Obol in terms of its goals, objectives, and key features.**
+
+#### What is Obol?
+
+Obol builds DVT (Distributed Validator Technology) for Ethereum.
+
+#### What is Obol’s goal?
+
+Obol’s goal is to solve a classic distributed systems problem: uptime.
+
+Rather than requiring Ethereum validators to stake on their own, Obol allows groups of operators to stake together. Using Obol, a single validator can be run cooperatively by multiple people across multiple machines.
+
+In theory, this architecture provides validators with some redundancy against common issues: server and power outages, client failures, and more.
+
+#### What are Obol’s objectives?
+
+Obol’s business objective is to provide base-layer infrastructure to support a distributed validator ecosystem. As Obol provides base layer technology, other companies and projects will build on top of Obol.
+
+Obol’s business model is to eventually capture a portion of the revenue generated by validators that use Obol infrastructure.
+
+#### What is Obol’s product?
+
+Obol’s product consists of three main components, each run by its own team: a webapp, a client, and smart contracts.
+
+* [DV Launchpad](../dvl/intro.md): A webapp to create and manage distributed validators.
+* [Charon](../charon/intro.md): A middleware client that enables operators to run distributed validators.
+* [Solidity](../sc/introducing-obol-splits.md): Withdrawal and fee recipient contracts for use with distributed validators.
+
+### Analysis - Cluster Setup and DKG
+
+The Launchpad guides users through the process of creating a cluster, which defines important parameters like the validator’s fee recipient and withdrawal addresses, as well as the identities of the operators in the cluster. In order to ensure their cluster configuration is correct, users need to rely on a few different factors.
+
+**First, users need to trust the Charon client** to perform the DKG correctly, and validate things like:
+
+* Config file is well-formed and is using the expected version
+* Signatures and ENRs from other operators are valid
+* Cluster config hash is correct
+* DKG succeeds in producing valid signatures
+* Deposit data is well-formed and is correctly generated from the cluster config and DKG.
+
+However, Charon’s validation is limited to the digital: signature checks, cluster file syntax, etc. It does NOT help would-be operators determine whether the other operators listed in their cluster definition are the real people with whom they intend to start a DVT cluster. So -
+
+**Second, users need to come to social consensus with fellow operators.** While the cluster is being set up, it’s important that each operator is an active participant. Each member of the group must validate and confirm that:
+
+* the cluster file correctly reflects their address and node identity, and reflects the information they received from fellow operators
+* the cluster parameters are expected – namely, the number of validators and signing threshold
+
+**Finally, users need to perform independent validation.** Each user should perform their own validation of the cluster definition:
+
+* Is my information correct? (address and ENR)
+* Does the information I received from the group match the cluster definition?
+* Is the ETH2 deposit data correct, and does it match the information in the cluster definition?
+* Are the withdrawal and fee recipient addresses correct?
+
+These final steps are potentially the most difficult, and may require significant technical knowledge.
+
+### Key Risks
+
+#### 1. Validation of Contract Deployment and Deposit Data Relies Heavily on Launchpad
+
+From my interviews, it seems that the user deploys both the withdrawal and fee recipient contracts through the Launchpad.
+
+What I’m picturing is that during the first parts of the cluster setup process, the user is prompted to sign one or more transactions deploying the withdrawal and fee recipient contracts to mainnet. The Launchpad apparently uses an npm package to deploy these contracts: `0xsplits/splits-sdk`, which I assume provides either JSON artifacts or a factory address on chain. The Launchpad then places the deployed contracts into the cluster config file, and the process moves on.
+
+If an attacker has published a malicious update to the Launchpad (or compromised an underlying dependency), the contracts deployed by the Launchpad may be malicious. The questions I’d like to pose are:
+
+* How does the group creator know the Launchpad deployed the correct contracts?
+* How does the rest of the group know the creator deployed the contracts through the Launchpad?
+
+My understanding is that this ultimately comes down to the independent verification that each of the group’s members performs during and after the cluster’s setup phase.
+
+At its worst, this verification might consist solely of the cluster creator confirming to the others that, yes, those addresses match the contracts I deployed through the Launchpad.
+
+A more sophisticated user might verify that not only do the addresses match, but the deployed source code looks roughly correct. However, this step is far out of the realm of many would-be validators. To be really certain that the source code is correct would require auditor-level knowledge.
+
+The risk is that:
+
+* the deployed contracts are NOT the correctly-configured 0xsplits waterfall/fee splitter contracts
+* most users are ill-equipped to make this determination themselves
+* we don’t want to trust the Launchpad as the single source of truth
+
+In the worst case, the cluster may end up depositing with malicious withdrawal or fee recipient credentials. If unnoticed, this may net an attacker the entire withdrawal amount, once the cluster exits.
+
+Note that the same (or similar) risks apply to validation of deposit data, which has the potential to be similarly difficult. I’m a little fuzzy on which part of the Obol stack actually generates the deposit data / deposit transaction, so I can’t speak to this as much. However, I think the mitigation for both of these is roughly the same - read on!
+
+**Mitigation:**
+
+It’s certainly a good idea to make it harder to deploy malicious updates to the Launchpad, but this may not be entirely possible. A higher-yield strategy may be to educate and empower users to perform independent validation of the DVT setup process - without relying on information fed to them by Charon and the Launchpad.
+
+I’ve outlined some ideas for this in #R1 and #R2.
+
+#### 2. Social Consensus, aka “Who sends the 32 ETH?”
+
+Depositing to the beacon chain requires a total of 32 ETH. Obol’s product allows multiple operators to act as a single validator together, which means would-be operators need to agree on how to fund the 32 ETH needed to initiate the deposit.
+
+It is my understanding that currently, this process comes down to trust and loose social consensus. Essentially, the group needs to decide who chips in what amount together, and then trust someone to take the 32 ETH and complete the deposit process correctly (without running away with the money).
+
+Granted, the initial launch of Obol will be open only to a small group of people as the kinks in the system get worked out - but in preparation for an eventual public release, the deposit process needs to be much simpler and far less reliant on trust.
+
+Mitigation: See #R2.
+
+**Potential Attack Scenarios**
+
+During the interview process, I learned that each of Obol’s core components has its own GitHub repo, and that each repo has roughly the same structure in terms of organization and security policies. For each repository:
+
+* There are two overall github organization administrators, and a number of people have administrative control over individual repositories.
+* In order to merge PRs, the submitter needs:
+ * CI/CD checks to pass
+ * Review from one person (anyone at Obol)
+
+Of course, admin access also means the ability to change these settings - so repo admins could theoretically merge PRs without needing checks to pass and without review/approval, while organization admins can control the full GitHub organization.
+
+The following scenarios describe the impact an attack may have.
+
+**1. Publishing a malicious version of the Launchpad, or compromising an underlying dependency**
+
+* Reward: High
+* Difficulty: Medium-Low
+
+As described in Key Risks, publishing a malicious version of the Launchpad has the potential to net the largest payout for an attacker. By tampering with the cluster’s deposit data or withdrawal/fee recipient contracts, an attacker stands to gain 32 ETH or more per compromised cluster.
+
+During the interviews, I learned that merging PRs to main in the Launchpad repo triggers an action that publishes to the site. Given that merges can be performed by an authorized Obol developer, this makes the developers prime targets for social engineering attacks.
+
+Additionally, the use of the `0xsplits/splits-sdk` NPM package to aid in contract deployment may represent a supply chain attack vector. It may be that this applies to other Launchpad dependencies as well.
+
+In any case, with a fairly large surface area and high potential reward, this scenario represents a credible risk to users during the cluster setup and DKG process.
+
+See #R1, #R2, and #R3 for some ideas to address this scenario.
+
+**2. Publishing a malicious version of Charon to new operators**
+
+* Reward: Medium
+* Difficulty: High
+
+During the cluster setup process, Charon is responsible both for validating the cluster configuration produced by the Launchpad, as well as performing a DKG ceremony between a group’s operators.
+
+If new operators use a malicious version of Charon to perform this process, it may be possible to tamper with both of these responsibilities, or even get access to part or all of the underlying validator private key created during DKG.
+
+However, the difficulty of this type of attack seems quite high. An attacker would first need to carry out the same type of social engineering attack described in scenario 1 to publish and tag a new version of Charon. Crucially, users would also need to install the malicious version - unlike the Launchpad, an update here is not pushed directly to users.
+
+As long as Obol is clear and consistent with communication around releases and versioning, it seems unlikely that a user would both install a brand-new, unannounced release, and finish the cluster setup process before being warned about the attack.
+
+**3. Publishing a malicious version of Charon to existing validators**
+
+* Reward: Low
+* Difficulty: High
+
+Once a distributed validator is up and running, much of the danger has passed. As a middleware client, Charon sits between a validator’s consensus client and validator client. As such, it shouldn’t have direct access to a validator’s withdrawal keys or signing keys.
+
+If existing validators update to a malicious version of Charon, the worst thing an attacker could theoretically do is likely to get the validator slashed. However, assuming charon has no access to any private keys, even this would be predicated on one or more validator clients connected to charon also failing to prevent the signing of a slashable message. In practice, a compromised charon client is more likely to pose liveness risks than safety risks.
+
+This is not likely to be particularly motivating to potential attackers - and paired with the high difficulty described above, this scenario seems unlikely to cause significant issues.
+
+### Recommendations
+
+#### R1: Users should deploy cluster contracts through a known on-chain entry point
+
+During setup, users should only sign one transaction via the Launchpad - to a contract located at an Obol-held ENS (e.g. `launchpad.obol.eth`). This contract should deploy everything needed for the cluster to operate, like the withdrawal and fee recipient contracts. It should also initialize them with the provided reward split configuration (and any other config needed).
+
+Compared to using an NPM library to supply a factory address or JSON artifacts, this approach has the benefit of being both:
+
+* **Harder to compromise:** as long as the user knows `launchpad.obol.eth`, it’s pretty difficult to trick them into deploying the wrong contracts.
+* **Easier to validate** for non-technical users: the Obol contract can be queried for deployment information via etherscan. For example:
+
+
+
+Note that in order for this to be successful, Obol needs to provide detailed steps for users to perform manual validation of their cluster setups. Users should be able to treat this as a “checklist”:
+
+* Did I send a transaction to `launchpad.obol.eth`?
+* Can I use the ENS name to locate and query the deployment manager contract on etherscan?
+* If I input my address, does etherscan report the configuration I was expecting?
+ * withdrawal address matches
+ * fee recipient address matches
+ * reward split configuration matches
+
+As long as these steps are plastered all over the place (i.e. not just on the Launchpad) and Obol puts in effort to educate users about the process, this approach should allow users to validate cluster configurations themselves - regardless of Launchpad or NPM package compromise.
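+
+To make the idea concrete, here is a rough sketch of what such an entry-point contract could record (illustrative only - the names, fields, and deployment wiring are assumptions for the example, not Obol’s design):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+/// Illustrative entry point (e.g. resolvable via launchpad.obol.eth) that deploys a
+/// cluster's contracts and records the configuration so users can verify it later.
+contract ClusterDeploymentManager {
+    struct ClusterConfig {
+        address withdrawalRecipient;
+        address feeRecipient;
+        bytes32 rewardSplitHash; // keccak256 of operators + split percentages
+    }
+
+    // Queryable on etherscan: "if I input my address, do I see the config I expect?"
+    mapping(address => ClusterConfig) public clusterOf;
+
+    event ClusterDeployed(address indexed creator, address withdrawalRecipient, address feeRecipient);
+
+    function deployCluster(address[] calldata operators, uint32[] calldata splitPercentages)
+        external
+        returns (ClusterConfig memory cfg)
+    {
+        // A real implementation would deploy the withdrawal/fee recipient contracts here
+        // (e.g. via the OWR and 0xSplits factories); this sketch records placeholders only.
+        cfg = ClusterConfig({
+            withdrawalRecipient: address(0),
+            feeRecipient: address(0),
+            rewardSplitHash: keccak256(abi.encode(operators, splitPercentages))
+        });
+        clusterOf[msg.sender] = cfg;
+        emit ClusterDeployed(msg.sender, cfg.withdrawalRecipient, cfg.feeRecipient);
+    }
+}
+```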
+
+**Obol’s response:**
+
+Roadmapped: add the ability for the OWR factory to claim and transfer its reverse resolution ownership.
+
+#### R2: Users should deposit to the beacon chain through a pool contract
+
+Once cluster setup and DKG is complete, a group of operators should deposit to the beacon chain by way of a pool contract. The pool contract should:
+
+* Accept ETH from any of the group’s operators
+* Stop accepting ETH when the contract’s balance hits (32 ETH \* number of validators)
+* Make it easy to pull the trigger and deposit to the beacon chain once the critical balance has been reached
+* Offer all of the group’s operators a “bail” option at any point before the deposit is triggered
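+
+As an illustration only, a minimal single-validator sketch of such a pool contract follows (the contract and function names are assumptions for the example; only the canonical beacon deposit interface is taken as given):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+// Canonical beacon chain deposit contract interface.
+interface IDepositContract {
+    function deposit(
+        bytes calldata pubkey,
+        bytes calldata withdrawal_credentials,
+        bytes calldata signature,
+        bytes32 deposit_data_root
+    ) external payable;
+}
+
+/// Illustrative deposit pool for a single 32 ETH validator run by a cluster's operators.
+contract ClusterDepositPool {
+    uint256 public constant TARGET_BALANCE = 32 ether;
+
+    IDepositContract public immutable depositContract;
+    mapping(address => bool) public isOperator;
+    mapping(address => uint256) public contributed;
+    bool public deposited;
+
+    constructor(IDepositContract _depositContract, address[] memory operators) {
+        depositContract = _depositContract;
+        for (uint256 i = 0; i < operators.length; i++) {
+            isOperator[operators[i]] = true;
+        }
+    }
+
+    /// Accept ETH from the group's operators, but never beyond 32 ETH in total.
+    receive() external payable {
+        require(isOperator[msg.sender], "not an operator");
+        require(!deposited && address(this).balance <= TARGET_BALANCE, "pool full");
+        contributed[msg.sender] += msg.value;
+    }
+
+    /// "Bail" option: any operator can reclaim their contribution before the deposit is triggered.
+    function bail() external {
+        require(!deposited, "already deposited");
+        uint256 amount = contributed[msg.sender];
+        contributed[msg.sender] = 0;
+        (bool ok,) = msg.sender.call{value: amount}("");
+        require(ok, "refund failed");
+    }
+
+    /// Once the target balance is reached, any operator can trigger the deposit,
+    /// which can only ever go to the known beacon deposit contract.
+    function triggerDeposit(
+        bytes calldata pubkey,
+        bytes calldata withdrawalCredentials,
+        bytes calldata signature,
+        bytes32 depositDataRoot
+    ) external {
+        require(isOperator[msg.sender], "not an operator");
+        require(address(this).balance == TARGET_BALANCE, "target not reached");
+        deposited = true;
+        depositContract.deposit{value: TARGET_BALANCE}(pubkey, withdrawalCredentials, signature, depositDataRoot);
+    }
+}
+```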
+
+Ideally, this contract is deployed during the setup process described in #R1, as another step toward allowing users to perform independent validation of the process.
+
+Rather than relying on social consensus, this should:
+
+* Allow operators to fund the validator without needing to trust any single party
+* Make it harder to mess up the deposit or send funds to some malicious actor, as the pool contract should know what the beacon deposit contract address is
+
+**Obol’s response:**
+
+Roadmapped: give the operators a streamlined, secure way to deposit Ether (ETH) to the beacon chain collectively, satisfying specific conditions:
+
+* Pooling from multiple operators.
+* Ceasing to accept ETH once a critical balance is reached, defined by 32 ETH multiplied by the number of validators.
+* Facilitating an immediate deposit to the beacon chain once the target balance is reached.
+* Providing a 'bail-out' option for operators to withdraw their contribution before initiating the group's deposit to the beacon chain.
+
+#### R3: Raise the barrier to entry to push an update to the Launchpad
+
+Currently, any repo admin can publish an update to the Launchpad unchecked.
+
+Given the risks and scenarios outlined above, consider amending this process so that the compromise of a single admin alone is not sufficient to publish to the Launchpad site. It may be worthwhile to require both admins to approve publishing to the site.
+
+Along with simply adding additional prerequisites to publish an update to the Launchpad, ensure that both admins have enabled some level of multi-factor authentication on their GitHub accounts.
+
+**Obol’s response:**
+
+We removed individuals’ ability to merge changes without review, enforced MFA and signed commits, and employed the Bulldozer bot to make sure a PR gets merged automatically when all checks pass.
+
+### Additional Notes
+
+#### Vulnerability Disclosure
+
+During the interviews, I got some conflicting information when asking about Obol’s vulnerability disclosure process.
+
+Some interviewees directed me towards Obol’s security repo, which details security contacts: [ObolNetwork/obol-security](https://github.com/ObolNetwork/obol-security), while others answered that disclosure should happen primarily through Immunefi. While these may both be part of the correct answer, it seems that Obol’s disclosure process may not be as well-defined as it could be. Here are some notes:
+
+* I wasn’t able to find information about Obol on Immunefi. I also didn’t find any reference to a security contact or disclosure policy in Obol’s docs.
+* When looking into the obol-security repo, I noticed broken links in a few of the sections in README.md and SECURITY.md:
+ * Security policy
+ * More Information
+* Some of the text and links in the Bug Bounty Program don’t seem to apply to Obol (see text referring to Vaults and Strategies).
+* The Receiving Disclosures section does not include a public key with which submitters can encrypt vulnerability information.
+
+It’s my understanding that these items are probably lower priority due to Obol’s initial closed launch - but these should be squared away soon!
+
+**Obol’s response:**
+
+We addressed all of the concerns in the obol-security repository:
+
+1. The security policy link has been fixed
+2. The Bug Bounty program received an overhaul and clearly states rewards, eligibility, and scope
+3. We list two GPG public keys with which we accept encrypted vulnerability reports.
+
+We are actively working towards integrating Immunefi in our security pipeline.
+
+#### Key Personnel Risk
+
+A final section on the specifics of key personnel risk faced by Obol has been redacted from the original report. Particular areas of control highlighted were GitHub org ownership and domain name control.
+
+**Obol’s response:**
+
+These risks have been mitigated by adding an extra admin to the GitHub org, and by setting up a second DNS stack in case the primary one fails, along with general OpSec improvements.
diff --git a/docs/versioned_docs/version-v0.19.1/sec/overview.md b/docs/versioned_docs/version-v0.19.1/sec/overview.md
new file mode 100644
index 0000000000..1cc6f1fe36
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sec/overview.md
@@ -0,0 +1,33 @@
+---
+sidebar_position: 1
+description: Security Overview
+---
+
+# Overview
+
+This page serves as an overview of the Obol Network from a security point of view.
+
+This page is updated quarterly. The last update was on 2023-10-01.
+
+## Table of Contents
+
+1. [List of Security Audits and Assessments](overview.md#list-of-security-audits-and-assessments)
+2. [Security Focused Documents](overview.md#security-focused-documents)
+3. [Bug Bounty Details](bug-bounty.md)
+
+## List of Security Audits and Assessments
+
+The completed audit reports are linked [here](https://github.com/ObolNetwork/obol-security/tree/main/audits).
+
+* A review of Obol Labs’ [development processes](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/sec/ev-assessment/README.md) by Ethereal Ventures
+* A [security assessment](https://github.com/ObolNetwork/obol-security/blob/f9d7b0ad0bb8897f74ccb34cd4bd83012ad1d2b5/audits/Sigma_Prime_Obol_Network_Charon_Security_Assessment_Report_v2_1.pdf) of Charon by [Sigma Prime](https://sigmaprime.io/).
+* A [solidity audit](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/sec/smart_contract_audit/README.md) of the Obol Splits contracts by [Zach Obront](https://zachobront.com/).
+* A second audit of Charon is planned for Q4 2023.
+
+## Security Focused Documents
+
+* A [threat model](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/sec/threat_model/README.md) for a DV middleware client like charon.
+
+## Bug Bounty
+
+Information related to disclosing bugs and vulnerabilities to Obol can be found on [the next page](bug-bounty.md).
diff --git a/docs/versioned_docs/version-v0.19.1/sec/smart_contract_audit.md b/docs/versioned_docs/version-v0.19.1/sec/smart_contract_audit.md
new file mode 100644
index 0000000000..5f079f2997
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sec/smart_contract_audit.md
@@ -0,0 +1,477 @@
+---
+sidebar_position: 5
+description: Smart Contract Audit
+---
+
+# Smart Contract Audit
+
+| | |
+| ------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+|  | Obol Audit Report
Obol Manager Contracts
Prepared by: Zach Obront, Independent Security Researcher
Date: Sept 18 to 22, 2023
|
+
+## About **Obol**
+
+The Obol Network is an ecosystem for trust minimized staking that enables people to create, test, run & co-ordinate distributed validators.
+
+The Obol Manager contracts are responsible for distributing validator rewards and withdrawals among the validator and node operators involved in a distributed validator.
+
+## About **zachobront**
+
+Zach Obront is an independent smart contract security researcher. He serves as a Lead Senior Watson at Sherlock, a Security Researcher at Spearbit, and has identified multiple critical severity bugs in the wild, including in a Top 5 Protocol on Immunefi. You can say hi on Twitter at [@zachobront](http://twitter.com/zachobront).
+
+## Summary & Scope
+
+The [ObolNetwork/obol-manager-contracts](https://github.com/ObolNetwork/obol-manager-contracts/) repository was audited at commit [50ce277919723c80b96f6353fa8d1f8facda6e0e](https://github.com/ObolNetwork/obol-manager-contracts/tree/50ce277919723c80b96f6353fa8d1f8facda6e0e).
+
+The following contracts were in scope:
+
+* src/controllers/ImmutableSplitController.sol
+* src/controllers/ImmutableSplitControllerFactory.sol
+* src/lido/LidoSplit.sol
+* src/lido/LidoSplitFactory.sol
+* src/owr/OptimisticWithdrawalReceiver.sol
+* src/owr/OptimisticWithdrawalReceiverFactory.sol
+
+After completion of the fixes, the [2f4f059bfd145f5f05d794948c918d65d222c3a9](https://github.com/ObolNetwork/obol-manager-contracts/tree/2f4f059bfd145f5f05d794948c918d65d222c3a9) commit was reviewed. After this review, the updated Lido fee share system in [PR #96](https://github.com/ObolNetwork/obol-manager-contracts/pull/96/files) (at commit [fd244a05f964617707b0a40ebb11b523bbd683b8](https://github.com/ObolNetwork/obol-splits/pull/96/commits/fd244a05f964617707b0a40ebb11b523bbd683b8)) was reviewed.
+
+## Summary of Findings
+
+| Identifier | Title | Severity | Fixed |
+| :-----------------------------------------------------------------------------------------------------------------------: | -------------------------------------------------------------------------------------- | :-----------: | :---: |
+| [M-01](smart_contract_audit.md#m-01-future-fees-may-be-skirted-by-setting-a-non-eth-reward-token) | Future fees may be skirted by setting a non-ETH reward token | Medium | ✓ |
+| [M-02](smart_contract_audit.md#m-02-splits-with-256-or-more-node-operators-will-not-be-able-to-switch-on-fees) | Splits with 256 or more node operators will not be able to switch on fees | Medium | ✓ |
+| [M-03](smart_contract_audit.md#m-03-in-a-mass-slashing-event-node-operators-are-incentivized-to-get-slashed) | In a mass slashing event, node operators are incentivized to get slashed | Medium | |
+| [L-01](smart_contract_audit.md#l-01-obol-fees-will-be-applied-retroactively-to-all-non-distributed-funds-in-the-splitter) | Obol fees will be applied retroactively to all non-distributed funds in the Splitter | Low | ✓ |
+| [L-02](smart_contract_audit.md#l-02-if-owr-is-used-with-rebase-tokens-and-theres-a-negative-rebase-principal-can-be-lost) | If OWR is used with rebase tokens and there's a negative rebase, principal can be lost | Low | ✓ |
+| [L-03](smart_contract_audit.md#l-03-lidosplit-can-receive-eth-which-will-be-locked-in-contract) | LidoSplit can receive ETH, which will be locked in contract | Low | ✓ |
+| [L-04](smart_contract_audit.md#l-04-upgrade-to-latest-version-of-solady-to-fix-libclone-bug) | Upgrade to latest version of Solady to fix LibClone bug | Low | ✓ |
+| [G-01](smart_contract_audit.md#g-01-steth-and-wsteth-addresses-can-be-saved-on-implementation-to-save-gas) | stETH and wstETH addresses can be saved on implementation to save gas | Gas | ✓ |
+| [G-02](smart_contract_audit.md#g-02-owr-can-be-simplified-and-save-gas-by-not-tracking-distributedfunds) | OWR can be simplified and save gas by not tracking distributedFunds | Gas | ✓ |
+| [I-01](smart_contract_audit.md#i-01-strong-trust-assumptions-between-validators-and-node-operators) | Strong trust assumptions between validators and node operators | Informational | |
+| [I-02](smart_contract_audit.md#i-02-provide-node-operator-checklist-to-validate-setup) | Provide node operator checklist to validate setup | Informational | |
+
+## Detailed Findings
+
+### \[M-01] Future fees may be skirted by setting a non-ETH reward token
+
+Fees are planned to be implemented on the `rewardRecipient` splitter by updating to a new fee structure using the `ImmutableSplitController`.
+
+It is assumed that all rewards will flow through the splitter, because (a) all distributed rewards less than 16 ETH are sent to the `rewardRecipient`, and (b) even if a team waited for rewards to be greater than 16 ETH, rewards sent to the `principalRecipient` are capped at the `amountOfPrincipalStake`.
+
+This creates a fairly strong guarantee that reward funds will flow to the `rewardRecipient`. Even if a user were to set their `amountOfPrincipalStake` high enough that the `principalRecipient` could receive unlimited funds, the Obol team could call `distributeFunds()` when the balance got near 16 ETH to ensure fees were paid.
+
+However, if the user selects a non-ETH token, all ETH will be withdrawable only through the `recoverFunds()` function. If they set up a split with their node operators as their `recoveryAddress`, all funds will be withdrawable via `recoverFunds()` without ever touching the `rewardRecipient` or paying a fee.
+
+#### Recommendation
+
+I would recommend removing the ability to use a non-ETH token from the `OptimisticWithdrawalRecipient`. Alternatively, if it feels like it may be a use case that is needed, it may make sense to always include ETH as a valid token, in addition to any `OWRToken` set.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[M-02] Splits with 256 or more node operators will not be able to switch on fees
+
+0xSplits is used to distribute rewards across node operators. All Splits are deployed with an ImmutableSplitController, which is given permissions to update the split one time to add a fee for Obol at a future date.
+
+The Factory deploys these controllers as Clones with Immutable Args, hard coding the `owner`, `accounts`, `percentAllocations`, and `distributorFee` for the future update. This data is packed as follows:
+
+```solidity
+ function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+ ) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
+ uint256[] memory recipients = new uint[](recipientsSize);
+
+ uint256 i = 0;
+ for (; i < recipientsSize;) {
+ recipients[i] = (uint256(percentAllocations[i]) << ADDRESS_BITS) | uint256(uint160(accounts[i]));
+
+ unchecked {
+ i++;
+ }
+ }
+
+ data = abi.encodePacked(splitMain, distributorFee, owner, uint8(recipientsSize), recipients);
+ }
+```
+
+In the process, `recipientsSize` is unsafely downcast into a `uint8`, which has a maximum value of `255`. As a result, any value of 256 or greater will overflow and result in a lower value of `recipients.length % 256` being passed as `recipientsSize`.
+
+When the Controller is deployed, the full list of `percentAllocations` is passed to the `validSplit` check, which will pass as expected. However, later, when `updateSplit()` is called, the `getNewSplitConfiguration()` function will only return the first `recipientsSize` accounts, ignoring the rest.
+
+```solidity
+ function getNewSplitConfiguration()
+ public
+ pure
+ returns (address[] memory accounts, uint32[] memory percentAllocations)
+ {
+ // fetch the size first
+ // then parse the data gradually
+ uint256 size = _recipientsSize();
+ accounts = new address[](size);
+ percentAllocations = new uint32[](size);
+
+ uint256 i = 0;
+ for (; i < size;) {
+ uint256 recipient = _getRecipient(i);
+ accounts[i] = address(uint160(recipient));
+ percentAllocations[i] = uint32(recipient >> ADDRESS_BITS);
+ unchecked {
+ i++;
+ }
+ }
+ }
+```
+
+When `updateSplit()` is eventually called on `splitsMain` to turn on fees, the `validSplit()` check on that contract will revert because the percent allocations will no longer sum to `1e6`, and the update will not be possible.
+
+#### Proof of Concept
+
+The following test can be dropped into a file in `src/test` to demonstrate that passing 400 accounts will result in a `recipientSize` of `400 - 256 = 144`:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+import { Test } from "forge-std/Test.sol";
+import { console } from "forge-std/console.sol";
+import { ImmutableSplitControllerFactory } from "src/controllers/ImmutableSplitControllerFactory.sol";
+import { ImmutableSplitController } from "src/controllers/ImmutableSplitController.sol";
+
+interface ISplitsMain {
+ function createSplit(address[] calldata accounts, uint32[] calldata percentAllocations, uint32 distributorFee, address controller) external returns (address);
+}
+
+contract ZachTest is Test {
+ function testZach_RecipientSizeCappedAt256Accounts() public {
+ vm.createSelectFork("https://mainnet.infura.io/v3/fb419f740b7e401bad5bec77d0d285a5");
+
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](400);
+ uint32[] memory bigPercentAllocations = new uint32[](400);
+
+ for (uint i = 0; i < 400; i++) {
+ bigAccounts[i] = address(uint160(i));
+ bigPercentAllocations[i] = 2500;
+ }
+
+ // confirmation that 0xSplits will allow creating a split with this many accounts
+ // dummy acct passed as controller, but doesn't matter for these purposes
+ address split = ISplitsMain(0x2ed6c4B5dA6378c7897AC67Ba9e43102Feb694EE).createSplit(bigAccounts, bigPercentAllocations, 0, address(8888));
+
+ ImmutableSplitController controller = factory.createController(split, owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+
+ // added a public function to controller to read recipient size directly
+ uint savedRecipientSize = controller.ZachTest__recipientSize();
+ assert(savedRecipientSize < 400);
+ console.log(savedRecipientSize); // 144
+ }
+}
+```
+
+#### Recommendation
+
+When packing the data in `_packSplitControllerData()`, check `recipientsSize` before downcasting to a uint8:
+
+```diff
+function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
++ if (recipientsSize > 255) revert InvalidSplit__TooManyAccounts(recipientsSize);
+ ...
+}
+```
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[M-03] In a mass slashing event, node operators are incentivized to get slashed
+
+When the `OptimisticWithdrawalRecipient` receives funds from the beacon chain, it uses the following rule to determine the allocation:
+
+> If the amount of funds to be distributed is greater than or equal to 16 ether, it is assumed that it is a withdrawal (to be returned to the principal, with a cap on principal withdrawals of the total amount they deposited).
+
+> Otherwise, it is assumed that the funds are rewards.
+
+This value being as low as 16 ether protects against any predictable attack the node operator could perform. For example, due to the effect of hysteresis in updating effective balances, it does not seem to be possible for node operators to predictably bleed a withdrawal down to be below 16 ether (even if they timed a slashing perfectly).
+
+However, in the event of a mass slashing event, slashing punishments can be much more severe than they otherwise would be. To calculate the size of a slash, we:
+
+* take the total percentage of validator stake slashed in the 18 days preceding and following a user's slash
+* multiply this percentage by 3 (capped at 100%)
+* the full slashing penalty for a given validator equals 1/32 of their stake, plus the resulting percentage above applied to the remaining 31/32 of their stake
+
+In order for such penalties to bring the withdrawal balance below 16 ether (assuming a full 32 ether to start), we would need the percentage taken to be greater than `15 / 31 = 48.3%`, which implies that `48.3 / 3 = 16.1%` of validators would need to be slashed.
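+
+Restating the arithmetic above, with `p` as the fraction of total stake slashed in the surrounding 36-day window and a full 32 ETH starting balance:
+
+```latex
+\underbrace{1}_{\text{initial penalty}} \;+\; \underbrace{\min(3p,\,1)\cdot 31}_{\text{correlation penalty}} \;>\; 32 - 16
+\quad\Longrightarrow\quad 3p > \tfrac{15}{31}
+\quad\Longrightarrow\quad p > \tfrac{5}{31} \approx 16.1\%
+```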
+
+Because the measurement is taken from the 18 days before and after the incident, node operators would have the opportunity to see a mass slashing event unfold, and later decide that they would like to be slashed along with it.
+
+In the event that they observed that greater than 16.1% of validators were slashed, Obol node operators would be able to get themselves slashed, be exited with a withdrawal of less than 16 ether, and claim that withdrawal as rewards, effectively stealing from the principal recipient.
+
+#### Recommendations
+
+Find a solution that provides a higher level of guarantee that the funds withdrawn are actually rewards, and not a withdrawal.
+
+#### Review
+
+Acknowledged. We believe this is a black swan event. It would require a major ETH client to be compromised, and would be a betrayal of trust, so likely not EV+ for doxxed operators. Users of this contract with unknown operators should be wary of such a risk.
+
+### \[L-01] Obol fees will be applied retroactively to all non-distributed funds in the Splitter
+
+When Obol decides to turn on fees, a call will be made to `ImmutableSplitController::updateSplit()`, which will take the predefined split parameters (the original user specified split with Obol's fees added in) and call `updateSplit()` to implement the change.
+
+```solidity
+function updateSplit() external payable {
+ if (msg.sender != owner()) revert Unauthorized();
+
+ (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+ ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```
+
+If we look at the code on `SplitsMain`, we can see that this `updateSplit()` function is applied retroactively to all funds that are already in the split, because it updates the parameters without performing a distribution first:
+
+```solidity
+function updateSplit(
+ address split,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+)
+ external
+ override
+ onlySplitController(split)
+ validSplit(accounts, percentAllocations, distributorFee)
+{
+ _updateSplit(split, accounts, percentAllocations, distributorFee);
+}
+```
+
+This means that any funds that have been sent to the split but have not yet been distributed will be subject to the Obol fee. Since these splitters will be accumulating all execution layer fees, it is possible that some of them may have received large MEV bribes, where this after-the-fact fee could be quite expensive.
+
+#### Recommendation
+
+The most strict solution would be for the `ImmutableSplitController` to store both the old split parameters and the new parameters. The old parameters could first be used to call `distributeETH()` on the split, and then `updateSplit()` could be called with the new parameters.
+
+If storing both sets of values seems too complex, the alternative would be to require that `split.balance <= 1` to update the split. Then the Obol team could simply store the old parameters off chain to call `distributeETH()` on each split to "unlock" it to update the fees.
+
+(Note that for the second solution, the ETH balance should be less than or equal to 1, not 0, because 0xSplits stores empty balances as `1` for gas savings.)
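+
+For the second option, the guard could be as small as the following sketch (the error name is made up for illustration, and `split` is assumed to resolve to the split's address as in the function above):
+
+```solidity
+function updateSplit() external payable {
+    if (msg.sender != owner()) revert Unauthorized();
+
+    // Added: require the split to have been distributed first
+    // (0xSplits stores an "empty" ETH balance as 1 wei for gas savings).
+    if (split.balance > 1) revert Invalid_SplitBalance();
+
+    (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+    ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```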
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[L-02] If OWR is used with rebase tokens and there's a negative rebase, principal can be lost
+
+The `OptimisticWithdrawalRecipient` is deployed with a specific token immutably set on the clone. It is presumed that that token will usually be ETH, but it can also be an ERC20 to account for future integrations with tokenized versions of ETH.
+
+In the event that one of these integrations used a rebasing version of ETH (like `stETH`), the architecture would need to be set up as follows:
+
+`OptimisticWithdrawalRecipient => rewards to something like LidoSplit.sol => Split Wallet`
+
+In this case, the OWR would need to be able to handle rebasing tokens.
+
+In the event that rebasing tokens are used, there is the risk that slashing or inactivity leads to a period with a negative rebase. In this case, the following chain of events could happen:
+
+* `distribute(PULL)` is called, setting `fundsPendingWithdrawal == balance`
+* rebasing causes the balance to decrease slightly
+* `distribute(PULL)` is called again, so when `fundsToBeDistributed = balance - fundsPendingWithdrawal` is calculated in an unchecked block, it ends up being near `type(uint256).max`
+* since this is more than `16 ether`, the first `amountOfPrincipalStake - _claimedPrincipalFunds` will be allocated to the principal recipient, and the rest to the reward recipient
+* we check that `endingDistributedFunds <= type(uint128).max`, but unfortunately this check misses the issue, because only `fundsToBeDistributed` underflows, not `endingDistributedFunds`
+* `_claimedPrincipalFunds` is set to `amountOfPrincipalStake`, so all future claims will go to the reward recipient
+* the `pullBalances` for both recipients will be set higher than the balance of the contract, and so will be unusable
+
+In this situation, the only way for the principal to get their funds back would be for the full `amountOfPrincipalStake` to hit the contract at once, and for them to call `withdraw()` before anyone called `distribute(PUSH)`. If anyone was to be able to call `distribute(PUSH)` before them, all principal would be sent to the reward recipient instead.
+
+#### Recommendation
+
+Similar to #74, I would recommend removing the ability for the `OptimisticWithdrawalRecipient` to accept non-ETH tokens.
+
+Otherwise, I would recommend two changes for redundant safety:
+
+1. Do not allow the OWR to be used with rebasing tokens.
+2. Move the `_fundsToBeDistributed = _endingDistributedFunds - _startingDistributedFunds;` out of the unchecked block. The case where `_endingDistributedFunds` underflows is already handled by a later check, so this one change should be sufficient to prevent any risk of this issue.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[L-03] LidoSplit can receive ETH, which will be locked in contract
+
+Each new `LidoSplit` is deployed as a clone, which comes with a `receive()` function for receiving ETH.
+
+However, the only function on `LidoSplit` is `distribute()`, which converts `stETH` to `wstETH` and transfers it to the `splitWallet`.
+
+While this contract should only be used for Lido to pay out rewards (which will come in `stETH`), it seems possible that users may accidentally use the same contract to receive other validator rewards (in ETH), or that Lido governance may introduce ETH payments in the future, which would cause the funds to be locked.
+
+#### Proof of Concept
+
+The following test can be dropped into `LidoSplit.t.sol` to confirm that the clones can currently receive ETH:
+
+```solidity
+function testZach_CanReceiveEth() public {
+ uint before = address(lidoSplit).balance;
+ payable(address(lidoSplit)).transfer(1 ether);
+ assertEq(address(lidoSplit).balance, before + 1 ether);
+}
+```
+
+#### Recommendation
+
+Introduce an additional function to `LidoSplit.sol` which wraps ETH into stETH before calling `distribute()`, in order to rescue any ETH accidentally sent to the contract.
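+
+For illustration, such a function added to `LidoSplit.sol` might look roughly like the following (accessor and function names are assumptions, and this is not the fix that was ultimately shipped):
+
+```solidity
+// Lido's stETH token accepts plain ETH via submit() and mints stETH in return.
+interface IStETH {
+    function submit(address _referral) external payable returns (uint256);
+}
+
+/// Stake any stray ETH with Lido, then run the normal stETH -> wstETH distribution.
+function wrapETHAndDistribute() external {
+    uint256 ethBalance = address(this).balance;
+    if (ethBalance > 0) {
+        // stETH() is assumed to be the clone's accessor for the stETH token address.
+        IStETH(address(stETH())).submit{value: ethBalance}(address(0));
+    }
+    distribute();
+}
+```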
+
+#### Review
+
+Fixed in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87/files) by adding a `rescueFunds()` function that can send ETH or any ERC20 (except `stETH` or `wstETH`) to the `splitWallet`.
+
+### \[L-04] Upgrade to latest version of Solady to fix LibClone bug
+
+In the recent [Solady audit](https://github.com/Vectorized/solady/blob/main/audits/cantina-solady-report.pdf), an issue was found that affects LibClone.
+
+In short, LibClone assumes that the length of the immutable arguments on the clone will fit in 2 bytes. If it's larger, it overlaps other op codes and can lead to strange behaviors, including causing the deployment to fail or causing the deployment to succeed with no resulting bytecode.
+
+Because the `ImmutableSplitControllerFactory` allows the user to input arrays of any length that will be encoded as immutable arguments on the Clone, we can manipulate the length to accomplish these goals.
+
+Fortunately, failed deployments or empty bytecode (which causes a revert when `init()` is called) are not problems in this case, as the transactions will fail, and it can only happen with unrealistically long arrays that would only be used by malicious users.
+
+However, it is difficult to be sure how else this risk might be exploited by using the overflow to jump to later op codes, and it is recommended to update to a newer version of Solady where the issue has been resolved.
+
+#### Proof of Concept
+
+If we comment out the `init()` call in the `createController()` call, we can see that the following test "successfully" deploys the controller, but the result is that there is no bytecode:
+
+```solidity
+function testZach__CreateControllerSoladyBug() public {
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](28672);
+ uint32[] memory bigPercentAllocations = new uint32[](28672);
+
+ for (uint i = 0; i < 28672; i++) {
+ bigAccounts[i] = address(uint160(i));
+ if (i < 32) bigPercentAllocations[i] = 820;
+ else bigPercentAllocations[i] = 34;
+ }
+
+ ImmutableSplitController controller = factory.createController(address(8888), owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+ assert(address(controller) != address(0));
+ assert(address(controller).code.length == 0);
+}
+```
+
+#### Recommendation
+
+Delete Solady and clone it from the most recent commit, or any commit after the fixes from [PR #548](https://github.com/Vectorized/solady/pull/548/files#diff-27a3ba4730de4b778ecba4697ab7dfb9b4f30f9e3666d1e5665b194fe6c9ae45) were merged.
+
+#### Review
+
+Solady has been updated to v.0.0.123 in [PR 88](https://github.com/ObolNetwork/obol-manager-contracts/pull/88).
+
+### \[G-01] stETH and wstETH addresses can be saved on implementation to save gas
+
+The `LidoSplitFactory` contract holds two immutable values for the addresses of the `stETH` and `wstETH` tokens.
+
+When new clones are deployed, these values are encoded as immutable args. This adds the values to the contract code of the clone, so that each time a call is made, they are passed as calldata along to the implementation, which reads the values from the calldata for use.
+
+Since these values will be consistent across all clones on the same chain, it would be more gas efficient to store them in the implementation directly, which can be done with `immutable` storage values, set in the constructor.
+
+This would save 40 bytes of calldata on each call to the clone, which leads to a savings of approximately 640 gas on each call.
+
+#### Recommendation
+
+1. Add the following to `LidoSplit.sol`:
+
+```solidity
+address immutable public stETH;
+address immutable public wstETH;
+```
+
+2. Add a constructor to `LidoSplit.sol` which sets these immutable values. Solidity treats immutable values as constants and stores them directly in the contract bytecode, so they will be accessible from the clones (see the sketch after this list).
+3. Remove `stETH` and `wstETH` from `LidoSplitFactory.sol` as storage values, constructor arguments, and arguments to `clone()`.
+4. Adjust the `distribute()` function in `LidoSplit.sol` to read the storage values for these two addresses, and remove the helper functions to read the clone's immutable arguments for these two values.
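+
+Steps 1 and 2 together amount to something like the following sketch:
+
+```solidity
+address immutable public stETH;
+address immutable public wstETH;
+
+constructor(address _stETH, address _wstETH) {
+    stETH = _stETH;
+    wstETH = _wstETH;
+}
+```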
+
+#### Review
+
+Fixed as recommended in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87).
+
+### \[G-02] OWR can be simplified and save gas by not tracking distributedFunds
+
+Currently, the `OptimisticWithdrawalRecipient` contract tracks four variables:
+
+* distributedFunds: total amount of the token distributed via push or pull
+* fundsPendingWithdrawal: total balance distributed via pull that haven't been claimed yet
+* claimedPrincipalFunds: total amount of funds claimed by the principal recipient
+* pullBalances: individual pull balances that haven't been claimed yet
+
+When `_distributeFunds()` is called, we perform the following math (simplified to only include relevant updates):
+
+```solidity
+endingDistributedFunds = distributedFunds - fundsPendingWithdrawal + currentBalance;
+fundsToBeDistributed = endingDistributedFunds - distributedFunds;
+distributedFunds = endingDistributedFunds;
+```
+
+As we can see, `distributedFunds` is added to the `endingDistributedFunds` variable and then removed when calculating `fundsToBeDistributed`, having no impact on the resulting `fundsToBeDistributed` value.
+
+The `distributedFunds` variable is not read or used anywhere else on the contract.
+
+#### Recommendation
+
+We can simplify the math and save substantial gas (a storage write plus additional operations) by not tracking this value at all.
+
+This would allow us to calculate `fundsToBeDistributed` directly, as follows:
+
+```solidity
+fundsToBeDistributed = currentBalance - fundsPendingWithdrawal;
+```
+
+#### Review
+
+Fixed as recommended in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85).
+
+### \[I-01] Strong trust assumptions between validators and node operators
+
+It is assumed that validators and node operators will always act in the best interest of the group, rather than in their selfish best interest.
+
+It is important to make clear to users that there are strong trust assumptions between the various parties involved in the DVT.
+
+Here are a select few examples of attacks that a malicious set of node operators could perform:
+
+1. Since there is currently no mechanism for withdrawals besides the consensus of the node operators, a minority of them sufficient to withhold consensus could blackmail the principal for a payment of up to 16 ether in order to allow them to withdraw. Otherwise, they could turn off their nodes and force the principal to bleed down to a final withdrawn balance of just over 16 ether.
+2. Node operators are all able to propose blocks within the P2P network, which are then propagated out to the rest of the network. Node software is accustomed to signing for blocks built by block builders based on metadata including the quantity of fees and the address they'll be sent to. This is enforced by social consensus, with block builders not wanting to harm validators in order to have their blocks accepted in the future. However, node operators in a DVT are not concerned with the social consensus of the network, and could therefore build blocks that include large MEV payments to their personal address (instead of the DVT's 0xSplit), add fictitious metadata to the block header, have their fellow node operators accept the block, and take the MEV for themselves.
+3. While the withdrawal address is immutably set on the beacon chain to the OWR, the fee address is added by the nodes to each block. Any majority of node operators sufficient to reach consensus could create a new 0xSplit with only themselves on it, and use that for all execution layer fees. The principal (and other node operators) would not be able to stop them or withdraw their principal, and would be stuck with staked funds paying fees to the malicious node operators.
+
+Note that there are likely many other possible attacks that malicious node operators could perform. This report is intended to demonstrate some examples of the trust level that is needed between validators and node operators, and to emphasize the importance of making these assumptions clear to users.
+
+#### Review
+
+Acknowledged. We believe EIP 7002 will reduce this trust assumption as it would enable the validator exit via the execution layer withdrawal key.
+
+### \[I-02] Provide node operator checklist to validate setup
+
+There are a number of ways that the user setting up the DVT could plant backdoors to harm the other users involved in the DVT.
+
+Each of these risks is possible to check before signing off on the setup, but some are rather hidden, so it would be useful for the protocol to provide a list of checks that node operators should do before signing off on the setup parameters (or, even better, provide these checks for them through the front end).
+
+1. Confirm that `SplitsMain.getHash(split)` matches the hash of the parameters that the user is expecting to be used.
+2. Confirm that the controller clone delegates to the correct implementation. If not, it could be pointed to delegate to `SplitMain` and then called to `transferControl()` to a user's own address, allowing them to update the split arbitrarily.
+3. `OptimisticWithdrawalRecipient.getTranches()` should be called to check that `amountOfPrincipalStake` is equal to the amount that they will actually be providing.
+4. The controller's `owner` and future split including Obol fees should be provided to the user. They should be able to check that `ImmutableSplitControllerFactory.predictSplitControllerAddress()`, with those parameters inputted, results in the controller that is actually listed on `SplitsMain.getController(split)`.
+
+#### Review
+
+Acknowledged. We do some of these already (will add the remainder) automatically in the launchpad UI during the cluster confirmation phase by the node operator. We will also add it in markdown to the repo.
diff --git a/docs/versioned_docs/version-v0.19.1/sec/threat_model.md b/docs/versioned_docs/version-v0.19.1/sec/threat_model.md
new file mode 100644
index 0000000000..db9b4bff15
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/sec/threat_model.md
@@ -0,0 +1,155 @@
+---
+sidebar_position: 6
+description: Threat model for a Distributed Validator
+---
+
+# Charon threat model
+
+This page outlines a threat model for Charon, in the context of it being a Distributed Validator middleware for Ethereum validator clients.
+
+## Actors
+
+- Node owner (NO)
+- Cluster node operators (CNO)
+- Rogue node operator (RNO)
+- Outside attacker (OA)
+
+## General observations
+
+This page describes some considerations the Obol core team made about the security of a distributed validator in the context of its deployment and interaction with outside actors.
+
+The goal of this threat model is to provide transparency, but it is by no means a comprehensive audit or complete security reference. It’s a sharing of the experiences and thoughts we gained during the last few years building distributed validator technologies.
+
+While the Beacon Chain sees a distributed validator in much the same way as a regular validator, and it thus retains some of the same security considerations, Charon’s threat model is different from a validator client’s threat model because of its general design.
+
+While a validator client owns and operates on a set of validator private keys, the design of Charon allows its node operators to rarely (if ever) see the complete validator private keys, relying instead on modern cryptography to generate partial private key shares.
+
+An Ethereum distributed validator employs advanced signature primitives such that no operator ever handles the full validator private key in any standard lifecycle step: the [BLS digital signature scheme](https://en.wikipedia.org/wiki/BLS_digital_signature) employed by the Ethereum network allows distributed validators to individually sign a blob of data and then aggregate the resulting signatures in a transparent manner, never requiring any of the participating parties to know the full private key to do so.
+
+If the number of available Charon nodes falls below a given threshold, the cluster is not able to continue with its duties.
+
+Given the collaborative nature of a Distributed Validator cluster, every operator must prioritize the liveness and well-being of the cluster. At the time of writing, Charon cannot reward or penalize operators within a cluster independently.
+
+This implies that Charon’s threat model can’t quite be equated to that of a single validator client, since they work on a different - albeit similar - set of security concepts.
+
+## Identity private key
+
+A distributed validator cluster is made up of a number of nodes, often run by a number of independent operators. Each DV cluster has a set of Ethereum validator private keys on whose behalf it validates.
+
+Alongside those, each node (henceforth ‘operator’) holds a secp256k1 identity private key, from which its ENR (Ethereum Node Record) is derived, identifying their node to the other cluster operators’ nodes.
+
+Exfiltration of said private key could allow an outside attacker to impersonate the node, possibly leading to intra-cluster peering issues, eclipse attack risks, and degraded validator performance.
+
+Charon client communication is handled via BFT consensus, which is able to tolerate a given number of misbehaving nodes up to a certain threshold: once this threshold is reached, the cluster is not able to continue with its lifecycle and loses liveness guarantees (the validator goes offline). If more than two-thirds of nodes in a cluster are malicious, a cluster also loses safety guarantees (enough bad actors could collude to come to consensus on something slashable).
+
+Identity private key theft and the subsequent execution of a rogue cluster node is equivalent, in the context of BFT consensus, to a misbehaving node; the cluster can therefore survive and continue with its duties up to the fault tolerance specified by the cluster’s BFT protocol parameters.
+
+The likelihood of this happening is low: an OA with enough knowledge of the topology of the operator’s network must steal `fault tolerance of the cluster + 1` identity private keys and run Charon nodes to subvert the distributed validator BFT consensus to push the validator offline.
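+
+As a rough rule of thumb, standard BFT bounds apply (charon’s exact quorum parameters depend on cluster size):
+
+```latex
+n \geq 3f + 1, \qquad f_{\max} = \left\lfloor \tfrac{n-1}{3} \right\rfloor, \qquad \text{quorum} = n - f_{\max}
+% e.g. a 4-node cluster tolerates f_max = 1 faulty node, so an attacker needs
+% f_max + 1 = 2 identity keys to push the validator offline.
+```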
+
+## Ethereum validator private key access
+
+A distributed validator cluster executes Ethereum validator duties by acting as a middleman between the beacon chain and a validator client.
+
+To do so, the cluster must have knowledge of the Ethereum validator’s private key.
+
+The design and implementation of Charon minimizes the chances of this by splitting the Ethereum validator private keys into parts, which are then assigned to each node operator.
+A [distributed key generation](https://en.wikipedia.org/wiki/Distributed_key_generation) (DKG) process is used in order to evenly and safely create the private key shares without any central party having access to the full private key.
+
+The cryptographic primitives employed in Charon allow a threshold of the node operators’ private key shares to be reconstructed into the whole validator private key if needed.
+
+While the facilities to do this are present in the form of CLI commands, as stated before, Charon never reconstructs the key during normal operation, since the BLS digital signature scheme allows for signature aggregation.
+
+A distributed validator cluster can be started in two ways:
+
+1. An existing Ethereum validator private key is split by the private key holder, and distributed in a trusted manner among the operators.
+2. The operators participate in a distributed key generation (DKG) process, to create private key shares that collectively can be used to sign validation duties as an Ethereum distributed validator. The full private key for the cluster never exists in one place during or after the DKG.
+
+In case 1, one of the node operators, K, has direct access to the Ethereum validator key and is tasked with the generation of the other operators’ identity keys and key shares.
+
+It is clear that in this case the entirety of the sensitive material set is as secure as K’s environment; if K is compromised or malicious, the distributed validator could be slashed.
+
+Case 2 is different, because there’s no pre-existing Ethereum validator key in a single operator's hands: it will be generated using the FROST DKG algorithm.
+
+Assuming a successful DKG process, each operator will only ever handle its own key shares instead of the full Ethereum validator private key.
+
+A set of rogue operators composed of enough members to reconstruct the original Ethereum private keys might pose the risk of slashing for a distributed validator by colluding to produce slashable messages together.
+
+We deem this scenario’s likelihood as low, as it would mean that node operators decided to willfully slash the stake that they are being rewarded for staking.
+
+Still, in the context of an outside attack, purposefully slashing a validator would mean stealing multiple operator key shares, which in turn means violating many cluster operators’ security at almost the same time. This scenario may occur if there is a 0-day vulnerability in a piece of software they all run, or in the case of node misconfiguration.
+
+## Rogue node operator
+
+Nodes are connected by means of either relay nodes, or directly to one another.
+
+Each node operator is at risk of being impeded by other nodes or by the relay operator in the execution of their duties.
+
+Nodes need to expose a set of TCP ports to be able to work, and the mere fact of doing that opens up the opportunity for rogue parties to execute DDoS attacks.
+
+Another attack surface for the cluster exists in rogue nodes purposefully filling the various inter-state databases with meaningless data, or more generally submitting bogus information to the other parties to slow down the processing or, in the case of a sybil attack, bring the cluster to a halt.
+
+The likelihood of this scenario is medium, because no active intrusion is required: a rogue node operator does not need to penetrate and compromise other nodes to disturb the cluster’s lifecycle.
+
+## Outside attackers interfering with a cluster
+
+There are two levels of sophistication in an OA:
+
+1. No knowledge of the topology of the cluster: The attacker doesn’t know where each cluster node is located and so can’t force fault tolerance +1 nodes offline if it can’t find them.
+2. Knowledge of the topology of the network (or part of it) is possessed: the OA can mount DDoS attacks or try breaking into nodes’ servers - at that point, the “rogue node operator” scenario applies.
+
+The likelihood of this scenario is low: an OA needs extensive capabilities and sufficient incentive to be able to carry out an attack of this size.
+
+An outside attacker could also find and use vulnerabilities in the underlying cryptosystems and cryptography libraries used by Charon and other Ethereum clients. Forging signatures that fool Charon’s cryptographic library or other dependencies may be feasible, but forging signatures or otherwise finding a vulnerability in either the SECP256K1+ECDSA or BLS12-381+BLS cryptosystems is something we deem a low-likelihood risk.
+
+## Malicious beacon nodes
+
+A malicious beacon node (BN) could prevent the distributed validator from operating its validation duties, and could plausibly increase the likelihood of slashing by serving charon illegitimate information.
+
+If the number of nodes configured with the malicious BN is equal to the Byzantine threshold of the Charon BFT consensus protocol, the validation process can halt; worse, if most of the nodes are Byzantine, the system will reach consensus on a set of data that isn’t valid.
+
+We deem the likelihood of this scenario to be medium, depending on the trust model associated with the BN deployment (cloud, self-hosted, SaaS product): node operators should always host or at least trust their own beacon nodes.
+
+## Malicious charon relays
+
+A Charon relay is used as a communication bridge between nodes that aren’t directly exposed on the Internet. It also acts as the peer discovery mechanism for a cluster.
+
+Once a peer’s IP address has been discovered via the relay, a direct connection can be attempted. Nodes can either communicate by exchanging data through a relay, or by using the relay as a means to establish a direct TCP connection to one another.
+
+A malicious relay owned by an OA could lead to:
+
+- Network topology discovery, facilitating the “outside attackers interfering with a cluster” scenario
+- Impeding node communication, potentially impacting the BFT consensus protocol liveness (not security) and distributed validator duties
+- DKG process disruption leading to frustration and potential abandonment by node operators: could lead to the usage of a standard Ethereum validator setup, which implies weaker security overall
+
+We note that BFT consensus liveness disruption can only happen if the number of nodes using the malicious relay for communication is equal to the number of Byzantine nodes defined in the consensus parameters.
+
+This risk can be mitigated by configuring nodes with multiple relay URLs from only [trusted entities](../advanced/self-relay.md).
+
+The likelihood of this scenario is medium: Charon nodes are configured with a default set of relay nodes, so if an OA were to compromise those, it would lead to many cluster topologies getting discovered and potentially attacked and disrupted.
+
+## Compromised runtime files
+
+Charon operates with two runtime files:
+
+- A lock file used to address operators’ nodes, and to define the Ethereum validator public keys and the public key shares associated with them
+- A cluster definition file used to define the operator’s addresses and identities during the DKG process
+
+The lock file is signed and validated by all the nodes participating in the cluster: assuming good security practices on the node operator side, and no bugs in Charon or its dependencies’ implementations, this scenario is unlikely.
+
+If one or more node operators are using less-than-ideal security practices, an OA could modify the Charon CLI flags to include the `--no-verify` flag, which disables lock file signature and hash verification (usually intended only for development purposes).
+
+By doing that, the OA can edit the lock file as it sees fit, leading to the “rogue node operator” scenario. An OA or RNO might also manage to socially engineer other operators into running their malicious lock file with verification disabled.
+
+The likelihood of this scenario is low: an OA would need to compromise every node operator through social engineering to both use a different set of files and run the cluster with `--no-verify`.
+
+## Conclusions
+
+Distributed Validator Technology (DVT) helps maintain a high-assurance environment for Ethereum validators by leveraging modern cryptography to ensure no single point of failure is easily found in the system.
+
+As with any computing system, security considerations are to be expected in order to keep the environment safe.
+
+From the point of view of an Ethereum validator entity, running their services with a DV client can help greatly with availability, minimizing slashing risks, and maximizing participation in the network.
+
+On the other hand, one must take into consideration the risks involved with dishonest cluster operators, as well as rogue third-party beacon nodes or relay providers.
+
+In the end, we believe the benefits of DVT greatly outweigh the potential threats described in this overview.
diff --git a/docs/versioned_docs/version-v0.19.1/start/README.md b/docs/versioned_docs/version-v0.19.1/start/README.md
new file mode 100644
index 0000000000..9952b96485
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/start/README.md
@@ -0,0 +1,2 @@
+# start
+
diff --git a/docs/versioned_docs/version-v0.19.1/start/activate-dv.md b/docs/versioned_docs/version-v0.19.1/start/activate-dv.md
new file mode 100644
index 0000000000..6c2bffd366
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/start/activate-dv.md
@@ -0,0 +1,41 @@
+---
+sidebar_position: 5
+description: Activate the Distributed Validator using the deposit contract
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Activate a DV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+If you have successfully created a distributed validator and you are ready to activate it, congratulations! 🎉
+
+Once you have connected all of your charon clients together and synced all of your Ethereum nodes such that the monitoring indicates they are all healthy and ready to operate, **ONE operator** may proceed to deposit and activate the validator(s).
+
+The `deposit-data.json` to be used to deposit will be located in each operator's `.charon` folder. The copies across every node should be identical and any of them can be uploaded.
+
+:::warning
+If you are being given a `deposit-data.json` file that you didn't generate yourself, please take extreme care to ensure this operator has not given you a malicious `deposit-data.json` file that is not the one you expect. Cross reference the files from multiple operators if there is any doubt. Activating the wrong validator or an invalid deposit could result in complete theft or loss of funds.
+:::
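+
+One simple way to cross-reference copies of the deposit data is to compare checksums out-of-band with the other operators. A minimal sketch, assuming the file sits in your `.charon` folder and `sha256sum` is available (on macOS, use `shasum -a 256` instead):
+
+```bash
+# Each operator runs this locally and shares the resulting hash with the others.
+# All hashes should be identical before anyone deposits.
+sha256sum .charon/deposit-data.json
+```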
+
+Use any of the following tools to deposit. Please use the third-party tools at your own risk and always double check the staking contract address.
+
+* Obol Distributed Validator Launchpad
+* ethereum.org Staking Launchpad
+* From a SAFE Multisig:
+(Repeat these steps for every validator to deposit in your cluster)
+  * From the SAFE UI, click on `New Transaction` then `Transaction Builder` to create a new custom transaction
+  * Enter the beacon chain deposit contract address for mainnet - you can find it here
+  * Fill in the transaction information:
+    * Set the amount to `32` ETH
+    * Use your `deposit-data.json` to fill in the required data: `pubkey`, `withdrawal_credentials`, `signature`, and `deposit_data_root`. Make sure to prefix each input with `0x` to format it as bytes (see the jq sketch below)
+  * Click on `Add transaction`
+  * Click on `Create Batch`
+  * Click on `Send Batch`; you can click on `Simulate` to check whether the transaction will execute successfully
+  * Get the minimum threshold of signatures from the other addresses and execute the custom transaction
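+
+To avoid copy-paste mistakes when filling the Transaction Builder fields, the values can be printed with their `0x` prefix directly from the deposit data. A hedged sketch, assuming `jq` is installed and the file lives at `.charon/deposit-data.json` (it prints the first validator's entry; adjust the index for additional validators):
+
+```bash
+# Print the 0x-prefixed fields needed by the SAFE Transaction Builder
+jq -r '.[0] | "pubkey: 0x\(.pubkey)",
+              "withdrawal_credentials: 0x\(.withdrawal_credentials)",
+              "signature: 0x\(.signature)",
+              "deposit_data_root: 0x\(.deposit_data_root)"' .charon/deposit-data.json
+```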
+
+The activation process can take a minimum of 16 hours, with the maximum time to activation being dictated by the length of the activation queue, which can be weeks.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.1/start/quickstart-exit.md b/docs/versioned_docs/version-v0.19.1/start/quickstart-exit.md
new file mode 100644
index 0000000000..d815b2e191
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/start/quickstart-exit.md
@@ -0,0 +1,261 @@
+---
+sidebar_position: 7
+description: Exit a validator
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Exit a DV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+Users looking to exit staking entirely and withdraw their full balance back must also sign and broadcast a "voluntary exit" message with validator keys which will start the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas. In the case of a DV, each charon node needs to broadcast a partial exit to the other nodes of the cluster. Once a threshold of partial exits has been received by any node, the full voluntary exit will be sent to the beacon chain.
+
+This process will take 27 hours or longer depending on the current length of the exit queue.
+
+:::info
+
+- A threshold of operators needs to run the exit command for the exit to succeed.
+- If a charon client restarts after the exit command is run but before the threshold is reached, it will lose the partial exits it has received from the other nodes. If all charon clients restart and thus all partial exits are lost before the required threshold of exit messages are received, operators will have to rebroadcast their partial exit messages.
+ :::
+
+## Run the `voluntary-exit` command on your validator client
+
+Run the appropriate command on your validator client to broadcast an exit message from your validator client to its upstream charon client.
+
+It needs to be the validator client that is connected to your charon client taking part in the DV, as you are only signing a partial exit message, with a partial private key share, which your charon client will combine with the other partial exit messages from the other operators.
+
+:::info
+
+- All operators need to use the same `EXIT_EPOCH` for the exit to be successful. Assuming you want to exit as soon as possible, the default epoch of `162304` included in the commands below should be sufficient. You can sanity-check the chain's current epoch with the sketch after this note.
+- Partial exits can be broadcast by any validator client, as long as the total reaches the threshold for the cluster.
+  :::
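+
+If you want to confirm where the chain currently is before settling on an exit epoch, one rough way (not part of the exit flow itself) is to query a beacon node you control. A minimal sketch, assuming a beacon API reachable on `localhost:5052` and `jq` installed (adjust the endpoint to your setup):
+
+```bash
+# Fetch the head slot and convert it to an epoch (32 slots per epoch)
+SLOT=$(curl -s http://localhost:5052/eth/v1/beacon/headers/head | jq -r '.data.header.message.slot')
+echo "Current epoch: $((SLOT / 32))"
+```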
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=162304`}
+
+
+
+
+  The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+  For each file in the `/home/user/data/wd/secrets` directory, it:
+
+  - Extracts the filename without the extension (the file name is the validator public key).
+  - Appends `--validator=<filename>` to the `command` variable.
+
+  It then executes `nimbus_beacon_node` with the following arguments:
+
+  - `deposits exit`: Exits validators.
+  - `$command`: The generated string of `--validator` flags from the loop.
+  - `--epoch=162304`: The epoch at which to submit the voluntary exit.
+  - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+  - `--data-dir=/home/user/data/wd/`: Specifies the keystore path, which contains all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=162304 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+  The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+  - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+  - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+  - `--exitEpoch=162304`: The epoch at which to submit the voluntary exit.
+  - `--network=goerli`: Specifies the network.
+  - `--yes`: Skips the confirmation prompt.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=162304 --network=goerli --yes'`}
+
+
+
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=256`}
+
+
+
+
+  The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+  For each file in the `/home/user/data/wd/secrets` directory, it:
+
+  - Extracts the filename without the extension (the file name is the validator public key).
+  - Appends `--validator=<filename>` to the `command` variable.
+
+  It then executes `nimbus_beacon_node` with the following arguments:
+
+  - `deposits exit`: Exits validators.
+  - `$command`: The generated string of `--validator` flags from the loop.
+  - `--epoch=256`: The epoch at which to submit the voluntary exit.
+  - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+  - `--data-dir=/home/user/data/wd/`: Specifies the keystore path, which contains all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=256 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+  The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+  - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+  - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+  - `--exitEpoch=256`: The epoch at which to submit the voluntary exit.
+  - `--network=holesky`: Specifies the network.
+  - `--yes`: Skips the confirmation prompt.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=256 --network=holesky --yes'`}
+
+
+
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=194048`}
+
+
+
+
+  The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+  For each file in the `/home/user/data/wd/secrets` directory, it:
+
+  - Extracts the filename without the extension (the file name is the validator public key).
+  - Appends `--validator=<filename>` to the `command` variable.
+
+  It then executes `nimbus_beacon_node` with the following arguments:
+
+  - `deposits exit`: Exits validators.
+  - `$command`: The generated string of `--validator` flags from the loop.
+  - `--epoch=194048`: The epoch at which to submit the voluntary exit.
+  - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+  - `--data-dir=/home/user/data/wd/`: Specifies the keystore path, which contains all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=194048 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+  The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+  - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+  - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+  - `--exitEpoch=194048`: The epoch at which to submit the voluntary exit.
+  - `--network=mainnet`: Specifies the network.
+  - `--yes`: Skips the confirmation prompt.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=194048 --network=mainnet --yes'`}
+
+
+
+
+
+
+
+Once a threshold of exit signatures has been received by any single charon client, it will craft a valid voluntary exit message and will submit it to the beacon chain for inclusion. You can monitor partial exits stored by each node in the [Grafana Dashboard](https://github.com/ObolNetwork/charon-distributed-validator-node).
+
+## Exit epoch and withdrawable epoch
+
+The process of a validator exiting from staking takes variable amounts of time, depending on how many others are exiting at the same time.
+
+Immediately upon broadcasting a signed voluntary exit message, the exit epoch and withdrawable epoch values are calculated based on the current epoch number. These values determine exactly when the validator will no longer be required to be online performing validation, and when the validator is eligible for a full withdrawal, respectively.
+
+1. Exit epoch - epoch at which your validator is no longer active, no longer earning rewards, and is no longer subject to slashing rules.
+ :::warning
+ Up until this epoch (while "in the queue") your validator is expected to be online and is held to the same slashing rules as always. Do not turn your DV node off until this epoch is reached.
+ :::
+2. Withdrawable epoch - epoch at which your validator funds are eligible for a full withdrawal during the next validator sweep.
+   This occurs 256 epochs after the exit epoch, which takes ~27.3 hours (see the quick calculation below).
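+
+The ~27.3 hour figure follows directly from the chain's timing parameters; a quick back-of-the-envelope check:
+
+```bash
+# 256 epochs x 32 slots per epoch x 12 seconds per slot
+echo $(( 256 * 32 * 12 ))   # 98304 seconds, i.e. roughly 27.3 hours
+```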
+
+## How to verify a validator exit
+
+Consult the examples below and compare them to your validator's monitoring to verify that exits from each operator in the cluster are being received. The example below is a cluster of 4 nodes running 2 validators, where a threshold of 3 nodes broadcasting exits is needed.
+
+1. Operator 1 broadcasts an exit on validator client 1.
+ 
+ 
+2. Operator 2 broadcasts an exit on validator client 2.
+ 
+ 
+3. Operator 3 broadcasts an exit on validator client 3.
+ 
+ 
+
+At this point, the threshold of 3 has been reached and the validator exit process will start. The logs will show the following:
+
+
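+You can also check directly on chain that the exit has been registered. A minimal sketch, assuming a beacon node API on `localhost:5052` and `jq` installed; the public key placeholder below is hypothetical and should be replaced with one of your cluster's validator public keys from `cluster-lock.json`:
+
+```bash
+# Replace with one of the cluster's validator public keys
+PUBKEY=0xabc...
+# Expect "active_exiting" once the exit is included, and later "exited_unslashed"
+curl -s "http://localhost:5052/eth/v1/beacon/states/head/validators/${PUBKEY}" | jq -r '.data.status'
+```
+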
+:::tip
+Once a validator has broadcast an exit message, it must continue to validate for at least 27 hours, or longer depending on the exit queue. Do not shut off your distributed validator nodes until your validator is fully exited.
+:::
diff --git a/docs/versioned_docs/version-v0.19.1/start/quickstart_alone.md b/docs/versioned_docs/version-v0.19.1/start/quickstart_alone.md
new file mode 100644
index 0000000000..2bf62005b3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/start/quickstart_alone.md
@@ -0,0 +1,159 @@
+---
+sidebar_position: 3
+description: Create a DV alone
+---
+
+# quickstart\_alone
+
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+## Create a DV alone
+
+:::warning Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+:::info It is possible for a single operator to manage all of the nodes of a DV cluster. The nodes can be run on a single machine, which is only suitable for testing, or the nodes can be run on multiple machines, which is expected for a production setup.
+
+The private key shares can be created centrally and distributed securely to each node. Alternatively, the private key shares can be created in a lower-trust manner with a [Distributed Key Generation](../int/key-concepts.md#distributed-validator-key-generation-ceremony) process, which avoids the validator private key being stored in full anywhere, at any point in its lifecycle. Follow the [group quickstart](quickstart_group.md) instead for this latter case. :::
+
+### Pre-requisites
+
+* A basic [knowledge](https://docs.ethstaker.cc/ethstaker-knowledge-base/) of Ethereum nodes and validators.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+### Step 1: Create the key shares locally
+
+Go to the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/docs/dvl/intro/README.md#dv-launchpad-links) and select `Create a distributed validator alone`. Follow the steps to configure your DV cluster. The Launchpad will give you a docker command to create your cluster.\
+Before you run the command, checkout the [Quickstart Alone](https://github.com/ObolNetwork/charon-distributed-validator-cluster.git) demo repo and `cd` into the directory.
+
+```bash
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+# Change directory
+cd charon-distributed-validator-cluster/
+
+# Run the command provided in the DV Launchpad "Create a cluster alone" flow
+docker run -u $(id -u):$(id -g) --rm -v "$(pwd)/:/opt/charon" obolnetwork/charon:v0.19.1 create cluster --definition-file=...
+```
+
+1. Clone the [Quickstart Alone](https://github.com/ObolNetwork/charon-distributed-validator-cluster) demo repo and `cd` into the directory.
+
+```bash
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+# Change directory
+cd charon-distributed-validator-cluster/
+```
+
+2. Run the cluster creation command, setting required flag values.
+
+Run the below command to create the validator private key shares and cluster artifacts locally, replacing the example values for `nodes`, `network`, `num-validators`, `fee-recipient-addresses`, and `withdrawal-addresses`. Check the [Charon CLI reference](../charon/charon-cli-reference.md#create-a-full-cluster-locally) for additional, optional flags to set.
+
+```bash
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.1 create cluster --nodes=4 --network=holesky --num-validators=1 --name="Quickstart Guide Cluster" --cluster-dir="cluster" --fee-recipient-addresses=0x000000000000000000000000000000000000dead --withdrawal-addresses=0x000000000000000000000000000000000000dead
+```
+
+:::tip If you would like your cluster to appear on the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/dvl/intro/README.md), add the `--publish` flag to the command. :::
+
+
+
+After the `create cluster` command is run, you should have multiple subfolders within the newly created `./cluster/` folder, one for each node created.
+
+**Backup the `./cluster/` folder, then move on to deploying the cluster.**
+
+:::info Make sure your backup is secure and private; someone with access to these files could get your validators slashed. One way to make such a backup is sketched below. :::
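+
+A minimal sketch of one way to make a private backup, assuming `gpg` is installed: create a symmetrically encrypted archive of the cluster folder and store it somewhere offline.
+
+```bash
+# Archive and encrypt the cluster folder; gpg will prompt for a passphrase
+tar -czf - cluster | gpg --symmetric --cipher-algo AES256 -o cluster-backup.tar.gz.gpg
+```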
+
+### Step 2: Deploy and start the nodes
+
+:::warning This part of the guide only runs one Execution Client, one Consensus Client, and 6 Distributed Validator Charon Client + Validator Client pairs on a single docker instance, and **is not suitable for a mainnet deployment**. (If this machine fails, there will not be any fault tolerance - the cluster will also fail.)
+
+For a production deployment with fault tolerance, follow the part of the guide instructing you how to distribute the nodes across multiple machines. :::
+
+Run this command to start your cluster containers if you deployed using the CDVC repo above.
+
+```sh
+# Start the distributed validator cluster
+docker compose up --build -d
+```
+
+Check the monitoring dashboard and see if things look all right
+
+```sh
+# Open Grafana
+open http://localhost:3000/d/laEp8vupp
+```
+
+:::warning To distribute your cluster across multiple machines, each node in the cluster needs one of the folders called `node*/` to be copied to it. Each folder should be copied to a CDVN repo and renamed from `node*` to `.charon`.
+
+Right now, the `charon create cluster` command [used earlier to create the private keys](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/start/quickstart_alone/README.md#step-1-create-the-key-shares-locally) outputs a folder structure like `cluster/node*/`. Make sure to grab the `./node*/` folders, _rename_ them to `.charon` and then move them to one of the single node repos below. Once all nodes are online, synced, and connected, you will be ready to activate your validator. :::
+
+This is necessary for the folder to be found by the default `charon run` command. Optionally, it is possible to override `charon run`'s default file locations by using `charon run --private-key-file="node0/charon-enr-private-key" --lock-file="node0/cluster-lock.json"` for each instance of charon you start (substituting `node0` for each node number in your cluster as needed).
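+
+As a concrete illustration of that copy-and-rename step, here is a hedged sketch for one node, assuming SSH access to the target machine and that the CDVN repo has already been cloned there; the host and user names are placeholders for your own setup:
+
+```bash
+# Copy this node's artifacts to its machine and rename the folder to .charon
+scp -r cluster/node1 operator1@host1:~/charon-distributed-validator-node/
+ssh operator1@host1 'cd ~/charon-distributed-validator-node && mv node1 .charon'
+```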
+
+:point\_right: Use the single node [docker compose](https://github.com/ObolNetwork/charon-distributed-validator-node), the kubernetes [manifests](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node), or the [helm chart](https://github.com/ObolNetwork/helm-charts) example repos to get your nodes up and connected after loading the `.charon` folder artifacts into them appropriately.\
+
+
+```log
+
+cluster
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ └── keystore-0.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ └── keystore-0.txt
+ ├── keystore-N.json
+ └── keystore-N.txt
+
+```
+
+```log
+└── .charon
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── ...
+ ├── keystore-N.json
+ └── keystore-N.txt
+```
+
+:::info Currently, the quickstart repo installs a node on the Holesky testnet. It is possible to choose a different network (another testnet, or mainnet) by overriding the `.env` file.
+
+`.env.sample` is a sample environment file that allows overriding default configuration defined in `docker-compose.yml`. Uncomment and set any variable to override its value.
+
+Set up the desired inputs for the DV, including the network you wish to operate on. Check the [Charon CLI reference](../charon/charon-cli-reference.md) for additional optional flags to set. Once you have set the values you wish to use, make a copy of this file called `.env`.
+
+```bash
+# Copy ".env.sample", renaming it ".env"
+cp .env.sample .env
+```
+
+:::
diff --git a/docs/versioned_docs/version-v0.19.1/start/quickstart_group.md b/docs/versioned_docs/version-v0.19.1/start/quickstart_group.md
new file mode 100644
index 0000000000..04e8c63bc5
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/start/quickstart_group.md
@@ -0,0 +1,269 @@
+---
+sidebar_position: 4
+description: Create a DV with a group
+---
+
+# quickstart\_group
+
+import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem";
+
+## Create a DV with a group
+
+:::warning Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+This quickstart guide will walk you through creating a Distributed Validator Cluster with a number of other node operators.
+
+### Pre-requisites
+
+* A basic [knowledge](https://docs.ethstaker.cc/ethstaker-knowledge-base/) of Ethereum nodes and validators.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+
+
+### Step 1: Generate an ENR
+
+In order to prepare for a distributed key generation ceremony, you need to create an ENR for your charon client. This ENR is a public/private key pair that allows the other charon clients in the DKG to identify and connect to your node. If you are creating a cluster but not taking part as a node operator in it, you can skip this step.
+
+```bash
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node/
+
+# Use docker to create an ENR. Backup the file `.charon/charon-enr-private-key`.
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.1 create enr
+```
+
+You should expect to see a console output like this:
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+:::warning Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony nor start the DV cluster successfully.** :::
+
+:::tip If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update your docker permissions](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.1/int/faq/errors/README.md#docker-permission-denied-error) to allow the command to run successfully. :::
+
+For the next step, select the _Creator_ tab if you are coordinating the creation of the cluster. (This role holds no position of privilege in the cluster, it only sets the initial terms of the cluster that the other operators agree to.) Select the _Operator_ tab if you are accepting an invitation to operate a node in a cluster proposed by the cluster creator.
+
+### Step 2: Create a cluster or accept an invitation to a cluster
+
+### Collect addresses, configure the cluster, share the invitation
+
+Before starting the cluster creation process, you will need to collect an Ethereum address for each operator in the cluster. They will need to be able to sign messages through MetaMask with this address. _(Broader wallet support will be added in future.)_ With these addresses in hand, go through the cluster creation flow.
+
+You will use the Launchpad to create an invitation, and share it with the operators.\
+This video shows the flow within the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/docs/dvl/intro/README.md#dv-launchpad-links):
+
+The following are the steps for creating a cluster.
+
+1. Go to the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/docs/dvl/intro/README.md#dv-launchpad-links).
+2. Connect your wallet.
+3. Select `Create a Cluster with a group` then `Get Started`.
+4. Follow the flow and accept the advisories.
+5. Configure the cluster:
+   * Input the `Cluster Name` & `Cluster Size` (i.e. the number of operators in the cluster). The threshold will update automatically; it shows the number of nodes that need to be functioning for the validator(s) to stay active.
+6. Input the Ethereum addresses for each operator that you collected previously. If you will be taking part as an operator, click the "Use My Address" button for Operator 1.
+7. Configure the validators:
+   * Select the desired number of validators (32 ETH each) the cluster will run. (Note that the mainnet launchpad is restricted to one validator for now.)
+   * If you are taking part in the cluster, enter the ENR you generated in [step one](quickstart_group.md#step-1-generate-an-enr) in the "What is your charon client's ENR?" field.
+   * Enter the `Principal address`, which should receive the principal 32 ETH and the accrued consensus layer rewards when the validator is exited. This can optionally be set to the contract address of a multisig / splitter contract.
+   * Enter the `Fee Recipient address` to which the execution layer rewards will go. This can be the same as the principal address, or it can be a different address. This can optionally be set to the contract address of a multisig / splitter contract.
+8. Click `Create Cluster Configuration`. Review that all the details are correct, and press `Confirm and Sign`. You will be prompted to sign two or three transactions with your MetaMask wallet. These are:
+   * The `config_hash`. This is a hashed representation of the details of this cluster, to ensure everyone is agreeing to an identical setup.
+   * The `operator_config_hash`. This is your acceptance of the terms and conditions of participating as a node operator.
+   * Your `ENR`. Signing your ENR authorises the corresponding private key to act on your behalf in the cluster.
+9. Share your cluster invite link with the operators. Following the link will show you a screen waiting for other operators to accept the configuration you created.
+10. You can use the link to monitor how many of the operators have already signed their approval of the cluster configuration and submitted their ENR.
+
+You will use the CLI to create the cluster definition file, which you will distribute to the operators manually.
+
+1. The leader or creator of the cluster will prepare the `cluster-definition.json` file for the Distributed Key Generation ceremony using the `charon create dkg` command.
+2. Populate the `charon create dkg` command with the appropriate flags, including the `name`, the `num-validators`, the `fee-recipient-addresses`, the `withdrawal-addresses`, and the `operator-enrs` of all the operators participating in the cluster.
+3. Run the `charon create dkg` command that generates DKG cluster-definition.json file. (Note: in the "docker run" command, you may have to change the version from v0.19.0 to the correct version of the repo you are using)
+
+   ```
+   docker run --rm -v "$(pwd):/opt/charon" \
+     obolnetwork/charon:v0.19.0 create dkg \
+     --name="Quickstart" \
+     --num-validators=1 \
+     --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
+     --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
+     --operator-enrs="enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u"
+   ```
+
+ This command should output a file at `.charon/cluster-definition.json` This file needs to be shared with the other operators in a cluster.
+
+   * The `.charon` folder is hidden by default. To view it, run `ls -al .charon` in your terminal. Else, if you are on `macOS`, press `Cmd + Shift + .` to view all hidden files in the Finder application.
+
+### Join the cluster prepared by the creator
+
+Use the Launchpad or CLI to join the cluster configuration generated by the creator: Your cluster creator needs to configure the cluster, and send you an invite URL link to join the cluster on the Launchpad. Once you've received the Launchpad invite link, you can begin the cluster acceptance process.
+
+1. Click on the DV launchpad link provided by the leader or creator. Make sure you recognise the domain and the person sending you the link, to ensure you are not being phished.
+2. Connect your wallet using the Ethereum address provided to the leader. 
+3. Review the operators addresses submitted and click `Get Started` to continue. 
+4. Review and accept the DV Launchpad terms & conditions and advisories.
+5. Review the cluster configuration set by the creator and add your `ENR` that you generated in [step 1](quickstart_group.md#step-1-generate-an-enr).
+6. Sign the two transactions with your wallet, these are:
+ * The config hash. This is a hashed representation of all of the details for this cluster.
+   * Your own `ENR`. This signature authorises the key represented by this ENR to act on your behalf in the cluster.
+7. Wait for all the other operators in your cluster to also finish these steps.
+
+You'll receive the `cluster-definition.json` file created by the leader/creator. You should save it in the `.charon/` folder that was created initially. (Alternatively, you can use the `--definition-file` flag to override the default expected location for this file.)
+
+Once every participating operator is ready, the next step is the distributed key generation amongst the operators.
+
+* If you are not planning on operating a node, and were only configuring the cluster for the operators, your journey ends here. Well done!
+* If you are one of the cluster operators, continue to the next step.
+
+### Step 3: Run the Distributed Key Generation (DKG) ceremony
+
+:::tip For the [DKG](../charon/dkg.md) to complete, all operators need to be running the command simultaneously. It helps if operators can agree on a certain time, or schedule a video call for them to all run the command together. :::
+
+1. Once all operators have successfully signed, your screen will automatically advance to the next step and look like this. Click `Continue`. (If you closed the tab, you can always go back to the invite link shared by the leader and connect your wallet.)
+
+
+
+2. Copy and run the `docker` command on the screen into your terminal. It will retrieve the remote cluster details and begin the DKG process.
+
+ 
+3. Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder of the node. These include:
+ * A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+ * A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+ * A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
+
+Once the creator gives you the `cluster-definition.json` file and you place it in a `.charon` subdirectory, run:
+
+```
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.0 dkg --publish
+```
+
+and the DKG process should begin.
+
+:::warning Please make sure to create a backup of your `.charon/` folder. **If you lose your private keys you won't be able to start the DV cluster successfully and may risk your validator deposit becoming unrecoverable.** Ensure every operator has their `.charon` folder securely and privately backed up before activating any validators. :::
+
+:::info The `cluster-lock` and `deposit-data` files are identical for each operator, if lost, they can be copied from one operator to another. :::
+
+Now that the DKG has been completed, all operators can start their nodes.
+
+### Step 4: Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term.
+
+The quickstart [repository](https://github.com/ObolNetwork/charon-distributed-validator-node) is configured to sync an execution layer client (`Nethermind`) and a consensus layer client (`Lighthouse`). You can also leverage alternative ways to run a node such as Ansible, Helm, or Kubernetes manifests.
+
+:::info Currently, the quickstart [repo](https://github.com/ObolNetwork/charon-distributed-validator-node) configures a node for the Holesky testnet. It is possible to choose a different network (another testnet, or mainnet) by overriding the `.env` file. From within the `charon-distributed-validator-node` directory:
+
+`.env.sample` is a sample environment file that allows overriding default configuration defined in `docker-compose.yml`. Uncomment and set any variable to override its value.
+
+Setup the desired inputs for the DV, including the network you wish to operate on. Check the [Charon CLI reference](../charon/charon-cli-reference.md) for additional optional flags to set.
+
+```bash
+# Copy ".env.sample", renaming it ".env"
+cp .env.sample .env
+```
+
+:::
+
+:::warning If you manually update `docker compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It is best not to do this, as `lighthouse` checkpoint-syncs and syncing does not take much time.
+
+**Note**: If you have a `nethermind` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/nethermind`. This makes everything faster since you start from a synced nethermind node. :::
+
+```bash
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up -d
+
+```
+
+If at any point you need to turn off your node, you can run:
+
+```bash
+# Shut down the currently running distributed validator node
+docker compose down
+```
+
+You should use the grafana dashboard that accompanies the quickstart repo to see whether your cluster is healthy.
+
+```bash
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+In particular you should check:
+
+* That your charon client can connect to the configured beacon client.
+* That your charon client can connect to all peers directly.
+* That your validator client is connected to charon, and has the private keys it needs loaded and accessible.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually \~16 hours after the deposit is made).
+
+Alternatively, use an Ansible playbook to start your node ([see the repo here](https://github.com/ObolNetwork/obol-ansible) for further instructions), use a Helm chart ([see the repo here](https://github.com/ObolNetwork/helm-charts) for further instructions), or use Kubernetes manifests to start your charon client and validator client; these manifests expect an existing beacon node endpoint to connect to ([see the repo here](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node) for further instructions).
+
+**Using a pre-existing beacon node**
+
+:::warning Using a remote beacon node will impact the performance of your Distributed Validator and should be used sparingly. :::
+
+If you already have a beacon node running somewhere and you want to use that instead of running an EL (`nethermind`) & CL (`lighthouse`) as part of the example repo, you can disable these images. To do so, follow these steps:
+
+1. Copy the `docker-compose.override.yml.sample` file
+
+```
+cp -n docker-compose.override.yml.sample docker-compose.override.yml
+```
+
+2. Uncomment the `profiles: [disable]` section for both `nethermind` and `lighthouse`. The override file should now look like this
+
+```
+services:
+ nethermind:
+ # Disable nethermind
+ profiles: [disable]
+ # Bind nethermind internal ports to host ports
+ #ports:
+ #- 8545:8545 # JSON-RPC
+ #- 8551:8551 # AUTH-RPC
+ #- 6060:6060 # Metrics
+
+ lighthouse:
+ # Disable lighthouse
+ profiles: [disable]
+ # Bind lighthouse internal ports to host ports
+ #ports:
+ #- 5052:5052 # HTTP
+ #- 5054:5054 # Metrics
+...
+```
+
+3. Then, uncomment and set the `CHARON_BEACON_NODE_ENDPOINTS` variable in the `.env` file to your beacon node's URL
+
+```
+...
+# Connect to one or more external beacon nodes. Use a comma separated list excluding spaces.
+CHARON_BEACON_NODE_ENDPOINTS=
+...
+```
+
+4. Restart your docker compose
+
+```
+docker compose down
+docker compose up -d
+```
+
+:::tip In a Distributed Validator Cluster, it is important to have a low latency connection to your peers. Charon clients will use the NAT protocol to attempt to establish a direct connection to one another automatically. If this doesn't happen, you should port forward charon's p2p port to the public internet to facilitate direct connections. (The default port to expose is `:3610`). Read more about charon's networking [here](../charon/networking.md). :::
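+
+If you want to confirm the port is reachable after setting up forwarding, one rough check (assuming the default p2p port `3610`, and run from a machine outside your network) is:
+
+```bash
+# Replace the placeholder with your node's public IP address
+nc -zv <your-public-ip> 3610
+```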
+
+If you have gotten to this stage, every node is up, synced and connected, congratulations. You can now move forward to activating your validator to begin staking.
diff --git a/docs/versioned_docs/version-v0.19.1/start/quickstart_overview.md b/docs/versioned_docs/version-v0.19.1/start/quickstart_overview.md
new file mode 100644
index 0000000000..e139b21b1f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/start/quickstart_overview.md
@@ -0,0 +1,19 @@
+---
+sidebar_position: 1
+description: Quickstart Overview
+---
+
+# Quickstart Overview
+
+The quickstart guides are aimed at developers and stakers looking to utilize Distributed Validators for solo or multi-operator staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+There are two ways to set up a distributed validator and each comes with its own quickstart, within the "Getting Started" section:
+1. Run a DV cluster as a [**group**](./quickstart_group.md), where several operators run the nodes that make up the cluster. In this setup, the key shares are created using a distributed key generation process, avoiding the full private keys being stored in any one place.
+This approach can also be used by single operators looking to manage all nodes of a cluster but wanting to create the key shares in a trust-minimised fashion.
+
+2. Run a DV cluster [**alone**](./quickstart_alone.md), where a single operator runs all the nodes of the DV. Depending on trust assumptions, there is not necessarily the need to create the key shares via a DKG process. Instead the key shares can be created in a centralised manner, and distributed securely to the nodes.
+
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.19.1/start/update.md b/docs/versioned_docs/version-v0.19.1/start/update.md
new file mode 100644
index 0000000000..2cb7acae51
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.1/start/update.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 6
+description: Update your DV cluster with the latest Charon release
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Update a DV
+
+It is highly recommended to upgrade your DV stack from time to time. This ensures that your node is secure, performant, and up-to-date, and that you don't miss important hard forks.
+
+To do this, follow these steps:
+
+### Navigate to the node directory
+
+If you are running a single distributed validator node (the `charon-distributed-validator-node` repo):
+
+```
+cd charon-distributed-validator-node
+```
+
+If you are running a full cluster on one machine (the `charon-distributed-validator-cluster` repo):
+
+```
+cd charon-distributed-validator-cluster
+```
+
+
+### Pull latest changes to the repo
+```
+git pull
+```
+
+### Create (or recreate) your DV stack
+```
+docker compose up -d --build
+```
+:::warning
+If you run more than one node in a DV cluster, please take caution when upgrading them simultaneously, particularly if you are updating or changing the validator client used or recreating disks. It is recommended to update nodes sequentially to minimise liveness and safety risks.
+:::
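+
+To confirm the update took effect, you can ask the running container to print its version. A minimal sketch, assuming the compose service is named `charon` as in the example repos:
+
+```bash
+# Print the version of the charon client running in the compose stack
+docker compose exec charon charon version
+```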
+
+### Conflicts
+
+:::info
+You may get a `git conflict` error similar to this:
+:::
+```markdown
+error: Your local changes to the following files would be overwritten by merge:
+prometheus/prometheus.yml
+...
+Please commit your changes or stash them before you merge.
+```
+This is probably because you have made some changes to some of the files, for example to the `prometheus/prometheus.yml` file.
+
+To resolve this error, you can either:
+
+- Stash and reapply changes if you want to keep your custom changes:
+ ```
+ git stash # Stash your local changes
+ git pull # Pull the latest changes
+ git stash apply # Reapply your changes from the stash
+ ```
+ After reapplying your changes, manually resolve any conflicts that may arise between your changes and the pulled changes using a text editor or Git's conflict resolution tools.
+
+- Override changes and recreate configuration if you don't need to preserve your local changes and want to discard them entirely:
+ ```
+ git reset --hard # Discard all local changes and override with the pulled changes
+  docker compose up -d --build # Recreate your DV stack
+ ```
+ After overriding the changes, you will need to recreate your DV stack using the updated files.
+ By following one of these approaches, you should be able to handle Git conflicts when pulling the latest changes to your repository, either preserving your changes or overriding them as per your requirements.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.2/README.md b/docs/versioned_docs/version-v0.19.2/README.md
new file mode 100644
index 0000000000..76263c2219
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/README.md
@@ -0,0 +1,2 @@
+# version-v0.19.2
+
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/README.md b/docs/versioned_docs/version-v0.19.2/advanced/README.md
new file mode 100644
index 0000000000..965416d689
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/README.md
@@ -0,0 +1,2 @@
+# advanced
+
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/adv-docker-configs.md b/docs/versioned_docs/version-v0.19.2/advanced/adv-docker-configs.md
new file mode 100644
index 0000000000..d14de53e8b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/adv-docker-configs.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 8
+description: Use advanced docker-compose features to have more flexibility and power to change the default configuration.
+---
+
+# Advanced Docker Configs
+
+:::info
+This section is intended for *docker power users*, i.e., for those who are familiar with working with `docker-compose` and want to have more flexibility and power to change the default configuration.
+:::
+
+We use the "Multiple Compose Files" feature, which provides a powerful way to override any configuration in `docker-compose.yml` without modifying git-checked-in files, since modifying those results in conflicts when upgrading this repo.
+See [this page](https://docs.docker.com/compose/extends/#multiple-compose-files) for more details.
+
+There are some additional compose files in [this repository](https://github.com/ObolNetwork/charon-distributed-validator-node/), `compose-debug.yml` and `docker-compose.override.yml.sample`, along with the default `docker-compose.yml` file, that you can use for this purpose.
+
+- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To achieve this, you can run:
+
+```
+docker compose -f docker-compose.yml -f compose-debug.yml up
+```
+
+- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
+
+- To use it, just copy the sample file to `docker-compose.override.yml` and customise it to your liking. Please create this file ONLY when you want to tweak something. This is because the default override file is empty and docker errors if you provide an empty compose file.
+
+```
+cp docker-compose.override.yml.sample docker-compose.override.yml
+
+# Tweak docker-compose.override.yml and then run docker compose up
+docker compose up
+```
+
+- You can also run all these compose files together. This is desirable when you want to use both features. For example, you may want to have some debugging containers AND also want to override some defaults. To achieve this, you can run:
+
+```
+docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
+```
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/monitoring.md b/docs/versioned_docs/version-v0.19.2/advanced/monitoring.md
new file mode 100644
index 0000000000..fdbec169b9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/monitoring.md
@@ -0,0 +1,100 @@
+---
+sidebar_position: 4
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+# Getting Started Monitoring your Node
+
+Welcome to this comprehensive guide, designed to assist you in effectively monitoring your Charon cluster and nodes, and setting up alerts based on specified parameters.
+
+## Pre-requisites
+
+Ensure the following software is installed:
+
+- Docker: Find the installation guide for Ubuntu **[here](https://docs.docker.com/engine/install/ubuntu/)**
+- Prometheus: You can install it using the guide available **[here](https://prometheus.io/docs/prometheus/latest/installation/)**
+- Grafana: Follow this **[link](https://grafana.com/docs/grafana/latest/setup-grafana/installation/)** to install Grafana
+
+## Import Pre-Configured Charon Dashboards
+
+- Navigate to the **[repository](https://github.com/ObolNetwork/monitoring/tree/main/dashboards)** that contains a variety of Grafana dashboards. For this demonstration, we will utilize the Charon Dashboard json.
+
+- In your Grafana interface, create a new dashboard and select the import option.
+
+- Copy the content of the Charon Dashboard json from the repository and paste it into the import box in Grafana. Click "Load" to proceed.
+
+- Finalize the import by clicking on the "Import" button. At this point, your dashboard should begin displaying metrics. Ensure your Charon client and Prometheus are operational for this to occur.
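+
+As an alternative to the UI import, the dashboard can also be pushed through Grafana's HTTP API. A hedged sketch, assuming Grafana at `localhost:3000`, an API token exported as `GRAFANA_TOKEN`, `jq` installed, and the dashboard JSON saved locally as `charon-dashboard.json` (all of these names are assumptions to adapt to your setup):
+
+```bash
+# Wrap the dashboard JSON in the payload Grafana expects and POST it
+jq -n --slurpfile d charon-dashboard.json '{dashboard: $d[0], overwrite: true}' \
+  | curl -s -X POST http://localhost:3000/api/dashboards/db \
+      -H "Authorization: Bearer $GRAFANA_TOKEN" \
+      -H "Content-Type: application/json" \
+      -d @-
+```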
+
+## Example Alerting Rules
+
+To create alerts for Node-Exporter, follow these steps based on the sample rules provided on the "Awesome Prometheus alerts" page:
+
+1. Visit the **[Awesome Prometheus alerts](https://samber.github.io/awesome-prometheus-alerts/rules.html#host-and-hardware)** page. Here, you will find lists of Prometheus alerting rules categorized by hardware, system, and services.
+
+2. Depending on your need, select the category of alerts. For example, if you want to set up alerts for your system's CPU usage, click on the 'CPU' under the 'Host & Hardware' category.
+
+3. On the selected page, you'll find specific alert rules like 'High CPU Usage'. Each rule will provide the PromQL expression, alert name, and a brief description of what the alert does. You can copy these rules.
+
+4. Paste the copied rules into a rules file referenced by your Prometheus configuration (under `rule_files`), as sketched after this list. Make sure you understand each rule before adding it to avoid unnecessary alerts.
+
+5. Finally, save and apply the configuration file. Prometheus should now trigger alerts based on these rules.
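+
+A minimal sketch of adding one such rule, assuming you keep alerting rules in `prometheus/alert-rules.yml` and reference that file from `prometheus.yml` under `rule_files` (both the file name and the threshold are assumptions to adapt):
+
+```bash
+# Append an example CPU alert rule (adapted from Awesome Prometheus alerts) to the rules file
+cat >> prometheus/alert-rules.yml <<'EOF'
+groups:
+  - name: host-alerts
+    rules:
+      - alert: HostHighCpuLoad
+        expr: 100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[2m])) * 100) > 80
+        for: 10m
+        labels:
+          severity: warning
+        annotations:
+          summary: "High CPU load on {{ $labels.instance }}"
+EOF
+```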
+
+
+For alerts specific to Charon/Alpha, refer to the alerting rules available on this [ObolNetwork/monitoring](https://github.com/ObolNetwork/monitoring/tree/main/alerting-rules).
+
+## Understanding Alert Rules
+
+1. `ClusterBeaconNodeDown`: This alert is activated when the beacon node in a specified Alpha cluster is offline. The beacon node is crucial for validating transactions and producing new blocks. Its unavailability could disrupt the overall functionality of the cluster.
+2. `ClusterBeaconNodeSyncing`: This alert indicates that the beacon node in a specified Alpha cluster is synchronizing, i.e., catching up with the latest blocks in the cluster.
+3. `ClusterNodeDown`: This alert is activated when a node in a specified Alpha cluster is offline.
+4. `ClusterMissedAttestations`: This alert indicates that there have been missed attestations in a specified Alpha cluster. Missed attestations may suggest that validators are not operating correctly, compromising the security and efficiency of the cluster.
+5. `ClusterInUnknownStatus`: This alert is designed to activate when a node within the cluster is detected to be in an unknown state. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` metric is 0.
+6. `ClusterInsufficientPeers`: This alert is set to activate when the number of peers for a node in the Alpha M1 Cluster #1 is insufficient. The condition is evaluated by checking whether the maximum of `app_monitoring_readyz` equals 4.
+7. `ClusterFailureRate`: This alert is activated when the failure rate of the Alpha M1 Cluster #1 exceeds a certain threshold.
+8. `ClusterVCMissingValidators`: This alert is activated if any validators in the Alpha M1 Cluster #1 are missing.
+9. `ClusterHighPctFailedSyncMsgDuty`: This alert is activated if a high percentage of sync message duties failed in the cluster. The alert is activated if the sum of the increase in failed duties tagged with "sync_message" in the last hour, divided by the sum of the increase in total duties tagged with "sync_message" in the last hour, is greater than 0.1.
+10. `ClusterNumConnectedRelays`: This alert is activated if the number of connected relays in the cluster falls to 0.
+11. `PeerPingLatency`: This alert is activated if the 90th percentile of the ping latency to the peers in a cluster exceeds 500ms within 2 minutes.
+
+## Best Practices for Monitoring Charon Nodes & Cluster
+
+- **Establish Baselines**: Familiarize yourself with the normal operation metrics like CPU, memory, and network usage. This will help you detect anomalies.
+- **Define Key Metrics**: Set up alerts for essential metrics, encompassing both system-level and Charon-specific ones.
+- **Configure Alerts**: Based on these metrics, set up actionable alerts.
+- **Monitor Network**: Regularly assess the connectivity between nodes and the network.
+- **Perform Regular Health Checks**: Consistently evaluate the status of your nodes and clusters.
+- **Monitor System Logs**: Keep an eye on logs for error messages or unusual activities.
+- **Assess Resource Usage**: Ensure your nodes are neither over- nor under-utilized.
+- **Automate Monitoring**: Use automation to ensure no issues go undetected.
+- **Conduct Drills**: Regularly simulate failure scenarios to fine-tune your setup.
+- **Update Regularly**: Keep your nodes and clusters updated with the latest software versions.
+
+## Third-Party Services for Uptime Testing
+
+- [updown.io](https://updown.io/)
+- [Grafana synthetic Monitoring](https://grafana.com/grafana/plugins/grafana-synthetic-monitoring-app/)
+
+## Key metrics to watch to verify node health based on jobs
+
+- Node Exporter:
+
+**CPU Usage**: High or spiking CPU usage can be a sign of a process demanding more resources than it should.
+
+**Memory Usage**: If a node is consistently running out of memory, it could be due to a memory leak or simply under-provisioning.
+
+**Disk I/O**: Slow disk operations can cause applications to hang or delay responses. High disk I/O can indicate storage performance issues or a sign of high load on the system.
+
+**Network Usage**: High network traffic or packet loss can signal network configuration issues, or that a service is being overwhelmed by requests.
+
+**Disk Space**: Running out of disk space can lead to application errors and data loss.
+
+**Uptime**: The amount of time a system has been up without any restarts. Frequent restarts can indicate instability in the system.
+
+**Error Rates**: The number of errors encountered by your application. This could be 4xx/5xx HTTP errors, exceptions, or any other kind of error your application may log.
+
+**Latency**: The delay before a transfer of data begins following an instruction for its transfer.
+
+It is also important to check:
+
+- NTP clock skew
+- Process restarts and failures (eg. through `node_systemd`)
+- Alert on high error and panic log counts.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/obol-monitoring.md b/docs/versioned_docs/version-v0.19.2/advanced/obol-monitoring.md
new file mode 100644
index 0000000000..993103c2cb
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/obol-monitoring.md
@@ -0,0 +1,47 @@
+---
+sidebar_position: 5
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+
+# Push Metrics to Obol Monitoring
+
+:::info
+This is **optional** and does not confer any special privileges within the Obol Network.
+:::
+
+You may have been provided with **Monitoring Credentials** used to push distributed validator metrics to Obol's central prometheus cluster to monitor, analyze, and improve your Distributed Validator Cluster's performance.
+
+The provided credentials need to be added in `prometheus/prometheus.yml`, replacing `$PROM_REMOTE_WRITE_TOKEN`, and will look like:
+```
+obol20tnt8UC...
+```
+
+The updated `prometheus/prometheus.yml` file should look like:
+```yaml
+global:
+ scrape_interval: 30s # Set the scrape interval to every 30 seconds.
+ evaluation_interval: 30s # Evaluate rules every 30 seconds.
+
+remote_write:
+ - url: https://vm.monitoring.gcp.obol.tech/write
+ authorization:
+ credentials: obol20tnt8UC-your-credential-here...
+ write_relabel_configs:
+ - source_labels: [job]
+ regex: "charon"
+ action: keep # Keeps charon metrics and drop metrics from other containers.
+
+scrape_configs:
+ - job_name: "nethermind"
+ static_configs:
+ - targets: ["nethermind:8008"]
+ - job_name: "lighthouse"
+ static_configs:
+ - targets: ["lighthouse:5054"]
+ - job_name: "charon"
+ static_configs:
+ - targets: ["charon:3620"]
+ - job_name: "lodestar"
+ static_configs:
+ - targets: [ "lodestar:5064" ]
+```
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/quickstart-builder-api.md b/docs/versioned_docs/version-v0.19.2/advanced/quickstart-builder-api.md
new file mode 100644
index 0000000000..569062ca65
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/quickstart-builder-api.md
@@ -0,0 +1,163 @@
+---
+sidebar_position: 2
+description: Run a distributed validator cluster with the builder API (MEV-Boost)
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Enable MEV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+This quickstart guide focuses on configuring the builder API for Charon and supported validator and consensus clients.
+
+## Getting started with Charon & the Builder API
+
+Running a distributed validator cluster with the builder API enabled will give the validators in the cluster access to the builder network. This builder network is a network of "Block Builders"
+who work with MEV searchers to produce the most valuable blocks a validator can propose.
+
+[MEV-Boost](https://boost.flashbots.net/) is one such product from flashbots that enables you to ask multiple
+block relays (who communicate with the "Block Builders") for blocks to propose. The block that pays the largest reward to the validator will be signed and returned to the relay for broadcasting to the wider
+network. The end result for the validator is generally an increased APR as they receive some share of the MEV.
+
+:::info
+Before completing this guide, please check your cluster version, which can be found inside the `cluster-lock.json` file. If you are using cluster-lock version `1.7.0` or a higher release version, Obol seamlessly accommodates all validator client implementations within an MEV-enabled distributed validator cluster.
+
+For clusters with a cluster-lock version `1.6.0` and below, charon is compatible only with [Teku](https://github.com/ConsenSys/teku). Use the version history feature of this documentation to see the instructions for configuring a cluster in that manner (`v0.16.0`).
+:::
+
+## Client configuration
+
+:::note
+You need to add CLI flags to your consensus client, charon client, and validator client, to enable the builder API.
+
+You need all operators in the cluster to have their nodes properly configured to use the builder API, or you risk missing a proposal.
+:::
+
+### Charon
+
+Charon supports the builder API via the `--builder-api` flag. To use the builder API, simply add this flag to the `charon run` command:
+
+```
+charon run --builder-api
+```
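+
+If you manage charon with docker compose, the same setting can usually be applied via an environment variable instead of a CLI flag, following charon's convention of mapping flags to `CHARON_`-prefixed variables. Treat the exact variable name below as an assumption and verify it against your charon version:
+
+```shell
+# In the node's .env file (assumed flag-to-env mapping)
+CHARON_BUILDER_API=true
+```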
+
+### Consensus Clients
+
+The following flags need to be configured on your chosen consensus client. A Flashbots relay URL is provided for example purposes; you should choose a relay that suits your preferences from [this list](https://github.com/eth-educators/ethstaker-guides/blob/main/MEV-relay-list.md#mev-relay-list-for-mainnet).
+
+
+
+ Teku can communicate with a single relay directly:
+
+
+ {String.raw`--builder-endpoint="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`--builder-endpoint=http://mev-boost:18550`}
+
+
+
+
+ Lighthouse can communicate with a single relay directly:
+
+
+ {String.raw`lighthouse bn --builder "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`lighthouse bn --builder "http://mev-boost:18550"`}
+
+
+
+
+
+
+ {String.raw`prysm beacon-chain --http-mev-relay "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
+
+
+ {String.raw`--payload-builder=true --payload-builder-url="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+      You should also consider adding `--local-block-value-boost 3` as a flag, to favour locally built blocks if they are within 3% of the value of the relay block, to improve the chances of a successful proposal.
+
+
+
+
+ {String.raw`--builder --builder.urls "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
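+
+If you take the MEV-Boost sidecar route shown above, the consensus client needs an MEV-Boost instance reachable at the URL you configured (e.g. `http://mev-boost:18550`). The sketch below is illustrative only: the relay URL is the same example relay used above, and how the container is attached to your compose network is deployment specific.
+
+```shell
+docker run --name mev-boost -p 18550:18550 flashbots/mev-boost \
+  -mainnet \
+  -relay-check \
+  -addr 0.0.0.0:18550 \
+  -relays https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net
+```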
+
+### Validator Clients
+
+The following flags need to be configured on your chosen validator client:
+
+
+
+
+
+ {String.raw`teku validator-client --validators-builder-registration-default-enabled=true`}
+
+
+
+
+
+
+
+ {String.raw`lighthouse vc --builder-proposals`}
+
+
+
+
+
+
+ {String.raw`prysm validator --enable-builder`}
+
+
+
+
+
+
+ {String.raw`--payload-builder=true`}
+
+
+
+
+
+
+ {String.raw`--builder="true" --builder.selection="builderonly"`}
+
+
+
+
+
+## Verify your cluster is correctly configured
+
+It can be difficult to confirm everything is configured correctly with your cluster until a proposal opportunity arrives, but here are some things you can check.
+
+When your cluster is running, check whether charon is logging something like this each epoch:
+```
+13:10:47.094 INFO bcast Successfully submitted validator registration to beacon node {"delay": "24913h10m12.094667699s", "pubkey": "84b_713", "duty": "1/builder_registration"}
+```
+
+This indicates that your charon node is successfully registering with the relay for a blinded block when the time comes.
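+
+A quick way to check for these registration logs, assuming the docker compose setup from the quickstart where the charon service is named `charon`:
+
+```shell
+docker compose logs charon | grep builder_registration
+```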
+
+If you are using the [ultrasound relay](https://relay.ultrasound.money), you can enter your cluster's distributed validator public key(s) into their website, to confirm they also see the validator as correctly registered.
+
+You should check that your validator client's logs look healthy, and ensure that you haven't added a `fee-recipient` address that conflicts with what has been selected by your cluster in your cluster-lock file, as that may prevent your validator from producing a signature for the block when the opportunity arises. You should also confirm the same for all of the other peers in your cluster.
+
+Once a proposal has been made, you should look at the `Block Extra Data` field under `Execution Payload` for the block on [Beaconcha.in](https://beaconcha.in/block/18450364), and confirm there is text present; this generally suggests the block came from a builder rather than being locally constructed.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/quickstart-combine.md b/docs/versioned_docs/version-v0.19.2/advanced/quickstart-combine.md
new file mode 100644
index 0000000000..d18c1919ea
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/quickstart-combine.md
@@ -0,0 +1,112 @@
+---
+sidebar_position: 9
+description: Combine distributed validator private key shares to recover the validator private key.
+---
+
+# Combine DV private key shares
+
+:::warning
+Reconstituting Distributed Validator private key shares into a standard validator private key is a security risk, and can potentially cause your validator to be slashed.
+
+Only combine private keys as a last resort and do so with extreme caution.
+:::
+
+Combine distributed validator private key shares into an Ethereum validator private key.
+
+## Pre-requisites
+
+- Ensure you have the `.charon` directories of at least a threshold of the cluster's node operators.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Set up the key combination directory tree
+
+Rename each cluster node operator's `.charon` directory distinctly to avoid folder name conflicts.
+
+We suggest naming them clearly and distinctly, to avoid confusion.
+
+At the end of this process, you should have a tree like this:
+
+```shell
+$ tree ./cluster
+
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+...
+└── node*
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+:::warning
+Make sure to never mix the various `.charon` directories with one another.
+
+Doing so can potentially cause the combination process to fail.
+:::
+
+## Step 2. Combine the key shares
+
+Run the following command:
+
+```sh
+# Combine a cluster's private keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.2 combine --cluster-dir /opt/charon/cluster --output-dir /opt/charon/combined
+```
+
+This command will store the combined keys in the `output-dir`, in this case a folder named `combined`.
+
+```shell
+$ tree combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+
+We can cross-check the combined keys against the distributed validator public keys recorded in the lock file:
+
+```shell
+$ jq .distributed_validators[].distributed_public_key cluster/node0/cluster-lock.json
+"0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd"
+"0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106"
+```
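+
+As an additional sanity check, the `pubkey` field of each combined EIP-2335 keystore can be compared against the distributed public keys above. A small sketch (note that the keystore field is typically not `0x`-prefixed):
+
+```shell
+jq -r .pubkey combined/keystore-0.json
+jq -r .pubkey combined/keystore-1.json
+```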
+
+:::info
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+Ensure your distributed validator cluster is completely shut down before starting a replacement validator or you are likely to be slashed.
+:::
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/quickstart-sdk.md b/docs/versioned_docs/version-v0.19.2/advanced/quickstart-sdk.md
new file mode 100644
index 0000000000..96213cbc65
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/quickstart-sdk.md
@@ -0,0 +1,133 @@
+---
+sidebar_position: 1
+description: Create a DV cluster using the Obol Typescript SDK
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Create a DV using the SDK
+
+:::warning
+The Obol-SDK is in a beta state and should be used with caution on testnets only.
+:::
+
+This is a walkthrough of using the [Obol-SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk) to propose a four-node distributed validator cluster for creation using the [DV Launchpad](../dvl/intro.md).
+
+## Pre-requisites
+
+- You have [node.js](https://nodejs.org/en) installed.
+
+## Install the package
+
+Install the Obol-SDK package into your development environment
+
+
+
+
+ npm install --save @obolnetwork/obol-sdk
+
+
+
+
+ yarn add @obolnetwork/obol-sdk
+
+
+
+
+## Instantiate the client
+
+The first thing you need to do is create an instance of the Obol SDK client. The client takes two constructor parameters:
+
+- The `chainID` for the chain you intend to use.
+- An ethers.js [signer](https://docs.ethers.org/v6/api/providers/#Signer-signTypedData) object.
+
+```ts
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// Create a dummy ethers signer object with a throwaway private key
+const mnemonic = ethers.Wallet.createRandom().mnemonic?.phrase || "";
+const privateKey = ethers.Wallet.fromPhrase(mnemonic).privateKey;
+const wallet = new ethers.Wallet(privateKey);
+const signer = wallet.connect(null);
+
+// Instantiate the Obol Client for goerli
+const obol = new Client({ chainId: 5 }, signer);
+```
+
+## Propose the cluster
+
+List the Ethereum addresses of participating operators, along with withdrawal and fee recipient address data for each validator you intend for the operators to create.
+
+```ts
+// A config hash is a deterministic hash of the proposed DV cluster configuration
+const configHash = await obol.createClusterDefinition({
+ name: "SDK Demo Cluster",
+ operators: [
+ { address: "0xC35CfCd67b9C27345a54EDEcC1033F2284148c81" },
+ { address: "0x33807D6F1DCe44b9C599fFE03640762A6F08C496" },
+ { address: "0xc6e76F72Ea672FAe05C357157CfC37720F0aF26f" },
+ { address: "0x86B8145c98e5BD25BA722645b15eD65f024a87EC" },
+ ],
+ validators: [
+ {
+ fee_recipient_address: "0x3CD4958e76C317abcEA19faDd076348808424F99",
+ withdrawal_address: "0xE0C5ceA4D3869F156717C66E188Ae81C80914a6e",
+ },
+ ],
+});
+
+console.log(
+ `Direct the operators to https://goerli.launchpad.obol.tech/dv?configHash=${configHash} to complete the key generation process`
+);
+```
+
+## Invite the Operators to complete the DKG
+
+Once the Obol-API returns a `configHash` string from the `createClusterDefinition` method, you can use this identifier to invite the operators to the [Launchpad](../dvl/intro.md) to complete the process:
+
+1. Operators navigate to `https://.launchpad.obol.tech/dv?configHash=` and complete the [run a DV with others](../start/quickstart_group.md) flow.
+1. Once the DKG is complete, provided the operators used the `--publish` flag, the created cluster details will be posted to the Obol API
+1. The creator will be able to retrieve this data with `obol.getClusterLock(configHash)`, to use for activating the newly created validator.
+
+## Retrieve the created Distributed Validators using the SDK
+
+Once the DKG is complete, the proposer of the cluster can retrieve key data such as the validator public keys and their associated deposit data messages.
+
+```js
+const clusterLock = await obol.getClusterLock(configHash);
+```
+
+Reference lock files can be found [here](https://github.com/ObolNetwork/charon/tree/main/cluster/testdata).
+
+## Activate the DVs using the deposit contract
+
+In order to activate the distributed validators, the cluster operator can retrieve the validators' associated deposit data from the lock file and use it to craft transactions to the `deposit()` method on the deposit contract.
+
+```js
+const validatorDepositData =
+ clusterLock.distributed_validators[validatorIndex].deposit_data;
+
+const depositContract = new ethers.Contract(
+ DEPOSIT_CONTRACT_ADDRESS, // 0x00000000219ab540356cBB839Cbe05303d7705Fa for Mainnet, 0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b for Goerli
+ depositContractABI, // https://etherscan.io/address/0x00000000219ab540356cBB839Cbe05303d7705Fa#code for Mainnet, and replace the address for Goerli
+ signer
+);
+
+const TX_VALUE = ethers.parseEther("32");
+
+const tx = await depositContract.deposit(
+ validatorDepositData.pubkey,
+ validatorDepositData.withdrawal_credentials,
+ validatorDepositData.signature,
+ validatorDepositData.deposit_data_root,
+ { value: TX_VALUE }
+);
+
+const txResult = await tx.wait();
+```
+
+## Usage Examples
+
+Examples of how our SDK can be used are found [here](https://github.com/ObolNetwork/obol-sdk-examples).
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/quickstart-split.md b/docs/versioned_docs/version-v0.19.2/advanced/quickstart-split.md
new file mode 100644
index 0000000000..9299a8a7ed
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/quickstart-split.md
@@ -0,0 +1,93 @@
+---
+sidebar_position: 3
+description: Split existing validator keys
+---
+
+# Split existing validator private keys
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+
+This process should only be used if you want to split an *existing validator private key* into multiple private key shares for use in a Distributed Validator Cluster. If your existing validator is not properly shut down before the Distributed Validator starts, your validator may be slashed.
+
+If you are starting a new validator, you should follow a [quickstart guide](../start/quickstart_overview.md) instead.
+
+If you use MEV-Boost, make sure you turned off your MEV-Boost service for the time of splitting the keys, otherwise you may hit [this issue](https://github.com/ObolNetwork/charon/issues/2770).
+:::
+
+Split an existing Ethereum validator key into multiple key shares for use in an [Obol Distributed Validator Cluster](../int/key-concepts.md#distributed-validator-cluster).
+
+
+## Pre-requisites
+
+- Ensure you have the existing validator keystores (the ones to split) and passwords.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Clone the charon repo and copy existing keystore files
+
+Clone the [charon](https://github.com/ObolNetwork/charon) repo.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon.git
+
+ # Change directory
+ cd charon/
+
+ # Create a folder within this checked out repo
+ mkdir split_keys
+ ```
+
+Copy the existing validator `keystore.json` files into this new folder. Alongside each keystore, add a file with a matching filename but a `.txt` extension containing that keystore's password, e.g. `keystore-0.json` and `keystore-0.txt`.
+
+At the end of this process, you should have a tree like this:
+```shell
+├── split_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ ├── keystore-1.txt
+│ ...
+│ ├── keystore-*.json
+│ ├── keystore-*.txt
+```
+
+## Step 2. Split the keys using the charon docker command
+
+Run the following docker command to split the keys:
+
+```shell
+CHARON_VERSION= # E.g. v0.19.2
+CLUSTER_NAME= # The name of the cluster you want to create.
+WITHDRAWAL_ADDRESS= # The address you want to use for withdrawals (this is just for accuracy in your lock file; you can't change the withdrawal address of a validator that has already been deposited)
+FEE_RECIPIENT_ADDRESS= # The address you want to use for block reward and MEV payments.
+NODES= # The number of nodes in the cluster.
+
+docker run --rm -v $(pwd):/opt/charon obolnetwork/charon:${CHARON_VERSION} create cluster --name="${CLUSTER_NAME}" --withdrawal-addresses="${WITHDRAWAL_ADDRESS}" --fee-recipient-addresses="${FEE_RECIPIENT_ADDRESS}" --split-existing-keys --split-keys-dir=/opt/charon/split_keys --nodes ${NODES} --network mainnet
+```
+
+The above command will create `validator_keys` along with `cluster-lock.json` in `./cluster` for each node.
+
+Command output:
+
+```shell
+***************** WARNING: Splitting keys **********************
+ Please make sure any existing validator has been shut down for
+ at least 2 finalised epochs before starting the charon cluster,
+ otherwise slashing could occur.
+****************************************************************
+
+Created charon cluster:
+ --split-existing-keys=true
+
+./cluster/
+├─ node[0-*]/ Directory for each node
+│ ├─ charon-enr-private-key Charon networking private key for node authentication
+│ ├─ cluster-lock.json Cluster lock defines the cluster lock file which is signed by all nodes
+│ ├─ validator_keys Validator keystores and password
+│ │ ├─ keystore-*.json Validator private share key for duty signing
+│ │ ├─ keystore-*.txt Keystore password files for keystore-*.json
+```
+
+These split keys can now be used to start a charon cluster.
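+
+For example, to hand the first node's artifacts to the operator who will run it, you might copy them into that operator's node directory. The destination path below assumes a `charon-distributed-validator-node` checkout and is illustrative only:
+
+```shell
+# Copy node0's key shares, ENR private key and cluster lock into the node's .charon directory
+cp -r cluster/node0/. /path/to/charon-distributed-validator-node/.charon/
+```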
diff --git a/docs/versioned_docs/version-v0.19.2/advanced/self-relay.md b/docs/versioned_docs/version-v0.19.2/advanced/self-relay.md
new file mode 100644
index 0000000000..7af35ee894
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/advanced/self-relay.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 7
+description: Self-host a relay
+---
+
+# Self-Host a Relay
+
+If you are experiencing connectivity issues with the Obol hosted relays, or you want to improve your cluster's latency and decentralization, you can opt to host your own relay on a separate open and static internet port.
+
+```
+# Figure out your public IP
+curl v4.ident.me
+
+# Clone the repo and cd into it.
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+cd charon-distributed-validator-node
+
+# Replace 'replace.with.public.ip.or.hostname' in relay/docker-compose.yml with your public IPv4 or DNS hostname
+
+nano relay/docker-compose.yml
+
+docker compose -f relay/docker-compose.yml up
+```
+
+Test whether the relay is publicly accessible. This should return an ENR:
+`curl http://replace.with.public.ip.or.hostname:3640/enr`
+
+Ensure the ENR returned by the relay contains the correct public IP and port by decoding it with https://enr-viewer.com/.
+
+Configure **ALL** charon nodes in your cluster to use this relay:
+
+- Either by adding a flag: `--p2p-relays=http://replace.with.public.ip.or.hostname:3640/enr`
+- Or by setting the environment variable: `CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr`
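+
+For example, if a node is managed with the docker compose setup from `charon-distributed-validator-node`, the environment variable route can be a one-line addition to the node's `.env` file. The service name `charon` below is an assumption; adjust it to your compose file.
+
+```shell
+echo 'CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr' >> .env
+docker compose up -d charon
+```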
+
+Note that a local `relay/.charon/charon-enr-private-key` file will be created next to `relay/docker-compose.yml` to ensure a persisted relay ENR across restarts.
+
+A list of publicly available relays that can be used is maintained [here](../faq/risks.md).
diff --git a/docs/versioned_docs/version-v0.19.2/cf/README.md b/docs/versioned_docs/version-v0.19.2/cf/README.md
new file mode 100644
index 0000000000..5e4947f1b9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/cf/README.md
@@ -0,0 +1,2 @@
+# cf
+
diff --git a/docs/versioned_docs/version-v0.19.2/cf/bug-report.md b/docs/versioned_docs/version-v0.19.2/cf/bug-report.md
new file mode 100644
index 0000000000..9a10b3b553
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/cf/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualize the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behavior
+
+
+## Current Behavior
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+ ## What is Charon?
+
+
+
+ ## Charon explained
+ ```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+   2. Nicholas Nickleby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.19.2/cf/feedback.md b/docs/versioned_docs/version-v0.19.2/cf/feedback.md
new file mode 100644
index 0000000000..76042e28aa
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/cf/feedback.md
@@ -0,0 +1,5 @@
+# Feedback
+
+If you have followed our quickstart guides, whether you succeeded or failed at running the distributed validator, we would like to hear your feedback on the process and where you encountered difficulties.
+- Please let us know by joining and posting on our [Discord](https://discord.gg/n6ebKsX46w).
+- Also, feel free to add issues to our [GitHub repos](https://github.com/ObolNetwork).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.2/charon/README.md b/docs/versioned_docs/version-v0.19.2/charon/README.md
new file mode 100644
index 0000000000..44b46f1797
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/charon/README.md
@@ -0,0 +1,2 @@
+# charon
+
diff --git a/docs/versioned_docs/version-v0.19.2/charon/charon-cli-reference.md b/docs/versioned_docs/version-v0.19.2/charon/charon-cli-reference.md
new file mode 100644
index 0000000000..37be91456a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/charon/charon-cli-reference.md
@@ -0,0 +1,361 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+sidebar_position: 5
+---
+
+# CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.19.2`](https://github.com/ObolNetwork/charon/releases/tag/v0.19.2). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+The following are the top-level commands available to use.
+
+```markdown
+charon --help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ alpha Alpha subcommands provide early access to in-development features
+ combine Combines the private key shares of a distributed validator cluster into a set of standard validator private keys.
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Prints a new ENR for this node
+ help Help about any command
+ relay Start a libp2p relay server
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+## The `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+```
+
+### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+```
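+
+As an example, the same docker invocation pattern used elsewhere in these docs can be used to generate an ENR and its private key in the current directory:
+
+```shell
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.2 create enr
+```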
+
+### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster-lock.json` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --cluster-dir string The target folder to create the cluster in. (default "./")
+ --definition-file string Optional path to a cluster definition file or an HTTP URL. This overrides all other configuration flags.
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for cluster
+ --insecure-keys Generates insecure keystore files. This should never be used. It is not supported on mainnet.
+ --keymanager-addresses strings Comma separated list of keymanager URLs to import validator key shares to. Note that multiple addresses are required, one for each node in the cluster, with node0's keyshares being imported to the first address, node1's keyshares to the second, and so on.
+ --keymanager-auth-tokens strings Authentication bearer tokens to interact with the keymanager URLs. Don't include the "Bearer" symbol, only include the api-token.
+ --name string The cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky.
+ --nodes int The number of charon nodes in the cluster. Minimum is 3.
+ --num-validators int The number of distributed validators needed in the cluster.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version of the custom test network (in hex).
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+ --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
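+
+A sketched example invocation for a small local cluster follows; the addresses and counts are placeholders to substitute with your own values:
+
+```shell
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.2 create cluster \
+  --name="my-local-cluster" \
+  --nodes=4 \
+  --num-validators=1 \
+  --network=holesky \
+  --withdrawal-addresses="<YOUR_WITHDRAWAL_ADDRESS>" \
+  --fee-recipient-addresses="<YOUR_FEE_RECIPIENT_ADDRESS>"
+```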
+
+### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file that is used as input to the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, frost (default "default")
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky. (default "mainnet")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings [REQUIRED] Comma-separated list of each operator's Charon ENR address.
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
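+
+A sketched example invocation follows; the operator ENRs and addresses are placeholders, and the number of ENRs should match the intended cluster size:
+
+```shell
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.2 create dkg \
+  --name="my-first-dkg" \
+  --network=holesky \
+  --num-validators=1 \
+  --fee-recipient-addresses="<FEE_RECIPIENT_ADDRESS>" \
+  --withdrawal-addresses="<WITHDRAWAL_ADDRESS>" \
+  --operator-enrs="enr:-...,enr:-...,enr:-...,enr:-..."
+```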
+
+## The `dkg` subcommand
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit data for each new distributed validator. The command outputs the `cluster-lock.json` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file or an HTTP URL. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --keymanager-address string The keymanager URL to import validator keyshares.
+ --keymanager-auth-token string Authentication bearer token to interact with keymanager API. Don't include the "Bearer" symbol, only include the api-token.
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --shutdown-delay duration Graceful shutdown delay. (default 1s)
+```
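+
+With the `cluster-definition.json` in the local `.charon` directory (the default path), a typical containerised invocation looks like the sketch below; every operator runs it at roughly the same time:
+
+```shell
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.2 dkg --publish
+```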
+
+## The `run` subcommand
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster-lock.json` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs.
+ --builder-api Enables the builder api. Will only produce builder blocks. Builder API must also be enabled on the validator client. Beacon node must be connected to a builder-relay to access the builder network.
+ --debug-address string Listening address (ip and port) for the pprof and QBFT debug API.
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing.
+ --jaeger-service string Service name used for jaeger tracing. (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --manifest-file string The path to the cluster manifest file. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-manifest.pb")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus). (default "127.0.0.1:3620")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --private-key-file-lock Enables private key locking to prevent multiple instances using the same key.
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-beacon-mock-fuzz Configures simnet beaconmock to return fuzzed responses.
+ --simnet-slot-duration duration Configures slot duration in simnet beacon mock. (default 1s)
+ --simnet-validator-keys-dir string The directory containing the simnet validator key shares. (default ".charon/validator_keys")
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --synthetic-block-proposals Enables additional synthetic block proposal duties. Used for testing of rare duties.
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version in hex of the custom test network.
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API. (default "127.0.0.1:3600")
+```
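+
+As a minimal sketch of a bare-metal invocation (the beacon node URL is a placeholder; most setups instead run charon as part of the docker compose quickstart):
+
+```shell
+charon run \
+  --beacon-node-endpoints="http://<YOUR_BEACON_NODE>:5052" \
+  --lock-file=".charon/cluster-lock.json" \
+  --validator-api-address="0.0.0.0:3600"
+```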
+
+## The `combine` subcommand
+
+### Combine distributed validator key shares into a single Validator key
+
+The `combine` command combines many validator key shares into a single Ethereum validator key.
+
+```markdown
+charon combine --help
+Combines the private key shares from a threshold of operators in a distributed validator cluster into a set of validator private keys that can be imported into a standard Ethereum validator client.
+
+Warning: running the resulting private keys in a validator alongside the original distributed validator cluster *will* result in slashing.
+
+Usage:
+ charon combine [flags]
+
+Flags:
+ --cluster-dir string Parent directory containing a number of .charon subdirectories from the required threshold of nodes in the cluster. (default ".charon/cluster")
+ --force Overwrites private keys with the same name if present.
+ -h, --help Help for combine
+ --no-verify Disables cluster definition and lock file verification.
+ --output-dir string Directory to output the combined private keys to. (default "./validator_keys")
+```
+
+To run this command, one needs at least a threshold number of node operators' `.charon` directories, organized into a single folder:
+
+```shell
+tree ./cluster
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+That is, each operator's `.charon` directory must be placed in a parent directory and renamed to avoid conflicts.
+
+If, for example, the lock file defines 2 validators, each `validator_keys` directory must contain exactly 4 files: a JSON and a TXT file for each validator.
+
+Those files must be named with an increasing index associated with the validator in the lock file, starting from 0.
+
+The chosen folder name does not matter, as long as it's different from `.charon`.
+
+At the end of the process, `combine` will write the recombined validator keys into the output directory, one keystore (and password file) per distributed validator:
+
+```shell
+charon combine --cluster-dir="./cluster" --output-dir="./combined"
+tree ./combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+By default, the `combine` command will refuse to overwrite any private key that is already present in the destination directory.
+
+To force the process, use the `--force` flag.
+
+:::warning
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/EIPs/blob/master/EIPS/eip-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+**Ensure your distributed validator cluster is completely shut down for at least two epochs before starting a replacement validator or you are likely to be slashed.**
+:::
+
+## Host a relay
+
+Relays run a libp2p [circuit relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) server that allows charon clusters to perform peer discovery and for charon clients behind strict NAT gateways to be communicated with. If you want to self-host a relay for your cluster(s) the following command will start one.
+
+```markdown
+charon relay --help
+Starts a libp2p relay that charon nodes can use to bootstrap their p2p cluster
+
+Usage:
+ charon relay [flags]
+
+Flags:
+ --auto-p2pkey Automatically create a p2pkey (secp256k1 private key used for p2p authentication and ENR) if none found in data directory. (default true)
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for relay
+ --http-address string Listening address (ip and port) for the relay http server serving runtime ENR. (default "127.0.0.1:3640")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --monitoring-address string Listening address (ip and port) for the prometheus and pprof monitoring http server. (default "127.0.0.1:3620")
+ --p2p-advertise-private-addresses Enable advertising of libp2p auto-detected private addresses. This doesn't affect manually provided p2p-external-ip/hostname.
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-max-connections int Libp2p maximum number of peers that can connect to this relay. (default 16384)
+ --p2p-max-reservations int Updates max circuit reservations per peer (each valid for 30min) (default 512)
+ --p2p-relay-loglevel string Libp2p circuit relay log level. E.g., debug, info, warn, error.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+```
+You can also consider adding [alternative public relays](../faq/risks.md) to your cluster by specifying a list of `p2p-relays` in [`charon run`](#run-the-charon-middleware).
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.2/charon/cluster-configuration.md b/docs/versioned_docs/version-v0.19.2/charon/cluster-configuration.md
new file mode 100644
index 0000000000..5160727369
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/charon/cluster-configuration.md
@@ -0,0 +1,161 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+sidebar_position: 3
+---
+
+# Cluster configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client or cluster.
+
+A charon cluster is configured in two steps:
+
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+In the case of a solo operator running a cluster, the [`charon create cluster`](./charon-cli-reference.md#create-a-full-cluster-locally) command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
+## Cluster Definition File
+
+The `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+### Using the CLI
+
+The [`charon create dkg`](./charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) command is used to create the `cluster-definition.json` file which is used as input to `charon dkg`.
+
+The schema of the `cluster-definition.json` is defined as:
+
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "creator": {
+ "address": "0x123..abfc", //ETH1 address of the creator
+ "config_signature": "0x123654...abcedf" // EIP712 Signature of config_hash using creator privkey
+ },
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "config_signature": "0x123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "0x123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.2.0", // Schema version
+ "timestamp": "2022-01-01T12:00:00+00:00", // Creation timestamp
+ "num_validators": 2, // Number of distributed validators to be created in cluster-lock.json
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "validators": [
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ },
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ }
+ ],
+ "dkg_algorithm": "foo_dkg_v1", // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "0xabcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "0xabcdef...abcedef" // Final hash of all fields
+}
+```
+
+### Using the DV Launchpad
+
+- A `leader/creator` who wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader/creator` uses the user interface to configure all of the important details about the cluster including:
+ - The `Withdrawal Address` for the created validators
+ - The `Fee Recipient Address` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like DKG algorithm to use) are serialized and merklized to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the `leader/creator` is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralized backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralization of the launchpad.)
+
+## Cluster Lock File
+
+The `cluster-lock.json` has the following schema:
+
+```json
+{
+  "cluster_definition": {...}, // Cluster definition JSON, identical schema to above
+ "distributed_validators": [ // Length equal to cluster_definition.num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "abc...fed", "cfd...bfe"], // Length equal to cluster_definition.operators
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
+
+## Cluster Size and Resilience
+
+The cluster size (the number of nodes/operators in the cluster) determines the resilience of the cluster; its ability to remain operational under diverse failure scenarios.
+Larger clusters can tolerate more faulty nodes.
+However, increased cluster size implies higher operational costs and potentially higher network latency, which may negatively affect performance.
+
+Optimal cluster size is therefore a trade-off between resilience (larger is better) and cost-efficiency and performance (smaller is better).
+
+Cluster resilience can be broadly classified into two categories:
+ - **[Byzantine Fault Tolerance (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault)** - the ability to tolerate nodes that are actively trying to disrupt the cluster.
+ - **[Crash Fault Tolerance (CFT)](https://en.wikipedia.org/wiki/Fault_tolerance)** - the ability to tolerate nodes that have crashed or are otherwise unavailable.
+
+Different cluster sizes tolerate different counts of byzantine vs crash nodes.
+In practice, hardware and software crash relatively frequently, while byzantine behaviour is relatively uncommon.
+However, Byzantine Fault Tolerance is crucial for trust minimised systems like distributed validators.
+Thus, cluster size can be chosen to optimise for either BFT or CFT.
+
+The table below lists different cluster sizes and their characteristics:
+ - `Cluster Size` - the number of nodes in the cluster.
+ - `Threshold` - the minimum number of nodes that must collaborate to reach consensus quorum and to create signatures.
+ - `BFT #` - the maximum number of byzantine nodes that can be tolerated.
+ - `CFT #` - the maximum number of crashed nodes that can be tolerated.
+
+| Cluster Size | Threshold | BFT # | CFT # | Note |
+|--------------|-----------|-------|-------|------------------------------------|
+| 1 | 1 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 2 | 2 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 3 | 2 | 0 | 1 | ⚠️ Warning: CFT but not BFT! |
+| 4 | 3 | 1 | 1 | ✅ CFT and BFT optimal for 1 faulty |
+| 5 | 4 | 1 | 1 | |
+| 6 | 4 | 1 | 2 | ✅ CFT optimal for 2 crashed |
+| 7 | 5 | 2 | 2 | ✅ BFT optimal for 2 byzantine |
+| 8 | 6 | 2 | 2 | |
+| 9 | 6 | 2 | 3 | ✅ CFT optimal for 3 crashed |
+| 10 | 7 | 3 | 3 | ✅ BFT optimal for 3 byzantine |
+| 11 | 8 | 3 | 3 | |
+| 12 | 8 | 3 | 4 | ✅ CFT optimal for 4 crashed |
+| 13 | 9 | 4 | 4 | ✅ BFT optimal for 4 byzantine |
+| 14 | 10 | 4 | 4 | |
+| 15 | 10 | 4 | 5 | ✅ CFT optimal for 5 crashed |
+| 16 | 11 | 5 | 5 | ✅ BFT optimal for 5 byzantine |
+| 17 | 12 | 5 | 5 | |
+| 18 | 12 | 5 | 6 | ✅ CFT optimal for 6 crashed |
+| 19 | 13 | 6 | 6 | ✅ BFT optimal for 6 byzantine |
+| 20 | 14 | 6 | 6 | |
+| 21 | 14 | 6 | 7 | ✅ CFT optimal for 7 crashed |
+| 22 | 15 | 7 | 7 | ✅ BFT optimal for 7 byzantine |
+
+The table above is determined by the QBFT consensus algorithm with the
+following formulas from [this](https://arxiv.org/pdf/1909.10194.pdf) paper:
+
+```
+n = cluster size
+
+Threshold: min number of honest nodes required to reach quorum given size n
+Quorum(n) = ceiling(2n/3)
+
+BFT #: max number of faulty (byzantine) nodes given size n
+f(n) = floor((n-1)/3)
+
+CFT #: max number of unavailable (crashed) nodes given size n
+crashed(n) = n - Quorum(n)
+```
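+
+As a quick sanity check, the values in the table can be reproduced with a small shell sketch (illustrative only, not part of charon):
+
+```bash
+# Minimal sketch: compute threshold, BFT # and CFT # for a cluster size n.
+n=10
+threshold=$(( (2*n + 2) / 3 ))   # ceiling(2n/3)
+bft=$(( (n - 1) / 3 ))           # floor((n-1)/3)
+cft=$(( n - threshold ))         # n - Quorum(n)
+echo "size=$n threshold=$threshold BFT=$bft CFT=$cft"
+# -> size=10 threshold=7 BFT=3 CFT=3
+```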
diff --git a/docs/versioned_docs/version-v0.19.2/charon/dkg.md b/docs/versioned_docs/version-v0.19.2/charon/dkg.md
new file mode 100644
index 0000000000..bcea7c64b0
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/charon/dkg.md
@@ -0,0 +1,73 @@
+---
+description: Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+sidebar_position: 2
+---
+
+# Distributed Key Generation
+
+## Overview
+
+A [**distributed validator key**](../int/key-concepts.md#distributed-validator-key) is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+To make a distributed validator with no fault-tolerance (i.e. all nodes need to be online to sign every message), due to the BLS signature scheme used by Proof of Stake Ethereum, each key share could be chosen by operators independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together (4 randomly chosen points on a graph don't all necessarily sit on the same order three curve). To do this in a secure manner with no one party being trusted to distribute the keys requires what is known as a [**distributed key generation ceremony**](../int/key-concepts.md#distributed-validator-key-generation-ceremony).
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](../charon/cluster-configuration.md).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign a message with this address to authorize their charon client to take part in the DKG ceremony.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p/tree/master/p2p/security/noise). These keys need to be created (and backed up) by each operator before they can participate in a cluster creation.
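+
+For illustration, an operator would typically create this key pair (and its ENR) with charon's `create enr` subcommand before the ceremony; the invocation below is a sketch, so consult the CLI reference for your version:
+
+```bash
+# Sketch: create a charon P2P key pair and print the corresponding ENR.
+charon create enr
+# Writes the private key to .charon/charon-enr-private-key (back this file up)
+# and prints an "enr:-..." record to share with the cluster creator.
+```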
+
+## Cluster Definition Creation
+
+This cluster definition specifies the intended cluster configuration before keys have been created in a distributed key generation ceremony. The `cluster-definition.json` file can be created with the help of the [Distributed Validator Launchpad](./cluster-configuration.md#using-the-dv-launchpad) or via the [CLI](./cluster-configuration.md#using-the-cli).
+
+## Carrying out the DKG ceremony
+
+Once all participants have signed the cluster definition, they can load the `cluster-definition` file into their charon client, and the client will attempt to complete the DKG.
+
+Charon will read the ENRs in the definition, confirm that its ENR is present, and then will reach out to relays that are deployed to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which will update the IP address, increment the ENR's nonce and re-sign it with the client's private key. If an ENR with a higher nonce is seen by a charon client, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required, charon does the work and outputs the following files to each machine and then exits.
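+
+As a sketch, each operator typically starts the ceremony from the directory containing their `.charon` folder; the flag shown is an assumption based on the charon CLI and may be omitted when the default path is used:
+
+```bash
+# Sketch: run the DKG against the signed cluster definition.
+charon dkg --definition-file=".charon/cluster-definition.json"
+```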
+
+## Backing up the ceremony artifacts
+
+At the end of a DKG ceremony, each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participants old keys out of a distributed validator in favor of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However for now, without a backup, the safest thing to do would be to exit the validator.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+ - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../charon/cluster-configuration.md).
diff --git a/docs/versioned_docs/version-v0.19.2/charon/intro.md b/docs/versioned_docs/version-v0.19.2/charon/intro.md
new file mode 100644
index 0000000000..e53940651d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/charon/intro.md
@@ -0,0 +1,69 @@
+---
+sidebar_position: 1
+description: Charon - The Distributed Validator Client
+---
+
+# Introduction
+
+This section introduces and outlines the Charon _\[kharon]_ middleware, Obol's implementation of DVT. Please see the [key concepts](../int/key-concepts.md) section as background and context.
+
+## What is Charon?
+
+Charon is a GoLang-based, HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator together. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+
+
+## Charon Architecture
+
+Charon is an Ethereum proof of stake distributed validator (DV) client. Like any validator client, its main purpose is to perform validation duties for the Beacon Chain, primarily attestations and block proposals. The beacon client handles a lot of the heavy lifting, leaving the validator client to focus on fetching duty data, signing that data, and submitting it back to the beacon client.
+
+Charon is designed as a generic event-driven workflow with different components coordinating to perform validation duties. All duties follow the same flow, the only difference being the signed data. The workflow can be divided into phases consisting of one or more components:
+
+
+
+### Determine **when** duties need to be performed
+
+The beacon chain is divided into [slots](https://eth2book.info/bellatrix/part3/config/types/#slot) and [epochs](https://eth2book.info/bellatrix/part3/config/types/#epoch), deterministic fixed-size chunks of time. The first step is to determine when (which slot/epoch) duties need to be performed. This is done by the `scheduler` component. It queries the beacon node to detect which validators defined in the cluster lock are active, and what duties they need to perform for the upcoming epoch and slots. When such a slot starts, the `scheduler` emits an event indicating which validator needs to perform what duty.
+
+### Fetch and come to consensus on **what** data to sign
+
+A DV cluster consists of multiple operators each provided with one of the M-of-N threshold BLS private key shares per validator. The key shares are imported into the validator clients which produce partial signatures. Charon threshold aggregates these partial signatures before broadcasting them to the Beacon Chain. _But to threshold aggregate partial signatures, each validator must sign the same data._ The cluster must therefore coordinate and come to a consensus on what data to sign.
+
+`Fetcher` fetches the unsigned duty data from the beacon node upon receiving an event from `Scheduler`.\
+For attestations, this is the unsigned attestation, for block proposals, this is the unsigned block.
+
+The `Consensus` component listens to events from Fetcher and starts a [QBFT](https://docs.goquorum.consensys.net/configure-and-manage/configure/consensus-protocols/qbft/) consensus game with the other Charon nodes in the cluster for that specific duty and slot. When consensus is reached, the resulting unsigned duty data is stored in the `DutyDB`.
+
+### **Wait** for the VC to sign
+
+Charon is a **middleware** distributed validator client. That means Charon doesn’t have access to the validator private key shares and cannot sign anything on demand. Instead, operators import the key shares into industry-standard validator clients (VC) that are configured to connect to their local Charon client instead of their local Beacon node directly.
+
+Charon, therefore, serves the [Ethereum Beacon Node API](https://ethereum.github.io/beacon-APIs/#/) from the `ValidatorAPI` component and intercepts some endpoints while proxying other endpoints directly to the upstream Beacon node.
+
+The VC queries the `ValidatorAPI` for unsigned data which is retrieved from the `DutyDB`. It then signs it and submits it back to the `ValidatorAPI` which stores it in the `PartialSignatureDB`.
+
+### **Share** partial signatures
+
+The `PartialSignatureDB` stores the partially signed data submitted by the local Charon client’s VC. But it also stores all the partial signatures submitted by the VCs of other peers in the cluster. This is achieved by the `PartialSignatureExchange` component that exchanges partial signatures between all peers in the cluster. All charon clients, therefore, store all partial signatures the cluster generates.
+
+### **Threshold Aggregate** partial signatures
+
+The `SignatureAggregator` is invoked as soon as sufficient (any M of N) partial signatures are stored in the `PartialSignatureDB`. It performs BLS threshold aggregation of the partial signatures resulting in a final signature that is valid for the beacon chain.
+
+### **Broadcast** final signature
+
+Finally, the `Broadcaster` component broadcasts the final threshold aggregated signature to the Beacon client, thereby completing the duty.
+
+### Ports
+
+The following is an outline of the services that can be exposed by charon.
+
+* **:3600** - The validator REST API. This is the port that serves the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/). This is the port validator clients should talk to instead of their standard consensus client REST API port. Charon subsequently proxies these requests to the upstream consensus client specified by `--beacon-node-endpoints`.
+* **:3610** - Charon P2P port. This is the port that charon clients use to communicate with one another via TCP. This endpoint should be port-forwarded on your router and exposed publicly, preferably on a static IP address. This IP address should then be set on the charon run command with `--p2p-external-ip` or `CHARON_P2P_EXTERNAL_IP`.
+* **:3620** - Monitoring port. This port hosts a webserver that serves prometheus metrics on `/metrics`, a readiness endpoint on `/readyz` and a liveness endpoint on `/livez`, and a pprof server on `/debug/pprof`. This port should not be exposed publicly.
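+
+For example, the monitoring port can be probed locally as follows (a sketch assuming default ports and a charon instance running on the same host):
+
+```bash
+# Sketch: quick local checks against charon's monitoring port.
+curl http://localhost:3620/readyz              # readiness
+curl http://localhost:3620/livez               # liveness
+curl -s http://localhost:3620/metrics | head   # sample of prometheus metrics
+```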
+
+## Getting started
+
+For more information on running charon, take a look at our [Quickstart Guides](../start/quickstart_overview.md).
diff --git a/docs/versioned_docs/version-v0.19.2/charon/networking.md b/docs/versioned_docs/version-v0.19.2/charon/networking.md
new file mode 100644
index 0000000000..076981a5c4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/charon/networking.md
@@ -0,0 +1,84 @@
+---
+sidebar_position: 4
+description: Networking
+---
+
+# Charon networking
+
+## Overview
+
+This document describes Charon's networking model which can be divided into two parts: the [_internal validator stack_](networking.md#internal-validator-stack) and the [_external p2p network_](networking.md#external-p2p-network).
+
+## Internal Validator Stack
+
+
+Charon is a middleware DVT client: it connects to an upstream beacon node, and a downstream validator client connects to it. Each operator should run the whole validator stack (all 4 client software types), either on the same machine or on different machines. The networking between the nodes should be private and not exposed to the public internet.
+
+Related Charon configuration flags:
+
+* `--beacon-node-endpoints`: Connects Charon to one or more beacon nodes.
+* `--validator-api-address`: Address for Charon to listen on and serve requests from the validator client.
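+
+A minimal sketch of how these flags fit together (endpoint values are placeholders):
+
+```bash
+# Sketch: point charon at a local beacon node and expose the validator API
+# on its default port for the downstream validator client.
+charon run \
+  --beacon-node-endpoints="http://localhost:5052" \
+  --validator-api-address="0.0.0.0:3600"
+```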
+
+## External P2P Network
+
+The Charon clients in a DV cluster are connected to each other via a small p2p network consisting of only the clients in the cluster. Peer IP addresses are discovered via an external "relay" server. The p2p connections are over the public internet so the charon p2p port must be publicly accessible. Charon leverages the popular [libp2p](https://libp2p.io/) protocol.
+
+Related [Charon configuration flags](charon-cli-reference.md):
+
+* `--p2p-tcp-addresses`: Addresses for Charon to listen on and serve p2p requests.
+* `--p2p-relays`: Connect charon to one or more relay servers.
+* `--private-key-file`: Private key identifying the charon client.
+
+### LibP2P Authentication and Security
+
+Each charon client has a secp256k1 private key. The associated public key is encoded into the [cluster lock file](cluster-configuration.md#Cluster-Lock-File) to identify the nodes in the cluster. For ease of use and to align with the Ethereum ecosystem, Charon encodes these public keys in the [ENR format](https://eips.ethereum.org/EIPS/eip-778), not in [libp2p’s Peer ID format](https://docs.libp2p.io/concepts/fundamentals/peers/).
+
+:::warning
+Each Charon node's secp256k1 private key is critical for authentication and must be kept secure to prevent cluster compromise.
+
+Do not use the same key across multiple clusters, as this can lead to security issues.
+
+For more on p2p security, refer to [libp2p's article](https://docs.libp2p.io/concepts/security/security-considerations).
+:::
+
+Charon currently only supports libp2p tcp connections with [noise](https://noiseprotocol.org/) security and only accepts incoming libp2p connections from peers defined in the cluster lock.
+
+### LibP2P Relays and Peer Discovery
+
+Relays are simple, publicly accessible libp2p servers that support the [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) protocol. Circuit-relay is a libp2p transport protocol that routes traffic between two peers over a third-party “relay” peer.
+
+Obol hosts a publicly accessible relay at https://0.relay.obol.tech and will work with other organisations in the community to host alternatives. Anyone can host their own relay server for their DV cluster.
+
+Each charon node knows which peers are in the cluster from the ENRs in the cluster lock file, but their IP addresses are unknown. By connecting to the same relay, nodes establish “relay connections” to each other. Once connected via relay they exchange their known public addresses via libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol. The relay connection is then upgraded to a direct connection. If a node’s public IP changes, nodes once again connect via relay, exchange the new IP, and then connect directly once again.
+
+Note that in order for two peers to discover each other, they must connect to the same relay. Cluster operators should therefore coordinate which relays to use.
+
+Libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol attempts to automatically detect the public IP address of a charon client without the need to explicitly configure it. If this fails, however, the following two configuration flags can be used to explicitly set the publicly advertised address:
+
+* `--p2p-external-ip`: Explicitly sets the external IP address.
+* `--p2p-external-hostname`: Explicitly sets the external DNS host name.
+
+:::warning
+If a pair of charon clients are not publicly accessible, due to being behind a NAT, they will not be able to upgrade their relay connections to a direct connection. Even though this is supported, it isn’t recommended: relay connections introduce additional latency and reduced throughput, resulting in decreased validator effectiveness and possible missed block proposals and attestations.
+:::
+
+Libp2p’s circuit-relay connections are end-to-end encrypted. Even though relay servers accept connections from nodes in multiple different clusters, relays merely route opaque connections. And since Charon only accepts incoming connections from other peers in its cluster, the use of a relay doesn’t allow connections between clusters.
+
+Only the following three libp2p protocols are established between a charon node and a relay itself:
+
+* [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/): To establish relay e2e encrypted connections between two peers in a cluster.
+* [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify): Auto-detection of public IP addresses to share with other peers in the cluster.
+* [peerinfo](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfo.go): Exchanges basic application [metadata](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfopb/v1/peerinfo.proto) for improved operational metrics and observability.
+
+All other charon protocols are only established between nodes in the same cluster.
+
+### Scalable Relay Clusters
+
+In order for a charon client to connect to a relay, it needs the relay's [multiaddr](https://docs.libp2p.io/concepts/fundamentals/addressing/) (containing its public key and IP address). But a single multiaddr can only point to a single relay server which can easily be overloaded if too many clusters connect to it. Charon therefore supports resolving a relay’s multiaddr via HTTP GET request. Since charon also includes the unique `cluster-hash` header in this request, the relay provider can use [consistent header-based load-balancing](https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_steering_header-based_routing) to map clusters to one of many relays using a single HTTP address.
+
+The relay supports serving its runtime public multiaddrs via its `--http-address` flag.
+
+E.g., https://0.relay.obol.tech is actually a load balancer that routes HTTP requests to one of many relays based on the `cluster-hash` header, returning the target relay’s multiaddr, which the charon client then uses to connect to that relay.
+
+The charon `--p2p-relays` flag therefore supports both multiaddrs as well as HTTP URLs.
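+
+As an illustration of both forms (the header name, multiaddr and values below are placeholders based on the description above, not an exact specification):
+
+```bash
+# Sketch: resolve a relay multiaddr over HTTP, then pass either form to charon.
+curl -sS -H "cluster-hash: 0xabc...def" https://0.relay.obol.tech
+# -> e.g. /ip4/203.0.113.7/tcp/3640/p2p/16Uiu2HAmPlaceholderPeerID
+
+charon run --p2p-relays="https://0.relay.obol.tech"                                  # HTTP URL form
+charon run --p2p-relays="/ip4/203.0.113.7/tcp/3640/p2p/16Uiu2HAmPlaceholderPeerID"   # multiaddr form
+```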
diff --git a/docs/versioned_docs/version-v0.19.2/dvl/README.md b/docs/versioned_docs/version-v0.19.2/dvl/README.md
new file mode 100644
index 0000000000..1b694a8473
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/dvl/README.md
@@ -0,0 +1,2 @@
+# dvl
+
diff --git a/docs/versioned_docs/version-v0.19.2/dvl/intro.md b/docs/versioned_docs/version-v0.19.2/dvl/intro.md
new file mode 100644
index 0000000000..99959aff23
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/dvl/intro.md
@@ -0,0 +1,27 @@
+---
+sidebar_position: 6
+description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# DV Launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~**Eth2**~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: **The DV Launchpad**.
+
+## Getting started
+
+For more information on running charon in a UI friendly way through the DV Launchpad, take a look at our [Quickstart Guides](../start/quickstart_overview.md).
+
+## DV Launchpad Links
+
+| Ethereum Network | Launchpad |
+| ---------------- | ----------------------------------- |
+| Mainnet | https://beta.launchpad.obol.tech |
+| Holesky | https://holesky.launchpad.obol.tech |
+| Sepolia | https://sepolia.launchpad.obol.tech |
+| Goerli | https://goerli.launchpad.obol.tech |
diff --git a/docs/versioned_docs/version-v0.19.2/faq/README.md b/docs/versioned_docs/version-v0.19.2/faq/README.md
new file mode 100644
index 0000000000..456ad9139a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/faq/README.md
@@ -0,0 +1,2 @@
+# faq
+
diff --git a/docs/versioned_docs/version-v0.19.2/faq/dkg_failure.md b/docs/versioned_docs/version-v0.19.2/faq/dkg_failure.md
new file mode 100644
index 0000000000..33ffe9c496
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/faq/dkg_failure.md
@@ -0,0 +1,82 @@
+---
+sidebar_position: 4
+description: Handling DKG failure
+---
+
+# Handling DKG failure
+
+While the DKG process has been tested and validated against many different configuration instances, it can still encounter issues which might result in failure.
+
+Our DKG is designed in a way that doesn't allow for inconsistent results: either it finishes correctly for every peer, or it fails.
+
+This is a **safety** feature: you don't want to deposit an Ethereum distributed validator that not every operator is able to participate in, or that can't even reach its signing threshold.
+
+The most common source of issues lies in the network stack: if any peer's Internet connection glitches substantially, the DKG will fail.
+
+Charon's DKG doesn't allow peer reconnection once the process is started, but it does allow for re-connections before that.
+
+When you see the following message:
+
+```
+14:08:34.505 INFO dkg Waiting to connect to all peers...
+```
+
+this means your Charon instance is waiting for all the other cluster peers to start their DKG process: at this stage, peers can disconnect and reconnect at will, and the DKG process will still continue.
+
+A log line will confirm the connection of a new peer:
+
+```
+14:08:34.523 INFO dkg Connected to peer 1 of 3 {"peer": "fantastic-adult"}
+14:08:34.529 INFO dkg Connected to peer 2 of 3 {"peer": "crazy-bunch"}
+14:08:34.673 INFO dkg Connected to peer 3 of 3 {"peer": "considerate-park"}
+```
+
+As soon as all the peers are connected, this message will be shown:
+
+```
+14:08:34.924 INFO dkg All peers connected, starting DKG ceremony
+```
+
+Past this stage **no disconnections are allowed**, and _all peers must leave their terminals open_ in order for the DKG process to complete: this is a synchronous phase, and every peer is required in order to reach completion.
+
+If for some reason the DKG process fails, you would see error logs that resemble this:
+
+```
+14:28:46.691 ERRO cmd Fatal error: sync step: p2p connection failed, please retry DKG: context canceled
+```
+
+As the error message suggests, the DKG process needs to be retried.
+
+## Cleaning up the `.charon` directory
+
+One cannot simply retry the DKG process: Charon refuses to overwrite any runtime file in order to avoid inconsistencies and private key loss.
+
+When attempting to re-run a DKG with an unclean data directory -- which is either `.charon` or what was specified with the `--data-dir` CLI parameter -- this is the error that will be shown:
+
+```
+14:44:13.448 ERRO cmd Fatal error: data directory not clean, cannot continue {"disallowed_entity": "cluster-lock.json", "data-dir": "/compose/node0"}
+```
+
+The `disallowed_entity` field lists all the files that Charon refuses to overwrite, while `data-dir` is the full path of the runtime directory the DKG process is using.
+
+In order to retry the DKG process one must delete the following entities, if present:
+
+ - `validator_keys` directory
+ - `cluster-lock.json` file
+ - `deposit-data.json` file
+
+:::warning
+The `charon-enr-private-key` file **must be preserved**; failure to do so requires the DKG process to be restarted from the beginning by creating a new cluster definition.
+:::
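+
+A minimal cleanup sketch (paths assume the default `.charon` data directory; adjust them if you used `--data-dir`):
+
+```bash
+# Sketch: remove the artifacts of the failed ceremony, keeping the ENR private key.
+rm -rf .charon/validator_keys
+rm -f  .charon/cluster-lock.json .charon/deposit-data.json
+ls .charon
+# charon-enr-private-key (and cluster-definition.json, if present) should remain.
+```
+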
+If you're doing a DKG with a custom cluster definition - for example, created with `charon create dkg` rather than the Obol Launchpad - you can re-use the same file.
+
+Once this process has been completed, the cluster operators can retry a DKG.
+
+## Further debugging
+
+If for some reason the DKG process fails again, node operators are advised to reach out to the Obol team by opening an [issue](https://github.com/ObolNetwork/charon/issues), detailing what troubleshooting steps were taken and providing **debug logs**.
+
+To enable debug logs first clean up the Charon data directory as explained in [the previous paragraph](#cleaning-up-the-charon-directory), then run your DKG command by appending `--log-level=debug` at the end.
+
+In order for the Obol team to debug your issue as quickly and precisely as possible, please provide full logs in textual form, not as screenshots or photos of your display.
+
+Providing complete logs is particularly important, since it allows the team to reconstruct precisely what happened.
diff --git a/docs/versioned_docs/version-v0.19.2/faq/general.md b/docs/versioned_docs/version-v0.19.2/faq/general.md
new file mode 100644
index 0000000000..77aa3389f4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/faq/general.md
@@ -0,0 +1,121 @@
+---
+sidebar_position: 1
+description: Frequently asked questions
+---
+
+# general
+
+import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem";
+
+## Frequently asked questions
+
+### General
+
+#### Does Obol have a token?
+
+No. Distributed validators use only Ether.
+
+#### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/n6ebKsX46w) too.
+
+#### Where does the name Charon come from?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) \[kharon] is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
+
+#### What are the hardware requirements for running a Charon node?
+
+Charon alone uses negligible disk space of not more than a few MBs. However, if you are running your consensus client and execution client on the same server as charon, then you will typically need the same hardware as running a full Ethereum node:
+
+| | Charon + VC | Beacon Node |
+| ---------------------- | ----------- | ----------- |
+| **CPU\*** | 1 | 2 |
+| **RAM (GB)**           | 2           | 16          |
+| **Storage** | 100 MB | 2 TB |
+| **Internet Bandwidth** | 10 Mb/s | 10 Mb/s |
+
+| | Charon + VC | Beacon Node |
+| ---------------------- | ----------- | ----------- |
+| **CPU\*** | 2 | 4 |
+| **RAM (GB)**           | 3           | 24          |
+| **Storage** | 100 MB | 2 TB |
+| **Internet Bandwidth** | 25 Mb/s | 25 Mb/s |
+
+| | Charon + VC | Beacon Node |
+| ---------------------- | ----------- | ----------- |
+| **CPU\*** | 2 | 8 |
+| **RAM (GB)**           | 4           | 32          |
+| **Storage** | 100 MB | 2 TB |
+| **Internet Bandwidth** | 100 Mb/s | 100 Mb/s |
+
+\*if using vCPU, aim for 2x the above amounts
+
+For more hardware considerations, check out the [ethereum.org guides](https://ethereum.org/en/developers/docs/nodes-and-clients/run-a-node/#environment-and-hardware) which explores various setups and trade-offs, such as running the node locally or in the cloud.
+
+For now, Geth, Teku & Lighthouse clients are packaged within the docker compose file provided in the [quickstart guides](../start/quickstart_overview.md), so you don't have to install anything else to run a cluster. Just make sure you give them some time to sync once you start running your node.
+
+#### What is the difference between a node, a validator and a cluster?
+
+A node is a single instance of Ethereum EL+CL clients that can communicate with other nodes to maintain the Ethereum blockchain.
+
+A validator is a node that participates in the consensus process by verifying transactions and creating new blocks. Multiple validators can run from the same node.
+
+A cluster is a group of nodes that act together as one or several validators, which allows for a more efficient use of resources, reduces operational costs, and provides better reliability and fault tolerance.
+
+#### Can I migrate an existing Charon node to a new machine?
+
+It is possible to migrate your Charon node to another machine running the same config by moving the `.charon` folder with its contents to your new machine. Make sure the EL and CL clients on the new machine are synced before proceeding with the move, to minimize downtime.
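+
+For example, a migration could look like the following sketch (hostnames and paths are placeholders; stop the old node first and never run the same key shares in two places at once):
+
+```bash
+# Sketch: copy the charon runtime directory to the new machine.
+docker compose down   # stop the stack on the old machine
+rsync -av .charon/ user@new-host:~/charon-distributed-validator-node/.charon/
+# Start the stack on the new machine once its EL and CL are synced.
+```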
+
+### Distributed Key Generation
+
+#### What are the min and max numbers of operators for a Distributed Validator?
+
+Currently, the minimum is 4 operators with a threshold of 3.
+
+The threshold (aka quorum) corresponds to the minimum number of operators that need to be active for the validator(s) to be able to perform their duties. It is defined by the following formula: `n-(ceil(n/3)-1)`. We strongly recommend using this default threshold in your DKG as it maximises liveness while maintaining BFT safety. Setting up a 4-out-of-4 cluster, for example, would make your validator more vulnerable to going offline, not less. You can check the recommended threshold values for a cluster [here](../int/key-concepts.md).
+
+### Obol Splits
+
+#### What are Obol Splits?
+
+Obol Splits refers to a collection of composable smart contracts that enable the splitting of validator rewards and/or principal in a non-custodial, trust-minimised manner. Obol Splits contains integrations to enable DVs within Lido, Eigenlayer, and in future a number of other LSPs.
+
+#### Are Obol Splits non-custodial?
+
+Yes. Unless you were to decide to [deploy an editable splitter contract](general.md#can-i-change-the-percentages-in-a-split), Obol Splits are immutable, non-upgradeable, non-custodial, and oracle-free.
+
+#### Can I change the percentages in a split?
+
+Generally Obol Splits are deployed in an immutable fashion, meaning you cannot edit the percentages after deployment. However, if you were to choose to deploy a _controllable_ splitter contract when creating your Split, then yes, the address you select as controller can update the split percentages arbitrarily. A common pattern for this use case is to use a Gnosis SAFE as the controller address for the split, giving a group of entities (usually the operators and principal provider) the ability to update the percentages if need be. A well known example of this pattern is the [Protocol Guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html).
+
+#### How do Obol Splits work?
+
+You can read more about how Obol Splits work [here](../sc/introducing-obol-splits.md).
+
+#### Are Obol Splits open source?
+
+Yes, Obol Splits are licensed under GPLv3 and the source code is available [here](https://github.com/ObolNetwork/obol-splits).
+
+#### Are Obol Splits audited?
+
+The Obol Splits contracts have been audited, though further development has continued on the contracts since. Consult the audit results [here](../sec/smart_contract_audit.md).
+
+#### Are the Obol Splits contracts verified on Etherscan?
+
+Yes, you can view the verified contracts on Etherscan. A list of the contract deployments can be found [here](https://github.com/ObolNetwork/obol-splits?#deployment).
+
+#### Does my cold wallet have to call the Obol Splits contracts?
+
+No. Any address can trigger the contracts to move the funds, they do not need to be a member of the Split either. You can set your cold wallet/custodian address as the recipient of the principal and rewards, and use any hot wallet to pay the gas fees to push the ether into the recipient address.
+
+#### Are there any edge cases I should be aware of when using Obol Splits?
+
+The most important decision is to be aware of whether or not the Split contract you are using has been set up with editability. If a splitter is editable, you should understand what the address that can edit the split is able to do. Is the editor an EOA? Who controls that address? How secure is their seed phrase? Is it a smart contract? What can that contract do? Can the controller contract be upgraded? etc. Generally, the safest approach from Obol's perspective is not to have an editable splitter; if in future you are unhappy with the configuration, exit the validator and create a fresh cluster with new settings that fit your needs.
+
+Another aspect to be aware of is how the splitting of principal from rewards works using the Optimistic Withdrawal Recipient contract. There are edge cases relating to not calling the contracts periodically or ahead of a withdrawal, activating more validators than the contract was configured for, and a worst case mass slashing on the network. Consult the documentation on the contract [here](../sc/introducing-obol-splits.md#optimistic-withdrawal-recipient), its audit [here](../sec/smart_contract_audit.md), and follow up with the core team if you have further questions.
+
+### Debugging Errors in Logs
+
+You can check if the containers on your node are outputting errors by running `docker compose logs` on a machine with a running cluster.
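+
+For example (the service name is illustrative; list the services in your stack with `docker compose ps`):
+
+```bash
+cd charon-distributed-validator-node
+docker compose ps               # list running services
+docker compose logs -f charon   # follow the logs of a single service
+```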
+
+Diagnose some common errors and view their resolutions [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/faq/errors.mdx).
diff --git a/docs/versioned_docs/version-v0.19.2/faq/risks.md b/docs/versioned_docs/version-v0.19.2/faq/risks.md
new file mode 100644
index 0000000000..ddb27816f7
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/faq/risks.md
@@ -0,0 +1,41 @@
+---
+sidebar_position: 3
+description: Centralization Risks and mitigation
+---
+
+# Centralization risks and mitigation
+
+## Risk: Obol hosting the relay infrastructure
+**Mitigation**: Self-host a relay
+
+One of the risks associated with Obol hosting the [LibP2P relays](../charon/networking.md) infrastructure allowing peer discovery is that if Obol-hosted relays go down, peers won't be able to discover each other and perform the DKG. To mitigate this risk, external organizations and node operators can consider self-hosting a relay. This way, if Obol's relays go down, the clusters can still operate through other relays in the network. Ensure that all nodes in the cluster use the same relays, or they will not be able to find each other if they are connected to different relays.
+
+The following non-Obol entities run relays that you can consider adding to your cluster (you can have more than one per cluster, see the `--p2p-relays` flag of [`charon run`](../charon/charon-cli-reference.md#the-run-command)):
+
+| Entity | Relay URL |
+|-----------|---------------------------------------|
+| [DSRV](https://www.dsrvlabs.com/) | https://charon-relay.dsrvlabs.dev |
+| [Infstones](https://infstones.com/) | https://obol-relay.infstones.com:3640/ |
+| [Hashquark](https://www.hashquark.io/) | https://relay-2.prod-relay.721.land/ |
+| [Figment](https://figment.io/) | https://relay-1.obol.figment.io/ |
+| [Node Guardians](https://nodeguardians.io/) | https://obol-relay.nodeguardians.io/ |
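+
+For example, a node can be configured with more than one relay so that peer discovery keeps working if any single relay goes down (a sketch; other flags omitted, and all nodes in the cluster should use the same list):
+
+```bash
+# Sketch: configure multiple relays (comma-separated) for redundancy.
+charon run \
+  --p2p-relays="https://0.relay.obol.tech,https://charon-relay.dsrvlabs.dev"
+```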
+
+## Risk: Obol being able to update Charon code
+**Mitigation**: Pin specific docker versions or compile from source on a trusted commit
+
+Another risk associated with Obol is the Labs team having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) used by node operators within DV clusters, which could introduce vulnerabilities or malicious code. To mitigate this risk, operators can consider pinning specific versions of the Docker image or git repo that have been [thoroughly tested](../sec/overview.md#list-of-security-audits-and-assessments) and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community, and only introduced into a running cluster gradually. The labs team will strive to communicate the security or operational impact any charon update entails, giving operators the chance to decide whether they want potential performance or quality of experience improvements, or whether they remain on a trusted version for longer.
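+
+For instance, an operator might pin an exact, reviewed release rather than track a floating tag (the image name and tag below are shown for illustration):
+
+```bash
+# Sketch: pin a specific charon release...
+docker pull obolnetwork/charon:v0.19.2
+# ...or build from a commit/tag you have reviewed yourself:
+git clone https://github.com/ObolNetwork/charon
+cd charon && git checkout v0.19.2
+```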
+
+## Risk: Obol hosting the DV Launchpad
+**Mitigation**: Use [`create cluster`](../charon/charon-cli-reference.md#the-create-command) or [`create dkg`](../charon/charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) locally and distribute the files manually
+
+Hosting the first Charon frontend, the [DV Launchpad](../dvl/intro.md), on a centralized server could create a single point of failure, as users would have to rely on Obol's server to access the protocol. This could limit the decentralization of the protocol and could make it vulnerable to attacks or downtime. Obol hosting the launchpad on a decentralized network, such as IPFS, is a first step but not enough. This is why the Charon code is open-source and contains a CLI interface to interact with the protocol locally.
+
+To mitigate the risk of launchpad failure, consider using the `create cluster` or `create dkg` commands locally and distributing the generated files manually.
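+
+A sketch of the local flow (flag names and values are assumptions; consult the [CLI reference](../charon/charon-cli-reference.md) for the exact invocation):
+
+```bash
+# Sketch: the coordinator creates a cluster definition locally...
+charon create dkg \
+  --name="my-cluster" \
+  --num-validators=1 \
+  --network=goerli \
+  --operator-enrs="enr:-...operator1,enr:-...operator2,enr:-...operator3,enr:-...operator4" \
+  --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
+  --withdrawal-addresses="0x0000000000000000000000000000000000000000"
+# ...then distributes the resulting cluster-definition.json to each operator
+# out-of-band, and every operator runs `charon dkg` against the same file.
+```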
+
+
+## Risk: Obol going bust/rogue
+**Mitigation**: Use key recovery
+
+The final centralization risk associated with Obol is the possibility of the company going bankrupt or acting maliciously, which would lead to a loss of control over the network and potentially cause damage to the ecosystem. To mitigate this risk, Obol has implemented a key recovery mechanism. This would allow the clusters to continue operating and to retrieve full private keys even if Obol is no longer able to provide support.
+
+A guide to recombine key shares into a single private key can be accessed [here](../advanced/quickstart-combine.md).
diff --git a/docs/versioned_docs/version-v0.19.2/fr/README.md b/docs/versioned_docs/version-v0.19.2/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.19.2/fr/ethereum_and_dvt.md b/docs/versioned_docs/version-v0.19.2/fr/ethereum_and_dvt.md
new file mode 100644
index 0000000000..8e7857696c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/fr/ethereum_and_dvt.md
@@ -0,0 +1,54 @@
+---
+sidebar_position: 4
+description: Ethereum and its relationship with DVT
+---
+
+# Ethereum and its Relationship with DVT
+
+Our goal for this page is to equip you with the foundational knowledge needed to actively contribute to the advancement of Obol while also directing you to valuable Ethereum and DVT related resources. Additionally, we will shed light on the intersection of DVT and Ethereum, offering curated articles and blog posts to enhance your understanding.
+
+## **Understanding Ethereum**
+
+To grasp the current landscape of Ethereum's PoS development, we encourage you to delve into the wealth of information available on the [Official Ethereum Website.](https://ethereum.org/en/learn/) The Ethereum website serves as a hub for all things Ethereum, catering to individuals at various levels of expertise, whether you're just starting your journey or are an Ethereum veteran. Here, you'll find a trove of resources that cater to diverse learning needs and preferences, ensuring that there's something valuable for everyone in the Ethereum community to discover.
+
+## **DVT & Ethereum**
+
+### Distributed Validator Technology
+
+> "Distributed validator technology (DVT) is an approach to validator security that spreads out key management and signing responsibilities across multiple parties, to reduce single points of failure, and increase validator resiliency.
+>
+> It does this by splitting the private key used to secure a validator across many computers organized into a "cluster". The benefit of this is that it makes it very difficult for attackers to gain access to the key, because it is not stored in full on any single machine. It also allows for some nodes to go offline, as the necessary signing can be done by a subset of the machines in each cluster. This reduces single points of failure from the network and makes the whole validator set more robust." _(ethereum.org, 2023)_
+
+#### Learn More About Distributed Validator technology from [The Official Ethereum Website](https://ethereum.org/en/staking/dvt/)
+
+### How Does DVT Improve Staking on Ethereum?
+
+If you haven’t yet heard, Distributed Validator Technology, or DVT, is the next big thing on The Merge section of the Ethereum roadmap. Learn more about this in our blog post: [What is DVT and How Does It Improve Staking on Ethereum?](https://blog.obol.tech/what-is-dvt-and-how-does-it-improve-staking-on-ethereum/)
+
+
+
+_**Vitalik's Ethereum Roadmap**_
+
+### Deep Dive Into DVT and Charon’s Architecture
+
+Minimizing correlation is vital when designing DVT as Ethereum Proof of Stake is designed to heavily punish correlated behavior. In designing Obol, we’ve made careful choices to create a trust-minimized and non-correlated architecture.
+
+[**Read more about Designing Non-Correlation Here**](https://blog.obol.tech/deep-dive-into-dvt-and-charons-architecture/)
+
+### Performance Testing Distributed Validators
+
+In our mission to help make Ethereum consensus more resilient and decentralised with distributed validators (DVs), it’s critical that we do not compromise on the performance and effectiveness of validators. Earlier this year, we worked with MigaLabs, the blockchain ecosystem observatory located in Barcelona, to perform an independent test to validate the performance of Obol DVs under different configurations and conditions. After taking a few weeks to fully analyse the results together with MigaLabs, we’re happy to share the results of these performance tests.
+
+[**Read More About The Performance Test Results Here**](https://blog.obol.tech/performance-testing-distributed-validators/)
+
+
+
+### More Resources
+
+* [Sorting out Distributed Validator Technology](https://medium.com/nethermind-eth/sorting-out-distributed-validator-technology-a6f8ca1bbce3)
+* [A tour of Verifiable Secret Sharing schemes and Distributed Key Generation protocols](https://medium.com/nethermind-eth/a-tour-of-verifiable-secret-sharing-schemes-and-distributed-key-generation-protocols-3c814e0d47e1)
+* [Threshold Signature Schemes](https://medium.com/nethermind-eth/threshold-signature-schemes-36f40bc42aca)
+
+#### References
+
+* ethereum.org. (2023). Distributed Validator Technology. \[online] Available at: https://ethereum.org/en/staking/dvt/ \[Accessed 25 Sep. 2023].
diff --git a/docs/versioned_docs/version-v0.19.2/fr/testnet.md b/docs/versioned_docs/version-v0.19.2/fr/testnet.md
new file mode 100644
index 0000000000..d3c0ea558f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/fr/testnet.md
@@ -0,0 +1,122 @@
+---
+sidebar_position: 5
+description: Community testing efforts
+---
+
+# Community Testing
+
+:::tip
+
+This page looks at the community testing efforts organised by Obol to test Distributed Validators at scale. If you are looking for guides to run a Distributed Validator on testnet you can do so [here](../start/quickstart_overview.md).
+
+:::
+
+Over the last few years, Obol Labs has coordinated and hosted a number of progressively larger testnets to help harden the Charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the testnet roadmap, the features that were to be completed by each testnet, and their completion date and duration.
+
+## Testnets
+
+- [x] [Devnet 1](#devnet-1)
+- [x] [Devnet 2](#devnet-2)
+- [x] [Athena Public Testnet 1](#athena-public-testnet-1)
+- [x] [Bia Public Testnet 2](#bia-public-testnet-2)
+
+## Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using `docker compose` to spin up 4 Charon clients and 4 different validator clients on a single machine, with a remote consensus client. The keys were created locally in Charon and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+- A single user completes a first tutorial alone, using `docker compose` to spin up 4 Charon clients on a single machine, with a remote consensus client. The keys are created locally in Charon and activated with the existing launchpad.
+- Prove that the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works.
+- Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to Charon running across a network.
+
+## Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows **together** for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+
+This devnet was the first time `charon dkg` was tested with users. A core focus of this devnet was to collect network performance data.
+
+This was also the first time Charon was run in variable, non-virtual networks (i.e. the real internet).
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+- Groups of 4 testers complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients, each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+- Operators avoid exposing Charon to the public internet on a static IP address through the use of Obol-hosted relay nodes.
+- Users test `charon dkg`. The launchpad is not used, and this dkg is triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+- Effective collection of network performance data, to enable gathering even higher signal performance data at scale during public testnets.
+- Block proposals are in place.
+
+## Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 was to get distributed validators into the hands of the wider Obol Community for the first time. The core focus of this testnet was the onboarding experience.
+
+The core output from this testnet was a significant number of public clusters running and public feedback collected.
+
+This was an unincentivized testnet and formed the basis for us to figure out a Sybil resistance mechanism.
+
+**Participants:** Obol Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Completed date:** October 2022
+
+**Duration:** 2 weeks cluster setup, 8 weeks operation
+
+**Goals:**
+
+- Get distributed validators into the hands of the Obol Early Community for the first time.
+- Create the first public onboarding experience and gather feedback. This is the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) as possible.
+- Make deploying Ethereum validator nodes accessible using the CLI.
+- Generate a backlog of bugs, feature requests, platform requests and integration requests.
+
+## Bia Public Testnet 2
+
+This second public testnet intends to take the learning from Athena and scale the network by engaging both the wider at-home validator community and professional operators. This is the first time users are setting up DVs using the DV launchpad.
+
+This testnet is also important for learning the conditions Charon will be subjected to in production. A core output of this testnet is a large number of autonomous public DV clusters running and building up the Obol community with technical ambassadors.
+
+**Participants:** Obol Community, Ethereum staking community
+
+**State:** MVP
+
+**Network:** Görli
+
+**Completed date:** March 2023
+
+**Duration:** 2 weeks cluster setup, 4-8 weeks operation
+
+**Goals:**
+
+- Engage the wider Solo and Professional Ethereum Staking Community.
+- Get integration feedback.
+- Build confidence in Charon after running DVs on an Ethereum testnet.
+- Learn about the conditions Charon will be subjected to in production.
+- Distributed Validator returns are competitive versus single validator clients.
+- Make deploying Ethereum validator nodes accessible using the DV Launchpad.
+- Build comprehensive guides for various profiles to spin up DVs with minimal supervision from the core team.
diff --git a/docs/versioned_docs/version-v0.19.2/int/README.md b/docs/versioned_docs/version-v0.19.2/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.19.2/int/key-concepts.md b/docs/versioned_docs/version-v0.19.2/int/key-concepts.md
new file mode 100644
index 0000000000..a7ab566ee6
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/int/key-concepts.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 2
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is possible with the use of **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes some of the single points of failure in validation. Should <33% of the participating nodes in a DV cluster go offline, the remaining active nodes can still come to consensus on what to sign and can produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes Geth, Lighthouse, Charon and Teku.
+
+### Execution Client
+
+
+
+An execution client (formerly known as an Eth1 client) specializes in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/charon/intro/README.md).
+
+### Validator Client
+
+
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof of stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Threshold
+
+The number of nodes in a cluster that need to be online and honest for their distributed validators to be online is outlined in the following table, with a brief formula sketch after it.
+
+| Cluster Size | Threshold | Note |
+| :----------: | :-------: | --------------------------------------------- |
+| 4 | 3/4 | Minimum threshold |
+| 5 | 4/5 | |
+| 6 | 4/6 | Minimum to tolerate two offline nodes |
+| 7 | 5/7 | Minimum to tolerate two **malicious** nodes |
+| 8 | 6/8 | |
+| 9 | 6/9 | Minimum to tolerate three offline nodes |
+| 10 | 7/10 | Minimum to tolerate three **malicious** nodes |
+
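+The thresholds in the table above match `ceil(2n/3)` for a cluster of `n` nodes. A minimal sketch (the helper name is illustrative, not part of charon's API):
+
+```ts
+// The thresholds in the table above follow ceil(2n/3) for a cluster of n nodes.
+function clusterThreshold(clusterSize: number): number {
+  return Math.ceil((2 * clusterSize) / 3);
+}
+
+// Reproduces the table: 4 -> 3, 5 -> 4, 6 -> 4, 7 -> 5, 8 -> 6, 9 -> 6, 10 -> 7.
+for (const size of [4, 5, 6, 7, 8, 9, 10]) {
+  console.log(`${size} nodes -> threshold ${clusterThreshold(size)}`);
+}
+```
+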
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata. Read more about these ceremonies [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/charon/dkg/README.md).
diff --git a/docs/versioned_docs/version-v0.19.2/int/overview.md b/docs/versioned_docs/version-v0.19.2/int/overview.md
new file mode 100644
index 0000000000..0fad2b0a79
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/int/overview.md
@@ -0,0 +1,55 @@
+---
+sidebar_position: 1
+description: An overview of the Obol network
+---
+
+# Overview of Obol
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 35 members that are spread across the world.
+
+The core team is building the Distributed Validator Protocol, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that distributed validators will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing infrastructure.
+
+Similar to how roll-up technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking middlewares that can be adopted at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvl/intro.md), a user interface for bootstrapping Distributed Validators
+* [Charon](../charon/intro.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Splits](../sc/introducing-obol-splits.md), a set of solidity smart contracts for the distribution of rewards from Distributed Validators
+* [Obol Testnets](../fr/testnet.md), distributed validator infrastructure for Ethereum public test networks, to enable any sized operator to test their deployment before running Distributed Validators on mainnet.
+
+### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experiments with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+## The Vision
+
+The road to decentralizing stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+### V1 - Trusted Distributed Validators
+
+
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivization is still extremely valuable. Individuals and staking-as-a-service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators they trust sufficiently to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivization scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivization alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivization layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.19.2/sc/README.md b/docs/versioned_docs/version-v0.19.2/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.19.2/sc/introducing-obol-splits.md b/docs/versioned_docs/version-v0.19.2/sc/introducing-obol-splits.md
new file mode 100644
index 0000000000..2f14d01024
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sc/introducing-obol-splits.md
@@ -0,0 +1,89 @@
+---
+sidebar_position: 1
+description: Smart contracts for managing Distributed Validators
+---
+
+# Obol Splits
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators. These contracts include:
+
+- Withdrawal Recipients: Contracts used for a validator's withdrawal address.
+- Split contracts: Contracts to split ether across multiple entities, developed by [Splits.org](https://splits.org).
+- Split controllers: Contracts that can mutate a splitter's configuration.
+
+Two key goals of validator reward management are:
+
+1. To be able to differentiate reward ether from principal ether, such that node operators can be paid a percentage of the _reward_ they accrue for the principal provider rather than a percentage of _principal+reward_.
+2. To be able to withdraw the rewards in an ongoing manner without exiting the validator.
+
+Without access to the consensus layer state in the EVM to check a validator's status or balance, and because the incoming ether arrives via an irregular state transition, neither of these requirements is easily satisfiable.
+
+The following sections outline different contracts that can be composed to form a solution for one or both goals.
+
+## Withdrawal Recipients
+
+Validators have two streams of revenue, the consensus layer rewards and the execution layer rewards. Withdrawal Recipients focus on the former, receiving the balance skimming from a validator with >32 ether in an ongoing manner, and receiving the principal of the validator upon exit.
+
+### Optimistic Withdrawal Recipient
+
+This is the primary withdrawal recipient Obol uses, as it allows for the separation of reward from principal, as well as permitting the ongoing withdrawal of accruing rewards.
+
+An Optimistic Withdrawal Recipient [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipient.sol) takes three inputs when deployed:
+
+- A _principal_ address: The address that controls where the principal ether will be transferred post-exit.
+- A _reward_ address: The address where the accruing reward ether is transferred to.
+- The amount of ether that makes up the principal.
+
+This contract **assumes that any ether that has appeared in its address since it was last able to do balance accounting is skimmed reward from an ongoing validator** (or number of validators) unless the change is > 16 ether. This means balance skimming is immediately claimable as reward, while an inflow of e.g. 31 ether is tracked as a return of principal (despite being slashed in this example).
+
+:::warning
+
+Worst-case mass slashings can theoretically exceed 16 ether. If this were to occur, the returned principal would be misclassified as a reward and distributed to the wrong address. This risk is the drawback that makes this contract variant 'optimistic'. If you intend to use this contract type, **it is important you understand and accept this risk**, however minute.
+
+The alternative is to use a splits.org [waterfall contract](https://docs.splits.org/core/waterfall), which won't allow the claiming of rewards until all principal ether has been returned, meaning validators need to be exited before operators can claim their CL rewards.
+
+:::
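+
+For illustration only, here is a sketch of the accounting heuristic described above. This is not the contract's Solidity implementation; the 16 ether cut-off is the one named in the text, and the function and constant names are hypothetical:
+
+```ts
+// Sketch of the optimistic accounting rule: balance increases of 16 ether or
+// less are treated as claimable reward; larger increases are treated as
+// returned principal.
+const CLASSIFICATION_THRESHOLD_WEI = 16n * 10n ** 18n; // 16 ether, in wei
+
+function classifyInflow(inflowWei: bigint): "reward" | "principal" {
+  return inflowWei > CLASSIFICATION_THRESHOLD_WEI ? "principal" : "reward";
+}
+
+console.log(classifyInflow(31n * 10n ** 18n)); // "principal" (e.g. a slashed validator exiting)
+console.log(classifyInflow(10n ** 17n));       // "reward" (ongoing balance skimming)
+```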
+
+This contract meets both design goals and can be used with thousands of validators. It is safe to deploy an Optimistic Withdrawal Recipient with a principal higher than you actually end up using, but you should process the accrued rewards before exiting a validator, or the reward recipients will be short-changed: the exited balance may be counted as principal instead of reward the next time the contract is updated. If you activate more validators than you specified in your contract deployment, you will record too much ether as reward and will overpay your reward address with ether that was principal, not earnings. Current iterations of this contract are not designed to allow editing the amount of principal set.
+
+#### OWR Factory Deployment
+
+The OptimisticWithdrawalRecipient contract is deployed via a [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipientFactory.sol). The factory is deployed at the following addresses on the following chains.
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | [0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522](https://etherscan.io/address/0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522) |
+| Goerli | [0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26](https://goerli.etherscan.io/address/0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26) |
+| Holesky | |
+| Sepolia | [0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a](https://sepolia.etherscan.io/address/0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a) |
+
+### Exitable Withdrawal Recipient
+
+A much awaited feature for proof of stake Ethereum is the ability to trigger the exit of a validator with only the withdrawal address. This is tracked in [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002). Support for this feature will be inheritable in all other withdrawal recipient contracts. This will mitigate the risk to a principal provider of funds being stuck, or a validator being irrecoverably offline.
+
+## Split Contracts
+
+A split, or splitter, is a set of contracts that can divide ether or an ERC20 across a number of addresses. Splits are often used in conjunction with withdrawal recipients. Execution Layer rewards for a DV are directed to a split address through the use of a `fee recipient` address. Splits can be either immutable, or mutable by way of an admin address capable of updating them.
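+
+As an illustration of the concept only (not the splits.org contract interface), a split's allocation logic looks roughly like the sketch below; the type, addresses, and percentages are hypothetical:
+
+```ts
+// Hypothetical types and values for illustration; real splits are on-chain contracts.
+type SplitRecipient = { account: string; percentAllocation: number };
+
+const exampleSplit: SplitRecipient[] = [
+  { account: "0xA000000000000000000000000000000000000001", percentAllocation: 50 },
+  { account: "0xA000000000000000000000000000000000000002", percentAllocation: 30 },
+  { account: "0xA000000000000000000000000000000000000003", percentAllocation: 20 },
+];
+
+// Divide an incoming amount of wei between recipients; allocations must sum to 100.
+function distribute(amountWei: bigint, split: SplitRecipient[]): Map<string, bigint> {
+  const out = new Map<string, bigint>();
+  for (const { account, percentAllocation } of split) {
+    out.set(account, (amountWei * BigInt(percentAllocation)) / 100n);
+  }
+  return out;
+}
+```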
+
+Further information about splits can be found on the splits.org team's [docs site](https://docs.splits.org/). The addresses of their deployments can be found [here](https://docs.splits.org/core/split#addresses).
+
+## Split Controllers
+
+Splits can be completely edited through the use of the `controller` address; however, total editability of a split is not always wanted. A permissive controller and a restrictive controller are given as examples below.
+
+### (Gnosis) SAFE wallet
+
+A [SAFE](https://safe.global/) is a common method to administrate a mutable split. The most well-known deployment of this pattern is the [protocol guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html). The SAFE can arbitrarily update the split to any set of addresses with any valid set of percentages.
+
+### Immutable Split Controller
+
+This is a [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitController.sol) that updates one split configuration with another, exactly once. Only a permissioned address can trigger the change. This contract is suitable for changing a split at an unknown point in future to a configuration pre-defined at deployment.
+
+The Immutable Split Controller [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitControllerFactory.sol) can be found at the following addresses:
+
+| Chain | Address |
+|---------|-------------------------------------------------------------------------------------------------------------------------------|
+| Mainnet | |
+| Goerli | [0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f](https://goerli.etherscan.io/address/0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f) |
+| Holesky | |
+| Sepolia | |
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/README.md b/docs/versioned_docs/version-v0.19.2/sdk/README.md
new file mode 100644
index 0000000000..1bcfa0dc3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/README.md
@@ -0,0 +1,2 @@
+# sdk
+
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/classes/README.md b/docs/versioned_docs/version-v0.19.2/sdk/classes/README.md
new file mode 100644
index 0000000000..46d80f843a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/classes/README.md
@@ -0,0 +1,2 @@
+# classes
+
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/classes/client.md b/docs/versioned_docs/version-v0.19.2/sdk/classes/client.md
new file mode 100644
index 0000000000..c3857ecf87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/classes/client.md
@@ -0,0 +1,155 @@
+# Client
+
+The Obol SDK `Client` can be used for creating, managing, and activating distributed validators.
+
+### Extends
+
+* `Base`
+
+### Constructors
+
+#### new Client(config, signer)
+
+> **new Client**(`config`, `signer`?): [`Client`](client.md)
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ----------------- | -------- | --------------------- |
+| `config` | `Object` | Client configurations |
+| `config.baseUrl`? | `string` | obol-api url |
+| `config.chainId`? | `number` | Blockchain network ID |
+| `signer`? | `Signer` | ethersJS Signer |
+
+**Returns**
+
+[`Client`](client.md)
+
+Obol-SDK Client instance
+
+An example of how to instantiate obol-sdk Client: [obolClient](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L29)
+
+**Overrides**
+
+`Base.constructor`
+
+**Source**
+
+index.ts:30
+
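+A minimal instantiation sketch; the package name and chain ID below are assumptions, and any ethers.js `Signer` can be supplied:
+
+```ts
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// A throwaway signer purely for illustration; use your own ethers.js Signer in practice.
+const signer = ethers.Wallet.createRandom();
+
+// chainId 17000 corresponds to Holesky (see FORK_MAPPING); baseUrl is optional.
+const client = new Client({ chainId: 17000 }, signer);
+```
+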
+### Methods
+
+#### createClusterDefinition()
+
+> **createClusterDefinition**(`newCluster`): `Promise`< `string` >
+
+Creates a cluster definition which contains cluster configuration.
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ------------ | ----------------------------------------------------- | ----------------------- |
+| `newCluster` | [`ClusterPayload`](../type-aliases/clusterpayload.md) | The new unique cluster. |
+
+**Returns**
+
+`Promise`< `string` >
+
+config\_hash.
+
+**Throws**
+
+On duplicate entries, missing or wrong cluster keys.
+
+An example of how to use createClusterDefinition: [createObolCluster](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+**Source**
+
+index.ts:45
+
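+A hedged sketch of a call, reusing the `client` instance from the constructor example above. The addresses are placeholders, and it is assumed that only operator addresses and validator addresses are required at creation time:
+
+```ts
+const configHash: string = await client.createClusterDefinition({
+  name: "example-dv-cluster",
+  operators: [
+    { address: "0xA000000000000000000000000000000000000001" },
+    { address: "0xA000000000000000000000000000000000000002" },
+    { address: "0xA000000000000000000000000000000000000003" },
+    { address: "0xA000000000000000000000000000000000000004" },
+  ],
+  validators: [
+    {
+      fee_recipient_address: "0xB000000000000000000000000000000000000001",
+      withdrawal_address: "0xC000000000000000000000000000000000000001",
+    },
+  ],
+});
+```
+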
+***
+
+#### acceptClusterDefinition()
+
+> **acceptClusterDefinition**(`operatorPayload`, `configHash`): `Promise`< [`ClusterDefintion`](../interfaces/clusterdefintion.md) >
+
+Approves joining a cluster with specific configuration.
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ----------------- | ------------------------------------------------------- | ---------------------------------------------------------------------- |
+| `operatorPayload` | [`OperatorPayload`](../type-aliases/operatorpayload.md) | The operator data including signatures. |
+| `configHash` | `string` | The config hash of the cluster which the operator confirms joining to. |
+
+**Returns**
+
+`Promise`< [`ClusterDefintion`](../interfaces/clusterdefintion.md) >
+
+The cluster definition.
+
+**Throws**
+
+On unauthorized, duplicate entries, missing keys, not found cluster or invalid data.
+
+An example of how to use acceptClusterDefinition: [acceptClusterDefinition](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+**Source**
+
+index.ts:96
+
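+A sketch of how an operator might accept a definition, assuming the `configHash` from `createClusterDefinition` is at hand; `enr` and `version` are the required `OperatorPayload` fields, and both values below are placeholders:
+
+```ts
+const operatorPayload = {
+  enr: "enr:-placeholder-operator-record", // this operator's charon ENR (placeholder)
+  version: "v1.7.0",                       // cluster definition version (assumed value)
+};
+
+const clusterDefinition = await client.acceptClusterDefinition(operatorPayload, configHash);
+console.log(clusterDefinition.config_hash);
+```
+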
+***
+
+#### getClusterDefinition()
+
+> **getClusterDefinition**(`configHash`): `Promise`< [`ClusterDefintion`](../interfaces/clusterdefintion.md) >
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ------------ | -------- | ---------------------------------------------------------- |
+| `configHash` | `string` | The configuration hash returned in createClusterDefinition |
+
+**Returns**
+
+`Promise`< [`ClusterDefintion`](../interfaces/clusterdefintion.md) >
+
+The cluster definition for config hash
+
+**Throws**
+
+On not found config hash.
+
+An example of how to use getClusterDefinition: [getObolClusterDefinition](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+**Source**
+
+index.ts:136
+
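+For example, fetching a definition by the hash returned from `createClusterDefinition` (a `client` instance is assumed):
+
+```ts
+const definition = await client.getClusterDefinition(configHash);
+console.log(definition.threshold, definition.num_validators);
+```
+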
+***
+
+#### getClusterLock()
+
+> **getClusterLock**(`configHash`): `Promise`< [`ClusterLock`](../type-aliases/clusterlock.md) >
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ------------ | -------- | -------------------------------------------- |
+| `configHash` | `string` | The configuration hash in cluster-definition |
+
+**Returns**
+
+`Promise`< [`ClusterLock`](../type-aliases/clusterlock.md) >
+
+The matched cluster details (lock) from DB
+
+**Throws**
+
+On not found cluster definition or lock.
+
+An example of how to use getClusterLock: [getObolClusterLock](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+**Source**
+
+index.ts:152
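+
+For example, once the cluster's DKG has completed (a `client` instance and `configHash` are assumed):
+
+```ts
+const lock = await client.getClusterLock(configHash);
+console.log(lock.lock_hash, lock.distributed_validators.length);
+```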
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/enumerations/README.md b/docs/versioned_docs/version-v0.19.2/sdk/enumerations/README.md
new file mode 100644
index 0000000000..ec74a1ba13
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/enumerations/README.md
@@ -0,0 +1,2 @@
+# enumerations
+
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/enumerations/fork_mapping.md b/docs/versioned_docs/version-v0.19.2/sdk/enumerations/fork_mapping.md
new file mode 100644
index 0000000000..2af793e39d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/enumerations/fork_mapping.md
@@ -0,0 +1,10 @@
+Permitted chain IDs
+
+## Enumeration Members
+
+| Enumeration Member | Value | Description |
+| :------ | :------ | :------ |
+| `0x00000000` | `1` | Mainnet. |
+| `0x00001020` | `5` | Goerli/Prater. |
+| `0x00000064` | `100` | Gnosis Chain. |
+| `0x01017000` | `17000` | Holesky. |
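+
+An illustrative lookup mirroring the table above; a plain object is used here for clarity, while the SDK exposes the mapping as the `FORK_MAPPING` enumeration:
+
+```ts
+const forkVersionToChainId: Record<string, number> = {
+  "0x00000000": 1,     // Mainnet
+  "0x00001020": 5,     // Goerli/Prater
+  "0x00000064": 100,   // Gnosis Chain
+  "0x01017000": 17000, // Holesky
+};
+
+console.log(forkVersionToChainId["0x01017000"]); // 17000
+```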
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/functions/README.md b/docs/versioned_docs/version-v0.19.2/sdk/functions/README.md
new file mode 100644
index 0000000000..35b3fffdd7
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/functions/README.md
@@ -0,0 +1,2 @@
+# functions
+
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/functions/validateclusterlock.md b/docs/versioned_docs/version-v0.19.2/sdk/functions/validateclusterlock.md
new file mode 100644
index 0000000000..7c8314578d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/functions/validateclusterlock.md
@@ -0,0 +1,27 @@
+# validateClusterLock
+
+> **validateClusterLock**(`lock`): `Promise`< `boolean` >
+
+Verifies Cluster Lock's validity.
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ----------------------------------------------- | ------------ |
+| `lock` | [`ClusterLock`](../type-aliases/clusterlock.md) | cluster lock |
+
+### Returns
+
+`Promise`< `boolean` >
+
+boolean result to indicate if lock is valid
+
+### Throws
+
+on missing keys or values.
+
+An example of how to use validateClusterLock: [validateClusterLock](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts)
+
+### Source
+
+services.ts:13
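+
+A usage sketch, assuming a `lock` previously fetched with `getClusterLock`; the package name is an assumption:
+
+```ts
+import { validateClusterLock } from "@obolnetwork/obol-sdk";
+
+const isValid: boolean = await validateClusterLock(lock);
+console.log(`cluster lock valid: ${isValid}`);
+```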
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/index.md b/docs/versioned_docs/version-v0.19.2/sdk/index.md
new file mode 100644
index 0000000000..e94c535c82
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/index.md
@@ -0,0 +1,44 @@
+---
+hide_title: true
+---
+
+# index
+
+
+
+## Obol SDK
+
+This repo contains the Obol Software Development Kit, for creating Distributed Validators with the help of the [Obol API](https://docs.obol.tech/api).
+
+### Getting Started
+
+Checkout our [docs](https://docs.obol.tech/docs/advanced/quickstart-sdk), [examples](https://github.com/ObolNetwork/obol-sdk-examples/), and SDK [reference](https://obolnetwork.github.io/obol-packages). Further guides and walkthroughs coming soon.
+
+### Enumerations
+
+* [FORK\_MAPPING](enumerations/fork_mapping.md)
+
+### Classes
+
+* [Client](classes/client.md)
+
+### Interfaces
+
+* [ClusterDefintion](interfaces/clusterdefintion.md)
+
+### Type Aliases
+
+* [ClusterOperator](type-aliases/clusteroperator.md)
+* [OperatorPayload](type-aliases/operatorpayload.md)
+* [ClusterCreator](type-aliases/clustercreator.md)
+* [ClusterValidator](type-aliases/clustervalidator.md)
+* [ClusterPayload](type-aliases/clusterpayload.md)
+* [BuilderRegistrationMessage](type-aliases/builderregistrationmessage.md)
+* [BuilderRegistration](type-aliases/builderregistration.md)
+* [DepositData](type-aliases/depositdata.md)
+* [DistributedValidator](type-aliases/distributedvalidator.md)
+* [ClusterLock](type-aliases/clusterlock.md)
+
+### Functions
+
+* [validateClusterLock](functions/validateclusterlock.md)
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/interfaces/README.md b/docs/versioned_docs/version-v0.19.2/sdk/interfaces/README.md
new file mode 100644
index 0000000000..95109455d3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/interfaces/README.md
@@ -0,0 +1,2 @@
+# interfaces
+
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/interfaces/clusterdefintion.md b/docs/versioned_docs/version-v0.19.2/sdk/interfaces/clusterdefintion.md
new file mode 100644
index 0000000000..01dd697041
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/interfaces/clusterdefintion.md
@@ -0,0 +1,25 @@
+# ClusterDefintion
+
+Cluster definition data needed for dkg
+
+### Extends
+
+* [`ClusterPayload`](../type-aliases/clusterpayload.md)
+
+### Properties
+
+| Property | Type | Description | Inherited from |
+| ------------------ | ------------------------------------------------------------ | ---------------------------------------------------- | --------------------------- |
+| `name` | `string` | The cluster name. | `ClusterPayload.name` |
+| `operators`        | [`ClusterOperator`](../type-aliases/clusteroperator.md)\[]   | The cluster node operators' addresses.               | `ClusterPayload.operators`  |
+| `validators`       | [`ClusterValidator`](../type-aliases/clustervalidator.md)\[] | The cluster's validator information.                 | `ClusterPayload.validators` |
+| `creator` | [`ClusterCreator`](../type-aliases/clustercreator.md) | The creator of the cluster. | - |
+| `version` | `string` | The cluster configuration version. | - |
+| `dkg_algorithm` | `string` | The cluster dkg algorithm. | - |
+| `fork_version` | `string` | The cluster fork version. | - |
+| `uuid` | `string` | The cluster uuid. | - |
+| `timestamp` | `string` | The cluster creation timestamp. | - |
+| `config_hash` | `string` | The cluster configuration hash. | - |
+| `threshold` | `number` | The distributed validator threshold. | - |
+| `num_validators` | `number` | The number of distributed validators in the cluster. | - |
+| `definition_hash?` | `string` | The hash of the cluster definition. | - |
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/README.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/README.md
new file mode 100644
index 0000000000..ef07201c1b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/README.md
@@ -0,0 +1,2 @@
+# type-aliases
+
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/builderregistration.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/builderregistration.md
new file mode 100644
index 0000000000..5451854308
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/builderregistration.md
@@ -0,0 +1,16 @@
+# BuilderRegistration
+
+> **BuilderRegistration**: `Object`
+
+Pre-generated Signed Validator Builder Registration
+
+### Type declaration
+
+| Member | Type | Description |
+| ----------- | ------------------------------------------------------------- | -------------------------------------------------- |
+| `message` | [`BuilderRegistrationMessage`](builderregistrationmessage.md) | Builder registration message. |
+| `signature` | `string` | BLS signature of the builder registration message. |
+
+### Source
+
+types.ts:143
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/builderregistrationmessage.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/builderregistrationmessage.md
new file mode 100644
index 0000000000..bb2d179c3e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/builderregistrationmessage.md
@@ -0,0 +1,16 @@
+> **BuilderRegistrationMessage**: `Object`
+
+Unsigned DV Builder Registration Message
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `fee_recipient` | `string` | The DV fee recipient. |
+| `gas_limit` | `number` | Default is 30000000. |
+| `timestamp` | `number` | Timestamp when generating cluster lock file. |
+| `pubkey` | `string` | The public key of the DV. |
+
+## Source
+
+types.ts:125
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clustercreator.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clustercreator.md
new file mode 100644
index 0000000000..0d82298a15
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clustercreator.md
@@ -0,0 +1,14 @@
+> **ClusterCreator**: `Object`
+
+Cluster creator data
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `address` | `string` | The creator address. |
+| `config_signature` | `string` | The cluster configuration signature. |
+
+## Source
+
+types.ts:51
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusterlock.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusterlock.md
new file mode 100644
index 0000000000..a5849618f3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusterlock.md
@@ -0,0 +1,19 @@
+# ClusterLock
+
+> **ClusterLock**: `Object`
+
+Cluster Details after DKG is complete
+
+### Type declaration
+
+| Member | Type | Description |
+| ------------------------ | ------------------------------------------------------- | ----------------------------------------------------------- |
+| `cluster_definition` | [`ClusterDefintion`](../interfaces/clusterdefintion.md) | The cluster definition. |
+| `distributed_validators` | [`DistributedValidator`](distributedvalidator.md)\[] | The cluster distributed validators. |
+| `signature_aggregate` | `string` | The cluster bls signature aggregate. |
+| `lock_hash` | `string` | The hash of the cluster lock. |
+| `node_signatures` | `string`\[] | Node Signature for the lock hash by the node secp256k1 key. |
+
+### Source
+
+types.ts:194
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusteroperator.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusteroperator.md
new file mode 100644
index 0000000000..761521dd0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusteroperator.md
@@ -0,0 +1,18 @@
+> **ClusterOperator**: `Object`
+
+Node operator data
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `address` | `string` | The operator address. |
+| `enr` | `string` | The operator ethereum node record. |
+| `fork_version` | `string` | The cluster fork_version. |
+| `version` | `string` | The cluster version. |
+| `enr_signature` | `string` | The operator enr signature. |
+| `config_signature` | `string` | The operator configuration signature. |
+
+## Source
+
+types.ts:22
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusterpayload.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusterpayload.md
new file mode 100644
index 0000000000..29ad3b157b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clusterpayload.md
@@ -0,0 +1,17 @@
+# ClusterPayload
+
+> **ClusterPayload**: `Object`
+
+Cluster configuration
+
+### Type declaration
+
+| Member | Type | Description |
+| ------------ | -------------------------------------------- | -------------------------------------- |
+| `name` | `string` | The cluster name. |
+| `operators`  | [`ClusterOperator`](clusteroperator.md)\[]   | The cluster node operators' addresses. |
+| `validators` | [`ClusterValidator`](clustervalidator.md)\[] | The cluster's validator information.   |
+
+### Source
+
+types.ts:74
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clustervalidator.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clustervalidator.md
new file mode 100644
index 0000000000..c9048d43f4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/clustervalidator.md
@@ -0,0 +1,14 @@
+> **ClusterValidator**: `Object`
+
+Validator withdrawal configuration
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `fee_recipient_address` | `string` | The validator fee recipient address. |
+| `withdrawal_address` | `string` | The validator reward address. |
+
+## Source
+
+types.ts:62
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/depositdata.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/depositdata.md
new file mode 100644
index 0000000000..9e9c4326f2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/depositdata.md
@@ -0,0 +1,17 @@
+> **DepositData**: `Object`
+
+Required deposit data for validator activation
+
+## Type declaration
+
+| Member | Type | Description |
+| :------ | :------ | :------ |
+| `pubkey` | `string` | The public key of the distributed validator. |
+| `withdrawal_credentials` | `string` | The 0x01 withdrawal address of the DV. |
+| `amount` | `string` | 32 ethers. |
+| `deposit_data_root` | `string` | A checksum for DepositData fields. |
+| `signature` | `string` | BLS signature of the deposit message. |
+
+## Source
+
+types.ts:155
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/distributedvalidator.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/distributedvalidator.md
new file mode 100644
index 0000000000..8c4195b247
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/distributedvalidator.md
@@ -0,0 +1,18 @@
+# DistributedValidator
+
+> **DistributedValidator**: `Object`
+
+Required deposit data for validator activation
+
+### Type declaration
+
+| Member | Type | Description |
+| ------------------------ | ----------------------------------------------- | ---------------------------------------------------------------------------------- |
+| `distributed_public_key` | `string` | The public key of the distributed validator. |
+| `public_shares` | `string`\[] | The public key of the node distributed validator share. |
+| `deposit_data` | `Partial`< [`DepositData`](depositdata.md) > | The required deposit data for activating the DV. |
+| `builder_registration` | [`BuilderRegistration`](builderregistration.md) | pre-generated signed validator builder registration to be sent to builder network. |
+
+### Source
+
+types.ts:176
diff --git a/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/operatorpayload.md b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/operatorpayload.md
new file mode 100644
index 0000000000..b922105da2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sdk/type-aliases/operatorpayload.md
@@ -0,0 +1,9 @@
+# OperatorPayload
+
+> **OperatorPayload**: `Partial`< [`ClusterOperator`](clusteroperator.md) > & `Required`< `Pick`< [`ClusterOperator`](clusteroperator.md), `"enr"` | `"version"` > >
+
+A partial view of `ClusterOperator` with `enr` and `version` as required properties.
+
+### Source
+
+types.ts:46
diff --git a/docs/versioned_docs/version-v0.19.2/sec/README.md b/docs/versioned_docs/version-v0.19.2/sec/README.md
new file mode 100644
index 0000000000..aeb3b02cce
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sec/README.md
@@ -0,0 +1,2 @@
+# sec
+
diff --git a/docs/versioned_docs/version-v0.19.2/sec/bug-bounty.md b/docs/versioned_docs/version-v0.19.2/sec/bug-bounty.md
new file mode 100644
index 0000000000..cd7ec0c909
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sec/bug-bounty.md
@@ -0,0 +1,180 @@
+---
+sidebar_position: 2
+description: Bug Bounty Policy
+---
+
+# Obol Bug Bounty Program
+
+## Overview
+
+At Obol Labs, we prioritize the security of our distributed validator software and related services. Our Bug Bounty Program is designed to encourage and reward security researchers for identifying and reporting potential vulnerabilities. This initiative supports our commitment to the security and integrity of our products.
+
+## Participant Eligibility
+
+Participants must meet the following criteria to be eligible for the Bug Bounty Program:
+
+- Not reside in countries where participation in such programs is prohibited.
+- Be at least 14 years of age and possess the legal capacity to participate.
+- Have received consent from your employer, if applicable.
+- Not have been employed or contracted by Obol Labs, nor be an immediate family member of an employee, within the last 12 months.
+
+## Scope of the Program
+
+Eligible submissions must involve software and services developed by Obol, specifically under the domains of:
+
+- Charon the DV Middleware Client
+- Obol DV Launchpad and Public API
+- Obol Splits Contracts
+- Obol Labs hosted Public Relay Infrastructure
+
+Submissions related to the following are considered out of scope:
+
+- Social engineering
+- Rate Limiting (Non-critical issues)
+- Physical security breaches
+- Non-security related UX/UI issues
+- Third-party application vulnerabilities
+- The [Obol](https://obol.tech) static website or the Obol infrastructure
+- The operational security of node operators running or using Obol software
+
+## Program Rules
+
+- Submitted bugs must not have been previously disclosed publicly.
+- Only first reports of vulnerabilities will be considered for rewards; previously reported or known vulnerabilities are ineligible.
+- The severity of the vulnerability, as assessed by our team, will determine the reward amount. See the "Rewards" section for details.
+- Submissions must include a reproducible proof of concept.
+- The Obol security team reserves the right to determine the eligibility and reward for each submission.
+- Program terms may be updated at Obol's discretion.
+- Valid bugs may be disclosed to partner protocols within the Obol ecosystem to enhance overall security.
+
+## Rewards Structure
+
+Rewards are issued based on the severity and impact of the disclosed vulnerability, determined at the discretion of Obol Labs.
+
+### Critical Vulnerabilities: Up to $100,000
+
+A Critical-level vulnerability is one that has a severe impact on the security of the in-production system from an unauthenticated external attacker and requires immediate attention to fix. It is highly likely to have a material impact on validator private key security and/or to cause loss of funds.
+
+- High impact, high likelihood
+
+Impacts:
+
+- Attacker that is not a member of the cluster can successfully exfiltrate BLS (not K1) private key material from a threshold number of operators in the cluster.
+- Attacker that is not a member of the cluster can achieve the production of arbitrary BLS signatures from a threshold number of operators in the cluster.
+- Attacker can craft a malicious cluster invite capable of subverting even careful review of all data to steal funds during a deposit.
+- Direct theft of any user funds, whether at-rest or in-motion, other than unclaimed yield
+- Direct loss of funds
+- Permanent freezing of funds (fix requires hard fork)
+- Network not being able to confirm new transactions (Total network shutdown)
+- Protocol insolvency
+
+### High Vulnerabilities: Up to $10,000
+
+For significant security risks that impact the system from a position of low trust and require significant effort to fix.
+
+- High impact, medium likelihood
+- Medium impact, high likelihood
+
+Impacts:
+
+- Attacker that is not a member of the cluster can successfully partition the cluster and keep the cluster offline indefinitely.
+- Attacker that is not a member of the cluster can exfiltrate charon ENR private keys.
+- Attacker that is not a member of the cluster can destroy funds but cannot steal them.
+- Unintended chain split (Network partition)
+- Temporary freezing of network transactions by delaying one block by 500% or more of the average block time of the preceding 24 hours beyond standard difficulty adjustments
+- RPC API crash affecting projects with greater than or equal to 25% of the market capitalization on top of the respective layer
+- Theft of unclaimed yield
+- Theft of unclaimed royalties
+- Permanent freezing of unclaimed yield
+- Permanent freezing of unclaimed royalties
+- Temporary freezing of funds
+- Retrieve sensitive data/files from a running server:
+ - blockchain keys
+ - database passwords
+ - (this does not include non-sensitive environment variables, open source code, or usernames)
+- Taking state-modifying authenticated actions (with or without blockchain state interaction) on behalf of other users without any interaction by that user, such as:
+ - Changing cluster information
+ - Withdrawals
+ - Making trades
+
+### Medium Vulnerabilities: Up to $2,500
+
+For vulnerabilities with a moderate impact, affecting system availability or integrity.
+
+- High impact, low likelihood
+- Medium impact, medium likelihood
+- Low impact, high likelihood
+
+Impacts:
+
+- Attacker that is a member of a cluster can exfiltrate K1 key material from another member.
+- Attacker that is a member of the cluster can mount a denial-of-service attack against enough peers in the cluster to prevent operation of the validator(s)
+- Attacker that is a member of the cluster can bias the protocol in a manner to control the majority of block proposal opportunities.
+- Attacker can get a DV Launchpad user to inadvertently interact with a smart contract that is not a part of normal operation of the launchpad.
+- Increasing network processing node resource consumption by at least 30% without brute force actions, compared to the preceding 24 hours
+- Shutdown of greater than or equal to 30% of network processing nodes without brute force actions, but does not shut down the network
+- Charon cluster identity private key theft
+- A rogue node operator penetrating and compromising other nodes to disturb the cluster’s lifecycle
+- A Charon public relay node being compromised, leading to cluster topologies being discovered and disrupted
+- Smart contract unable to operate due to lack of token funds
+- Block stuffing
+- Griefing (e.g. no profit motive for an attacker, but damage to the users or the protocol)
+- Theft of gas
+- Unbounded gas consumption
+- Redirecting users to malicious websites (Open Redirect)
+
+### Low Vulnerabilities: Up to $500
+
+For vulnerabilities with minimal impact, unlikely to significantly affect system operations.
+
+- Low impact, medium likelihood
+- Medium impact, low likelihood
+
+Impacts:
+
+- Attacker can sometimes put a charon node in a state that causes it to drop one out of every one hundred attestations made by a validator
+- Attacker can display bad data on a non-interactive part of the launchpad.
+- Contract fails to deliver promised returns, but doesn't lose value
+- Shutdown of greater than or equal to 10% but less than 30% of network processing nodes without brute force actions, but does not shut down the network
+- Changing details of other users (including modifying browser local storage) without already-connected wallet interaction and with significant user interaction such as:
+ - Iframing leading to modifying the backend/browser state (must demonstrate impact with PoC)
+- Taking over broken or expired outgoing links such as:
+ - Social media handles, etc.
+- Temporarily preventing a user from accessing the target site, such as:
+ - Locking up the victim from login
+ - Cookie bombing, etc.
+
+Rewards may be issued as cash, merchandise, or other forms of recognition, at Obol's discretion. Only one reward will be granted per unique vulnerability.
+
+## The following activities are prohibited by this bug bounty program
+
+- Any testing on mainnet or public testnet deployed code; all testing should be done on local-forks of either public testnet or mainnet
+- Any testing with pricing oracles or third-party smart contracts
+- Attempting phishing or other social engineering attacks against our employees and/or customers
+- Any testing with third-party systems and applications (e.g. browser extensions) as well as websites (e.g. SSO providers, advertising networks)
+- Any denial of service attacks that are executed against project assets
+- Automated testing of services that generates significant amounts of traffic
+- Public disclosure of an unpatched vulnerability in an embargoed bounty
+
+## Submission process
+
+To report a vulnerability, please contact us at security@obol.tech with:
+
+- A detailed description of the vulnerability and its potential impact.
+- Steps to reproduce the issue.
+- Any relevant proof of concept code, screenshots, or documentation.
+- Your contact information.
+
+Incomplete reports may not be eligible for rewards.
+
+## Disclosure and Confidentiality
+
+Obol Labs will disclose vulnerabilities and the identity of the researcher (with consent) after remediation. Researchers are required to maintain confidentiality until official disclosure by Obol Labs.
+
+## Legal and Ethical Compliance
+
+Participants must adhere to all relevant laws and regulations. Obol Labs will not pursue legal action against researchers reporting vulnerabilities in good faith, but reserves the right to respond to violations of this policy.
+
+## Non-Disclosure Agreement (NDA)
+
+Participants may be required to sign an NDA for access to certain proprietary information during their research.
diff --git a/docs/versioned_docs/version-v0.19.2/sec/contact.md b/docs/versioned_docs/version-v0.19.2/sec/contact.md
new file mode 100644
index 0000000000..e66e1663e2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sec/contact.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 3
+description: Security details for the Obol Network
+---
+
+# Contacts
+
+Please email security@obol.tech to report a security incident, vulnerability, bug or inquire about Obol's security.
+
+Also, visit the [obol security repo](https://github.com/ObolNetwork/obol-security) for more details.
diff --git a/docs/versioned_docs/version-v0.19.2/sec/ev-assessment.md b/docs/versioned_docs/version-v0.19.2/sec/ev-assessment.md
new file mode 100644
index 0000000000..a8ce756359
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sec/ev-assessment.md
@@ -0,0 +1,295 @@
+---
+sidebar_position: 4
+description: Software Development Security Assessment
+---
+
+# ev-assessment
+
+## Software Development at Obol
+
+When hardening a project's technical security, team members' operational security and the security of the team's software development practices are some of the most critical areas to secure. Many hacks and compromises in the space to date have been the result of these attack vectors rather than exploits of the software itself.
+
+With this in mind, in January 2023 the Obol team retained the expertise of Ethereal Ventures' security researcher Alex Wade to interview key stakeholders and produce a report on the team's Software Development Lifecycle.
+
+The page below is the result of that report. Some sensitive information has been redacted, and responses to the recommendations have been added, detailing the actions the Obol team has taken to mitigate what was highlighted.
+
+## Obol Report
+
+**Prepared by: Alex Wade (Ethereal Ventures)** **Date: Jan 2023**
+
+Over the past month, I worked with Obol to review their software development practices in preparation for their upcoming security audits. My goals were to review and analyze:
+
+* Software development processes
+* Vulnerability disclosure and escalation procedures
+* Key personnel risk
+
+The information in this report was collected through a series of interviews with Obol’s project leads.
+
+### Contents:
+
+* Background Info
+* Analysis - Cluster Setup and DKG
+ * Key Risks
+ * Potential Attack Scenarios
+* Recommendations
+ * R1: Users should deploy cluster contracts through a known on-chain entry point
+ * R2: Users should deposit to the beacon chain through a pool contract
+ * R3: Raise the barrier to entry to push an update to the Launchpad
+* Additional Notes
+ * Vulnerability Disclosure
+ * Key Personnel Risk
+
+### Background Info
+
+**Each team lead was asked to describe Obol in terms of its goals, objectives, and key features.**
+
+#### What is Obol?
+
+Obol builds DVT (Distributed Validator Technology) for Ethereum.
+
+#### What is Obol’s goal?
+
+Obol’s goal is to solve a classic distributed systems problem: uptime.
+
+Rather than requiring Ethereum validators to stake on their own, Obol allows groups of operators to stake together. Using Obol, a single validator can be run cooperatively by multiple people across multiple machines.
+
+In theory, this architecture provides validators with some redundancy against common issues: server and power outages, client failures, and more.
+
+#### What are Obol’s objectives?
+
+Obol’s business objective is to provide base-layer infrastructure to support a distributed validator ecosystem. As Obol provides base layer technology, other companies and projects will build on top of Obol.
+
+Obol’s business model is to eventually capture a portion of the revenue generated by validators that use Obol infrastructure.
+
+#### What is Obol’s product?
+
+Obol’s product consists of three main components, each run by its own team: a webapp, a client, and smart contracts.
+
+* [DV Launchpad](../dvl/intro.md): A webapp to create and manage distributed validators.
+* [Charon](../charon/intro.md): A middleware client that enables operators to run distributed validators.
+* [Solidity](../sc/introducing-obol-splits.md): Withdrawal and fee recipient contracts for use with distributed validators.
+
+### Analysis - Cluster Setup and DKG
+
+The Launchpad guides users through the process of creating a cluster, which defines important parameters like the validator’s fee recipient and withdrawal addresses, as well as the identities of the operators in the cluster. In order to ensure their cluster configuration is correct, users need to rely on a few different factors.
+
+**First, users need to trust the Charon client** to perform the DKG correctly, and validate things like:
+
+* Config file is well-formed and is using the expected version
+* Signatures and ENRs from other operators are valid
+* Cluster config hash is correct
+* DKG succeeds in producing valid signatures
+* Deposit data is well-formed and is correctly generated from the cluster config and DKG.
+
+However, Charon’s validation is limited to the digital: signature checks, cluster file syntax, etc. It does NOT help would-be operators determine whether the other operators listed in their cluster definition are the real people with whom they intend to start a DVT cluster. So -
+
+**Second, users need to come to social consensus with fellow operators.** While the cluster is being set up, it’s important that each operator is an active participant. Each member of the group must validate and confirm that:
+
+* the cluster file correctly reflects their address and node identity, and reflects the information they received from fellow operators
+* the cluster parameters are expected – namely, the number of validators and signing threshold
+
+**Finally, users need to perform independent validation.** Each user should perform their own validation of the cluster definition:
+
+* Is my information correct? (address and ENR)
+* Does the information I received from the group match the cluster definition?
+* Is the ETH2 deposit data correct, and does it match the information in the cluster definition?
+* Are the withdrawal and fee recipient addresses correct?
+
+These final steps are potentially the most difficult, and may require significant technical knowledge.
+
+### Key Risks
+
+#### 1. Validation of Contract Deployment and Deposit Data Relies Heavily on Launchpad
+
+From my interviews, it seems that the user deploys both the withdrawal and fee recipient contracts through the Launchpad.
+
+What I’m picturing is that during the first parts of the cluster setup process, the user is prompted to sign one or more transactions deploying the withdrawal and fee recipient contracts to mainnet. The Launchpad apparently uses an npm package to deploy these contracts: `0xsplits/splits-sdk`, which I assume provides either JSON artifacts or a factory address on chain. The Launchpad then places the deployed contracts into the cluster config file, and the process moves on.
+
+If an attacker has published a malicious update to the Launchpad (or compromised an underlying dependency), the contracts deployed by the Launchpad may be malicious. The questions I’d like to pose are:
+
+* How does the group creator know the Launchpad deployed the correct contracts?
+* How does the rest of the group know the creator deployed the contracts through the Launchpad?
+
+My understanding is that this ultimately comes down to the independent verification that each of the group’s members performs during and after the cluster’s setup phase.
+
+At its worst, this verification might consist solely of the cluster creator confirming to the others that, yes, those addresses match the contracts I deployed through the Launchpad.
+
+A more sophisticated user might verify that not only do the addresses match, but the deployed source code looks roughly correct. However, this step is far out of the realm of many would-be validators. To be really certain that the source code is correct would require auditor-level knowledge.
+
+The risk is that:
+
+* the deployed contracts are NOT the correctly-configured 0xsplits waterfall/fee splitter contracts
+* most users are ill-equipped to make this determination themselves
+* we don’t want to trust the Launchpad as the single source of truth
+
+In the worst case, the cluster may end up depositing with malicious withdrawal or fee recipient credentials. If unnoticed, this may net an attacker the entire withdrawal amount, once the cluster exits.
+
+Note that the same (or similar) risks apply to validation of deposit data, which has the potential to be similarly difficult. I’m a little fuzzy on which part of the Obol stack actually generates the deposit data / deposit transaction, so I can’t speak to this as much. However, I think the mitigation for both of these is roughly the same - read on!
+
+**Mitigation:**
+
+It’s certainly a good idea to make it harder to deploy malicious updates to the Launchpad, but this may not be entirely possible. A higher-yield strategy may be to educate and empower users to perform independent validation of the DVT setup process - without relying on information fed to them by Charon and the Launchpad.
+
+I’ve outlined some ideas for this in #R1 and #R2.
+
+#### 2. Social Consensus, aka “Who sends the 32 ETH?”
+
+Depositing to the beacon chain requires a total of 32 ETH. Obol’s product allows multiple operators to act as a single validator together, which means would-be operators need to agree on how to fund the 32 ETH needed to initiate the deposit.
+
+It is my understanding that currently, this process comes down to trust and loose social consensus. Essentially, the group needs to decide who chips in what amount together, and then trust someone to take the 32 ETH and complete the deposit process correctly (without running away with the money).
+
+Granted, the initial launch of Obol will be open only to a small group of people as the kinks in the system get worked out - but in preparation for an eventual public release, the deposit process needs to be much simpler and far less reliant on trust.
+
+Mitigation: See #R2.
+
+**Potential Attack Scenarios**
+
+During the interview process, I learned that each of Obol’s core components has its own GitHub repo, and that each repo has roughly the same structure in terms of organization and security policies. For each repository:
+
+* There are two overall GitHub organization administrators, and a number of people have administrative control over individual repositories.
+* In order to merge PRs, the submitter needs:
+ * CI/CD checks to pass
+ * Review from one person (anyone at Obol)
+
+Of course, admin access also means the ability to change these settings: repo admins could theoretically merge PRs without checks passing and without review or approval, and organization admins control the full GitHub organization.
+
+The following scenarios describe the impact an attack may have.
+
+**1. Publishing a malicious version of the Launchpad, or compromising an underlying dependency**
+
+* Reward: High
+* Difficulty: Medium-Low
+
+As described in Key Risks, publishing a malicious version of the Launchpad has the potential to net the largest payout for an attacker. By tampering with the cluster’s deposit data or withdrawal/fee recipient contracts, an attacker stands to gain 32 ETH or more per compromised cluster.
+
+During the interviews, I learned that merging PRs to main in the Launchpad repo triggers an action that publishes to the site. Given that merges can be performed by an authorized Obol developer, this makes the developers prime targets for social engineering attacks.
+
+Additionally, the use of the `0xsplits/splits-sdk` NPM package to aid in contract deployment may represent a supply chain attack vector. It may be that this applies to other Launchpad dependencies as well.
+
+In any case, with a fairly large surface area and high potential reward, this scenario represents a credible risk to users during the cluster setup and DKG process.
+
+See #R1, #R2, and #R3 for some ideas to address this scenario.
+
+**2. Publishing a malicious version of Charon to new operators**
+
+* Reward: Medium
+* Difficulty: High
+
+During the cluster setup process, Charon is responsible both for validating the cluster configuration produced by the Launchpad and for performing a DKG ceremony between a group’s operators.
+
+If new operators use a malicious version of Charon to perform this process, it may be possible to tamper with both of these responsibilities, or even get access to part or all of the underlying validator private key created during DKG.
+
+However, the difficulty of this type of attack seems quite high. An attacker would first need to carry out the same type of social engineering attack described in scenario 1 to publish and tag a new version of Charon. Crucially, users would also need to install the malicious version - unlike the Launchpad, an update here is not pushed directly to users.
+
+As long as Obol is clear and consistent with communication around releases and versioning, it seems unlikely that a user would both install a brand-new, unannounced release, and finish the cluster setup process before being warned about the attack.
+
+**3. Publishing a malicious version of Charon to existing validators**
+
+* Reward: Low
+* Difficulty: High
+
+Once a distributed validator is up and running, much of the danger has passed. As a middleware client, Charon sits between a validator’s consensus client and validator client. As such, it shouldn’t have direct access to a validator’s withdrawal keys or signing keys.
+
+If existing validators update to a malicious version of Charon, the worst thing an attacker could theoretically do is likely to get the validator slashed. However, assuming Charon has no access to any private keys, this would be predicated on one or more validator clients connected to Charon also failing to prevent the signing of a slashable message. In practice, a compromised Charon client is more likely to pose liveness risks than safety risks.
+
+This is not likely to be particularly motivating to potential attackers - and paired with the high difficulty described above, this scenario seems unlikely to cause significant issues.
+
+### Recommendations
+
+#### R1: Users should deploy cluster contracts through a known on-chain entry point
+
+During setup, users should only sign one transaction via the Launchpad - to a contract located at an Obol-held ENS (e.g. `launchpad.obol.eth`). This contract should deploy everything needed for the cluster to operate, like the withdrawal and fee recipient contracts. It should also initialize them with the provided reward split configuration (and any other config needed).
+
+Compared to using an NPM library to supply a factory address or JSON artifacts, this approach has the benefit of being both:
+
+* **Harder to compromise:** as long as the user knows `launchpad.obol.eth`, it’s pretty difficult to trick them into deploying the wrong contracts.
+* **Easier to validate** for non-technical users: the Obol contract can be queried for deployment information via Etherscan, as sketched below.
+
+
+
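+As a rough illustration of what that query could look like, here is a minimal sketch of a deployment registry. All contract, struct, and function names below are hypothetical and do not reflect Obol's actual interface; the point is only that a contract at a well-known ENS name can store each cluster's configuration where any operator can read it from Etherscan's "Read Contract" tab.
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.19;
+
+/// Hypothetical sketch of an on-chain deployment registry for cluster contracts.
+contract ClusterDeploymentRegistry {
+    struct ClusterConfig {
+        address withdrawalRecipient;
+        address feeRecipient;
+        bytes32 rewardSplitHash; // hash of the agreed reward split configuration
+    }
+
+    mapping(address => ClusterConfig) private configs;
+
+    event ClusterDeployed(address indexed creator, address withdrawalRecipient, address feeRecipient);
+
+    /// In a real system this would call the factory contracts; here it only
+    /// records the result under the creator's address.
+    function registerCluster(address withdrawalRecipient, address feeRecipient, bytes32 rewardSplitHash) external {
+        configs[msg.sender] = ClusterConfig(withdrawalRecipient, feeRecipient, rewardSplitHash);
+        emit ClusterDeployed(msg.sender, withdrawalRecipient, feeRecipient);
+    }
+
+    /// Any operator can paste the creator's address into Etherscan's
+    /// "Read Contract" tab and compare the result to the cluster definition.
+    function getClusterConfig(address creator) external view returns (ClusterConfig memory) {
+        return configs[creator];
+    }
+}
+```
+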
+Note that in order for this to be successful, Obol needs to provide detailed steps for users to perform manual validation of their cluster setups. Users should be able to treat this as a “checklist”:
+
+* Did I send a transaction to `launchpad.obol.eth`?
+* Can I use the ENS name to locate and query the deployment manager contract on etherscan?
+* If I input my address, does etherscan report the configuration I was expecting?
+ * withdrawal address matches
+ * fee recipient address matches
+ * reward split configuration matches
+
+As long as these steps are plastered all over the place (i.e. not just on the Launchpad) and Obol puts in effort to educate users about the process, this approach should allow users to validate cluster configurations themselves - regardless of Launchpad or NPM package compromise.
+
+**Obol’s response:**
+
+Roadmapped: add the ability for the OWR factory to claim and transfer its reverse resolution ownership.
+
+#### R2: Users should deposit to the beacon chain through a pool contract
+
+Once cluster setup and DKG is complete, a group of operators should deposit to the beacon chain by way of a pool contract. The pool contract should:
+
+* Accept ETH from any of the group’s operators
+* Stop accepting ETH when the contract’s balance hits (32 ETH \* number of validators)
+* Make it easy to pull the trigger and deposit to the beacon chain once the critical balance has been reached
+* Offer all of the group’s operators a “bail” option at any point before the deposit is triggered
+
+Ideally, this contract is deployed during the setup process described in #R1, as another step toward allowing users to perform independent validation of the process.
+
+Rather than relying on social consensus, this should:
+
+* Allow operators to fund the validator without needing to trust any single party
+* Make it harder to mess up the deposit or send funds to some malicious actor, as the pool contract should know what the beacon deposit contract address is
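+
+A minimal sketch of such a pool is below, assuming a single 32 ETH validator; the contract and function names are illustrative, multi-validator accounting and access control are omitted, and the deposit parameters would come from the cluster's `deposit-data.json`:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.19;
+
+/// Interface of the canonical beacon chain deposit contract (for illustration).
+interface IDepositContract {
+    function deposit(
+        bytes calldata pubkey,
+        bytes calldata withdrawal_credentials,
+        bytes calldata signature,
+        bytes32 deposit_data_root
+    ) external payable;
+}
+
+/// Illustrative sketch only: collects exactly 32 ETH from the cluster's operators,
+/// lets anyone trigger the deposit once the target is reached, and lets
+/// contributors bail out beforehand.
+contract DepositPoolSketch {
+    uint256 public constant TARGET = 32 ether;
+    IDepositContract public immutable depositContract;
+    mapping(address => uint256) public contributions;
+    uint256 public total;
+    bool public deposited;
+
+    constructor(IDepositContract _depositContract) {
+        depositContract = _depositContract;
+    }
+
+    /// Accept ETH until the 32 ETH target is reached.
+    function contribute() external payable {
+        require(!deposited && total + msg.value <= TARGET, "pool full or already deposited");
+        contributions[msg.sender] += msg.value;
+        total += msg.value;
+    }
+
+    /// Any operator can bail out before the deposit is triggered.
+    function bail() external {
+        require(!deposited, "already deposited");
+        uint256 amount = contributions[msg.sender];
+        contributions[msg.sender] = 0;
+        total -= amount;
+        (bool ok, ) = msg.sender.call{value: amount}("");
+        require(ok, "refund failed");
+    }
+
+    /// Once the target is reached, forward the balance to the known deposit contract.
+    function triggerDeposit(
+        bytes calldata pubkey,
+        bytes calldata withdrawalCredentials,
+        bytes calldata signature,
+        bytes32 depositDataRoot
+    ) external {
+        require(!deposited && total == TARGET, "target not reached");
+        deposited = true;
+        depositContract.deposit{value: TARGET}(pubkey, withdrawalCredentials, signature, depositDataRoot);
+    }
+}
+```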
+
+**Obol’s response:**
+
+Roadmapped: give the operators a streamlined, secure way to deposit Ether (ETH) to the beacon chain collectively, satisfying specific conditions:
+
+* Pooling from multiple operators.
+* Ceasing to accept ETH once a critical balance is reached, defined by 32 ETH multiplied by the number of validators.
+* Facilitating an immediate deposit to the beacon chain once the target balance is reached.
+* Providing a ‘bail-out’ option for operators to withdraw their contribution before initiating the group’s deposit to the beacon chain.
+
+#### R3: Raise the barrier to entry to push an update to the Launchpad
+
+Currently, any repo admin can publish an update to the Launchpad unchecked.
+
+Given the risks and scenarios outlined above, consider amending this process so that the sole compromise of either admin is not sufficient to publish to the Launchpad site. It may be worthwhile to require both admins to approve publishing to the site.
+
+Along with simply adding additional prerequisites to publish an update to the Launchpad, ensure that both admins have enabled some level of multi-factor authentication on their GitHub accounts.
+
+**Obol’s response:**
+
+We removed individuals’ ability to merge changes without review, enforced MFA and signed commits, and employed the Bulldozer bot to make sure a PR gets merged automatically when all checks pass.
+
+### Additional Notes
+
+#### Vulnerability Disclosure
+
+During the interviews, I got some conflicting information when asking about Obol’s vulnerability disclosure process.
+
+Some interviewees directed me towards Obol’s security repo, which details security contacts: [ObolNetwork/obol-security](https://github.com/ObolNetwork/obol-security), while some answered that disclosure should happen primarily through Immunefi. While these may both be part of the correct answer, it seems that Obol’s disclosure process may not be as well-defined as it could be. Here are some notes:
+
+* I wasn’t able to find information about Obol on Immunefi. I also didn’t find any reference to a security contact or disclosure policy in Obol’s docs.
+* When looking into the obol security repo, I noticed broken links in a few of the sections in README.md and SECURITY.md:
+ * Security policy
+ * More Information
+* Some of the text and links in the Bug Bounty Program don’t seem to apply to Obol (see text referring to Vaults and Strategies).
+* The Receiving Disclosures section does not include a public key with which submitters can encrypt vulnerability information.
+
+It’s my understanding that these items are probably lower priority due to Obol’s initial closed launch - but these should be squared away soon!
+
+**Obol’s response:**
+
+We addressed all of the concerns in the obol-security repository:
+
+1. The security policy link has been fixed
+2. The Bug Bounty program received an overhaul and clearly states rewards, eligibility, and scope
+3. We list two GPG public keys with which submitters can encrypt vulnerability reports.
+
+We are actively working towards integrating Immunefi in our security pipeline.
+
+#### Key Personnel Risk
+
+A final section on the specifics of key personnel risk faced by Obol has been redacted from the original report. Particular areas of control highlighted were GitHub org ownership and domain name control.
+
+**Obol’s response:**
+
+These risks have been mitigated by adding an extra admin to the GitHub org, and by setting up a second DNS stack in case the primary one fails, along with general OpSec improvements.
diff --git a/docs/versioned_docs/version-v0.19.2/sec/overview.md b/docs/versioned_docs/version-v0.19.2/sec/overview.md
new file mode 100644
index 0000000000..f52f8550ca
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sec/overview.md
@@ -0,0 +1,33 @@
+---
+sidebar_position: 1
+description: Security Overview
+---
+
+# Overview
+
+This page serves as an overview of the Obol Network from a security point of view.
+
+This page is updated quarterly. The last update was on 2023-10-01.
+
+## Table of Contents
+
+1. [List of Security Audits and Assessments](overview.md#list-of-security-audits-and-assessments)
+2. [Security Focused Documents](overview.md#security-focused-documents)
+3. [Bug Bounty Details](bug-bounty.md)
+
+## List of Security Audits and Assessments
+
+The completed audits reports are linked [here](https://github.com/ObolNetwork/obol-security/tree/main/audits).
+
+* A review of Obol Labs [development processes](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/sec/ev-assessment/README.md) by [Ethereal Ventures](https://www.etherealventures.com/).
+* A [security assessment](https://github.com/ObolNetwork/obol-security/blob/f9d7b0ad0bb8897f74ccb34cd4bd83012ad1d2b5/audits/Sigma_Prime_Obol_Network_Charon_Security_Assessment_Report_v2_1.pdf) of Charon by [Sigma Prime](https://sigmaprime.io/) resulting in version [`v0.16.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.16.0).
+* A second [assessment of Charon](https://obol.tech/charon_quantstamp_assessment.pdf) by [QuantStamp](https://quantstamp.com/) resulting in version [`v0.19.1`](https://github.com/ObolNetwork/charon/releases/tag/v0.19.1).
+* A [solidity audit](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/sec/smart_contract_audit/README.md) of the Obol Splits contracts by [Zach Obront](https://zachobront.com/).
+
+## Security Focused Documents
+
+* A [threat model](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/sec/threat_model/README.md) for a DV middleware client like charon.
+
+## Bug Bounty
+
+Information related to disclosing bugs and vulnerabilities to Obol can be found on [the next page](bug-bounty.md).
diff --git a/docs/versioned_docs/version-v0.19.2/sec/smart_contract_audit.md b/docs/versioned_docs/version-v0.19.2/sec/smart_contract_audit.md
new file mode 100644
index 0000000000..5f079f2997
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sec/smart_contract_audit.md
@@ -0,0 +1,477 @@
+---
+sidebar_position: 5
+description: Smart Contract Audit
+---
+
+# Smart Contract Audit
+
+| | |
+| ------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+|  | Obol Audit Report<br/>Obol Manager Contracts<br/>Prepared by: Zach Obront, Independent Security Researcher<br/>Date: Sept 18 to 22, 2023 |
+
+## About **Obol**
+
+The Obol Network is an ecosystem for trust minimized staking that enables people to create, test, run & co-ordinate distributed validators.
+
+The Obol Manager contracts are responsible for distributing validator rewards and withdrawals among the validator and node operators involved in a distributed validator.
+
+## About **zachobront**
+
+Zach Obront is an independent smart contract security researcher. He serves as a Lead Senior Watson at Sherlock, a Security Researcher at Spearbit, and has identified multiple critical severity bugs in the wild, including in a Top 5 Protocol on Immunefi. You can say hi on Twitter at [@zachobront](http://twitter.com/zachobront).
+
+## Summary & Scope
+
+The [ObolNetwork/obol-manager-contracts](https://github.com/ObolNetwork/obol-manager-contracts/) repository was audited at commit [50ce277919723c80b96f6353fa8d1f8facda6e0e](https://github.com/ObolNetwork/obol-manager-contracts/tree/50ce277919723c80b96f6353fa8d1f8facda6e0e).
+
+The following contracts were in scope:
+
+* src/controllers/ImmutableSplitController.sol
+* src/controllers/ImmutableSplitControllerFactory.sol
+* src/lido/LidoSplit.sol
+* src/lido/LidoSplitFactory.sol
+* src/owr/OptimisticWithdrawalRecipient.sol
+* src/owr/OptimisticWithdrawalRecipientFactory.sol
+
+After completion of the fixes, the [2f4f059bfd145f5f05d794948c918d65d222c3a9](https://github.com/ObolNetwork/obol-manager-contracts/tree/2f4f059bfd145f5f05d794948c918d65d222c3a9) commit was reviewed. After this review, the updated Lido fee share system in [PR #96](https://github.com/ObolNetwork/obol-manager-contracts/pull/96/files) (at commit [fd244a05f964617707b0a40ebb11b523bbd683b8](https://github.com/ObolNetwork/obol-splits/pull/96/commits/fd244a05f964617707b0a40ebb11b523bbd683b8)) was reviewed.
+
+## Summary of Findings
+
+| Identifier | Title | Severity | Fixed |
+| :-----------------------------------------------------------------------------------------------------------------------: | -------------------------------------------------------------------------------------- | :-----------: | :---: |
+| [M-01](smart_contract_audit.md#m-01-future-fees-may-be-skirted-by-setting-a-non-eth-reward-token) | Future fees may be skirted by setting a non-ETH reward token | Medium | ✓ |
+| [M-02](smart_contract_audit.md#m-02-splits-with-256-or-more-node-operators-will-not-be-able-to-switch-on-fees) | Splits with 256 or more node operators will not be able to switch on fees | Medium | ✓ |
+| [M-03](smart_contract_audit.md#m-03-in-a-mass-slashing-event-node-operators-are-incentivized-to-get-slashed) | In a mass slashing event, node operators are incentivized to get slashed | Medium | |
+| [L-01](smart_contract_audit.md#l-01-obol-fees-will-be-applied-retroactively-to-all-non-distributed-funds-in-the-splitter) | Obol fees will be applied retroactively to all non-distributed funds in the Splitter | Low | ✓ |
+| [L-02](smart_contract_audit.md#l-02-if-owr-is-used-with-rebase-tokens-and-theres-a-negative-rebase-principal-can-be-lost) | If OWR is used with rebase tokens and there's a negative rebase, principal can be lost | Low | ✓ |
+| [L-03](smart_contract_audit.md#l-03-lidosplit-can-receive-eth-which-will-be-locked-in-contract) | LidoSplit can receive ETH, which will be locked in contract | Low | ✓ |
+| [L-04](smart_contract_audit.md#l-04-upgrade-to-latest-version-of-solady-to-fix-libclone-bug) | Upgrade to latest version of Solady to fix LibClone bug | Low | ✓ |
+| [G-01](smart_contract_audit.md#g-01-steth-and-wsteth-addresses-can-be-saved-on-implementation-to-save-gas) | stETH and wstETH addresses can be saved on implementation to save gas | Gas | ✓ |
+| [G-02](smart_contract_audit.md#g-02-owr-can-be-simplified-and-save-gas-by-not-tracking-distributedfunds) | OWR can be simplified and save gas by not tracking distributedFunds | Gas | ✓ |
+| [I-01](smart_contract_audit.md#i-01-strong-trust-assumptions-between-validators-and-node-operators) | Strong trust assumptions between validators and node operators | Informational | |
+| [I-02](smart_contract_audit.md#i-02-provide-node-operator-checklist-to-validate-setup) | Provide node operator checklist to validate setup | Informational | |
+
+## Detailed Findings
+
+### \[M-01] Future fees may be skirted by setting a non-ETH reward token
+
+Fees are planned to be implemented on the `rewardRecipient` splitter by updating to a new fee structure using the `ImmutableSplitController`.
+
+It is assumed that all rewards will flow through the splitter, because (a) all distributed rewards less than 16 ETH are sent to the `rewardRecipient`, and (b) even if a team waited for rewards to be greater than 16 ETH, rewards sent to the `principalRecipient` are capped at the `amountOfPrincipalStake`.
+
+This creates a fairly strong guarantee that reward funds will flow to the `rewardRecipient`. Even if a user were to set their `amountOfPrincipalStake` high enough that the `principalRecipient` could receive unlimited funds, the Obol team could call `distributeFunds()` when the balance got near 16 ETH to ensure fees were paid.
+
+However, if the user selects a non-ETH token, all ETH will be withdrawable only through the `recoverFunds()` function. If they set up a split with their node operators as their `recoveryAddress`, all funds will be withdrawable via `recoverFunds()` without ever touching the `rewardRecipient` or paying a fee.
+
+#### Recommendation
+
+I would recommend removing the ability to use a non-ETH token from the `OptimisticWithdrawalRecipient`. Alternatively, if it feels like it may be a use case that is needed, it may make sense to always include ETH as a valid token, in addition to any `OWRToken` set.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[M-02] Splits with 256 or more node operators will not be able to switch on fees
+
+0xSplits is used to distribute rewards across node operators. All Splits are deployed with an ImmutableSplitController, which is given permissions to update the split one time to add a fee for Obol at a future date.
+
+The Factory deploys these controllers as Clones with Immutable Args, hard coding the `owner`, `accounts`, `percentAllocations`, and `distributorFee` for the future update. This data is packed as follows:
+
+```solidity
+ function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+ ) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
+ uint256[] memory recipients = new uint[](recipientsSize);
+
+ uint256 i = 0;
+ for (; i < recipientsSize;) {
+ recipients[i] = (uint256(percentAllocations[i]) << ADDRESS_BITS) | uint256(uint160(accounts[i]));
+
+ unchecked {
+ i++;
+ }
+ }
+
+ data = abi.encodePacked(splitMain, distributorFee, owner, uint8(recipientsSize), recipients);
+ }
+```
+
+In the process, `recipientsSize` is unsafely downcast to a `uint8`, which has a maximum value of `255`. As a result, any value of `256` or greater will overflow and result in a lower value of `recipients.length % 256` being passed as `recipientsSize`.
+
+When the Controller is deployed, the full list of `percentAllocations` is passed to the `validSplit` check, which will pass as expected. However, later, when `updateSplit()` is called, the `getNewSplitConfiguration()` function will only return the first `recipientsSize` accounts, ignoring the rest.
+
+```solidity
+ function getNewSplitConfiguration()
+ public
+ pure
+ returns (address[] memory accounts, uint32[] memory percentAllocations)
+ {
+ // fetch the size first
+ // then parse the data gradually
+ uint256 size = _recipientsSize();
+ accounts = new address[](size);
+ percentAllocations = new uint32[](size);
+
+ uint256 i = 0;
+ for (; i < size;) {
+ uint256 recipient = _getRecipient(i);
+ accounts[i] = address(uint160(recipient));
+ percentAllocations[i] = uint32(recipient >> ADDRESS_BITS);
+ unchecked {
+ i++;
+ }
+ }
+ }
+```
+
+When `updateSplit()` is eventually called on `splitsMain` to turn on fees, the `validSplit()` check on that contract will revert because the sum of the percent allocations will no longer sum to `1e6`, and the update will not be possible.
+
+#### Proof of Concept
+
+The following test can be dropped into a file in `src/test` to demonstrate that passing 400 accounts will result in a `recipientSize` of `400 - 256 = 144`:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+import { Test } from "forge-std/Test.sol";
+import { console } from "forge-std/console.sol";
+import { ImmutableSplitControllerFactory } from "src/controllers/ImmutableSplitControllerFactory.sol";
+import { ImmutableSplitController } from "src/controllers/ImmutableSplitController.sol";
+
+interface ISplitsMain {
+ function createSplit(address[] calldata accounts, uint32[] calldata percentAllocations, uint32 distributorFee, address controller) external returns (address);
+}
+
+contract ZachTest is Test {
+ function testZach_RecipientSizeCappedAt256Accounts() public {
+ vm.createSelectFork("https://mainnet.infura.io/v3/fb419f740b7e401bad5bec77d0d285a5");
+
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](400);
+ uint32[] memory bigPercentAllocations = new uint32[](400);
+
+ for (uint i = 0; i < 400; i++) {
+ bigAccounts[i] = address(uint160(i));
+ bigPercentAllocations[i] = 2500;
+ }
+
+ // confirmation that 0xSplits will allow creating a split with this many accounts
+ // dummy acct passed as controller, but doesn't matter for these purposes
+ address split = ISplitsMain(0x2ed6c4B5dA6378c7897AC67Ba9e43102Feb694EE).createSplit(bigAccounts, bigPercentAllocations, 0, address(8888));
+
+ ImmutableSplitController controller = factory.createController(split, owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+
+ // added a public function to controller to read recipient size directly
+ uint savedRecipientSize = controller.ZachTest__recipientSize();
+ assert(savedRecipientSize < 400);
+ console.log(savedRecipientSize); // 144
+ }
+}
+```
+
+#### Recommendation
+
+When packing the data in `_packSplitControllerData()`, check `recipientsSize` before downcasting to a uint8:
+
+```diff
+function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
++ if (recipientsSize > 255) revert InvalidSplit__TooManyAccounts(recipientsSize);
+ ...
+}
+```
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[M-03] In a mass slashing event, node operators are incentivized to get slashed
+
+When the `OptimisticWithdrawalRecipient` receives funds from the beacon chain, it uses the following rule to determine the allocation:
+
+> If the amount of funds to be distributed is greater than or equal to 16 ether, it is assumed that it is a withdrawal (to be returned to the principal, with a cap on principal withdrawals of the total amount they deposited).
+
+> Otherwise, it is assumed that the funds are rewards.
+
+This value being as low as 16 ether protects against any predictable attack the node operator could perform. For example, due to the effect of hysteresis in updating effective balances, it does not seem to be possible for node operators to predictably bleed a withdrawal down to be below 16 ether (even if they timed a slashing perfectly).
+
+However, in the event of a mass slashing event, slashing punishments can be much more severe than they otherwise would be. To calculate the size of a slash, we:
+
+* take the total percentage of validator stake slashed in the 18 days preceding and following a user's slash
+* multiply this percentage by 3 (capped at 100%)
+* the full slashing penalty for a given validator equals 1/32 of their stake, plus the resulting percentage above applied to the remaining 31/32 of their stake
+
+In order for such penalties to bring the withdrawal balance below 16 ether (assuming a full 32 ether to start), we would need the percentage taken to be greater than `15 / 31 = 48.3%`, which implies that `48.3 / 3 = 16.1%` of validators would need to be slashed.
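+
+As a rough sanity check on that arithmetic, here is a simplified sketch (a hypothetical helper, not the exact consensus-spec calculation):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.19;
+
+/// Back-of-the-envelope version of the penalty described above. `slashedFractionBps`
+/// is the share of total stake slashed in the surrounding 36 days, in basis points.
+function approxSlashingPenalty(uint256 stakeGwei, uint256 slashedFractionBps) pure returns (uint256) {
+    uint256 initial = stakeGwei / 32;                      // immediate 1/32 penalty
+    uint256 correlationBps = slashedFractionBps * 3;       // multiplied by 3 ...
+    if (correlationBps > 10_000) correlationBps = 10_000;  // ... and capped at 100%
+    return initial + ((stakeGwei - initial) * correlationBps) / 10_000;
+}
+
+// With a 32 ETH stake, a slashed fraction just above 16.1% (about 1613 bps) pushes
+// the remaining balance below 16 ether, matching the threshold derived above.
+```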
+
+Because the measurement is taken from the 18 days before and after the incident, node operators would have the opportunity to see a mass slashing event unfold, and later decide that they would like to be slashed along with it.
+
+In the event that they observed that greater than 16.1% of validators were slashed, Obol node operators would be able to get themselves slashed, be exited with a withdrawal of less than 16 ether, and claim that withdrawal as rewards, effectively stealing from the principal recipient.
+
+#### Recommendations
+
+Find a solution that provides a higher level of guarantee that the funds withdrawn are actually rewards, and not a withdrawal.
+
+#### Review
+
+Acknowledged. We believe this is a black swan event. It would require a major ETH client to be compromised, and would be a betrayal of trust, so likely not EV+ for doxxed operators. Users of this contract with unknown operators should be wary of such a risk.
+
+### \[L-01] Obol fees will be applied retroactively to all non-distributed funds in the Splitter
+
+When Obol decides to turn on fees, a call will be made to `ImmutableSplitController::updateSplit()`, which will take the predefined split parameters (the original user specified split with Obol's fees added in) and call `updateSplit()` to implement the change.
+
+```solidity
+function updateSplit() external payable {
+ if (msg.sender != owner()) revert Unauthorized();
+
+ (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+ ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```
+
+If we look at the code on `SplitsMain`, we can see that this `updateSplit()` function is applied retroactively to all funds that are already in the split, because it updates the parameters without performing a distribution first:
+
+```solidity
+function updateSplit(
+ address split,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+)
+ external
+ override
+ onlySplitController(split)
+ validSplit(accounts, percentAllocations, distributorFee)
+{
+ _updateSplit(split, accounts, percentAllocations, distributorFee);
+}
+```
+
+This means that any funds that have been sent to the split but have not yet been distributed will be subject to the Obol fee. Since these splitters will be accumulating all execution layer fees, it is possible that some of them may have received large MEV bribes, where this after-the-fact fee could be quite expensive.
+
+#### Recommendation
+
+The most strict solution would be for the `ImmutableSplitController` to store both the old split parameters and the new parameters. The old parameters could first be used to call `distributeETH()` on the split, and then `updateSplit()` could be called with the new parameters.
+
+If storing both sets of values seems too complex, the alternative would be to require that `split.balance <= 1` to update the split. Then the Obol team could simply store the old parameters off chain to call `distributeETH()` on each split to "unlock" it to update the fees.
+
+(Note that for the second solution, the ETH balance should be less than or equal to 1, not 0, because 0xSplits stores empty balances as `1` for gas savings.)
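+
+For the second option, the guard itself is small. A minimal sketch (contract and error names are hypothetical; a real controller would also forward the new configuration to `SplitMain`):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.19;
+
+/// Minimal illustration of the "distribute before update" guard suggested above.
+contract GuardedSplitUpdater {
+    error Unauthorized();
+    error SplitNotDistributed(uint256 balance);
+
+    address public immutable owner;
+    address public immutable split;
+
+    constructor(address _split) {
+        owner = msg.sender;
+        split = _split;
+    }
+
+    function updateSplit() external {
+        if (msg.sender != owner) revert Unauthorized();
+        // 0xSplits stores an "empty" ETH balance as 1 wei, so anything above
+        // that means there are undistributed funds that the new (fee-bearing)
+        // configuration would apply to retroactively.
+        if (split.balance > 1) revert SplitNotDistributed(split.balance);
+        // ...forward the new configuration to SplitMain here...
+    }
+}
+```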
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[L-02] If OWR is used with rebase tokens and there's a negative rebase, principal can be lost
+
+The `OptimisticWithdrawalRecipient` is deployed with a specific token immutably set on the clone. It is presumed that that token will usually be ETH, but it can also be an ERC20 to account for future integrations with tokenized versions of ETH.
+
+In the event that one of these integrations used a rebasing version of ETH (like `stETH`), the architecture would need to be set up as follows:
+
+`OptimisticWithdrawalRecipient => rewards to something like LidoSplit.sol => Split Wallet`
+
+In this case, the OWR would need to be able to handle rebasing tokens.
+
+In the event that rebasing tokens are used, there is the risk that slashing or inactivity leads to a period with a negative rebase. In this case, the following chain of events could happen:
+
+* `distribute(PULL)` is called, setting `fundsPendingWithdrawal == balance`
+* rebasing causes the balance to decrease slightly
+* `distribute(PULL)` is called again, so when `fundsToBeDistributed = balance - fundsPendingWithdrawal` is calculated in an unchecked block, it ends up being near `type(uint256).max`
+* since this is more than `16 ether`, the first `amountOfPrincipalStake - _claimedPrincipalFunds` will be allocated to the principal recipient, and the rest to the reward recipient
+* we check that `endingDistributedFunds <= type(uint128).max`, but unfortunately this check misses the issue, because only `fundsToBeDistributed` underflows, not `endingDistributedFunds`
+* `_claimedPrincipalFunds` is set to `amountOfPrincipalStake`, so all future claims will go to the reward recipient
+* the `pullBalances` for both recipients will be set higher than the balance of the contract, and so will be unusable
+
+In this situation, the only way for the principal to get their funds back would be for the full `amountOfPrincipalStake` to hit the contract at once, and for them to call `withdraw()` before anyone called `distribute(PUSH)`. If anyone was to be able to call `distribute(PUSH)` before them, all principal would be sent to the reward recipient instead.
+
+#### Recommendation
+
+Similar to #74, I would recommend removing the ability for the `OptimisticWithdrawalRecipient` to accept non-ETH tokens.
+
+Otherwise, I would recommend two changes for redundant safety:
+
+1. Do not allow the OWR to be used with rebasing tokens.
+2. Move the `_fundsToBeDistributed = _endingDistributedFunds - _startingDistributedFunds;` out of the unchecked block. The case where `_endingDistributedFunds` underflows is already handled by a later check, so this one change should be sufficient to prevent any risk of this issue.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[L-03] LidoSplit can receive ETH, which will be locked in contract
+
+Each new `LidoSplit` is deployed as a clone, which comes with a `receive()` function for receiving ETH.
+
+However, the only function on `LidoSplit` is `distribute()`, which converts `stETH` to `wstETH` and transfers it to the `splitWallet`.
+
+While this contract should only be used for Lido to pay out rewards (which will come in `stETH`), it seems possible that users may accidentally use the same contract to receive other validator rewards (in ETH), or that Lido governance may introduce ETH payments in the future, which would cause the funds to be locked.
+
+#### Proof of Concept
+
+The following test can be dropped into `LidoSplit.t.sol` to confirm that the clones can currently receive ETH:
+
+```solidity
+function testZach_CanReceiveEth() public {
+ uint before = address(lidoSplit).balance;
+ payable(address(lidoSplit)).transfer(1 ether);
+ assertEq(address(lidoSplit).balance, before + 1 ether);
+}
+```
+
+#### Recommendation
+
+Introduce an additional function to `LidoSplit.sol` which wraps ETH into stETH before calling `distribute()`, in order to rescue any ETH accidentally sent to the contract.
+
+#### Review
+
+Fixed in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87/files) by adding a `rescueFunds()` function that can send ETH or any ERC20 (except `stETH` or `wstETH`) to the `splitWallet`.
+
+### \[L-04] Upgrade to latest version of Solady to fix LibClone bug
+
+In the recent [Solady audit](https://github.com/Vectorized/solady/blob/main/audits/cantina-solady-report.pdf), an issue was found that affects LibClone.
+
+In short, LibClone assumes that the length of the immutable arguments on the clone will fit in 2 bytes. If it's larger, it overlaps other op codes and can lead to strange behaviors, including causing the deployment to fail or causing the deployment to succeed with no resulting bytecode.
+
+Because the `ImmutableSplitControllerFactory` allows the user to input arrays of any length that will be encoded as immutable arguments on the Clone, we can manipulate the length to accomplish these goals.
+
+Fortunately, failed deployments or empty bytecode (which causes a revert when `init()` is called) are not problems in this case, as the transactions will fail, and it can only happen with unrealistically long arrays that would only be used by malicious users.
+
+However, it is difficult to be sure how else this risk might be exploited by using the overflow to jump to later op codes, and it is recommended to update to a newer version of Solady where the issue has been resolved.
+
+#### Proof of Concept
+
+If we comment out the `init()` call in the `createController()` call, we can see that the following test "successfully" deploys the controller, but the result is that there is no bytecode:
+
+```solidity
+function testZach__CreateControllerSoladyBug() public {
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](28672);
+ uint32[] memory bigPercentAllocations = new uint32[](28672);
+
+ for (uint i = 0; i < 28672; i++) {
+ bigAccounts[i] = address(uint160(i));
+ if (i < 32) bigPercentAllocations[i] = 820;
+ else bigPercentAllocations[i] = 34;
+ }
+
+ ImmutableSplitController controller = factory.createController(address(8888), owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+ assert(address(controller) != address(0));
+ assert(address(controller).code.length == 0);
+}
+```
+
+#### Recommendation
+
+Delete Solady and clone it from the most recent commit, or any commit after the fixes from [PR #548](https://github.com/Vectorized/solady/pull/548/files#diff-27a3ba4730de4b778ecba4697ab7dfb9b4f30f9e3666d1e5665b194fe6c9ae45) were merged.
+
+#### Review
+
+Solady has been updated to v0.0.123 in [PR 88](https://github.com/ObolNetwork/obol-manager-contracts/pull/88).
+
+### \[G-01] stETH and wstETH addresses can be saved on implementation to save gas
+
+The `LidoSplitFactory` contract holds two immutable values for the addresses of the `stETH` and `wstETH` tokens.
+
+When new clones are deployed, these values are encoded as immutable args. This adds the values to the contract code of the clone, so that each time a call is made, they are passed as calldata along to the implementation, which reads the values from the calldata for use.
+
+Since these values will be consistent across all clones on the same chain, it would be more gas efficient to store them in the implementation directly, which can be done with `immutable` storage values, set in the constructor.
+
+This would save 40 bytes of calldata on each call to the clone, which leads to a savings of approximately 640 gas on each call.
+
+#### Recommendation
+
+1. Add the following to `LidoSplit.sol`:
+
+```solidity
+address immutable public stETH;
+address immutable public wstETH;
+```
+
+2. Add a constructor to `LidoSplit.sol` which sets these immutable values. Solidity treats immutable values as constants and stores them directly in the contract bytecode, so they will be accessible from the clones.
+3. Remove `stETH` and `wstETH` from `LidoSplitFactory.sol`, both as storage values, arguments to the constructor, and arguments to `clone()`.
+4. Adjust the `distribute()` function in `LidoSplit.sol` to read the storage values for these two addresses, and remove the helper functions to read the clone's immutable arguments for these two values.
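+
+A rough sketch of steps 1 and 2 combined (the constructor argument names are illustrative):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.19;
+
+/// Sketch of the recommendation: store the token addresses as immutables on the
+/// implementation so clones no longer need to carry them as immutable args.
+contract LidoSplitImplementationSketch {
+    address public immutable stETH;
+    address public immutable wstETH;
+
+    constructor(address _stETH, address _wstETH) {
+        // Immutables are embedded in the implementation's runtime bytecode, so
+        // every clone delegatecalling into it reads the same values.
+        stETH = _stETH;
+        wstETH = _wstETH;
+    }
+}
+```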
+
+#### Review
+
+Fixed as recommended in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87).
+
+### \[G-02] OWR can be simplified and save gas by not tracking distributedFunds
+
+Currently, the `OptimisticWithdrawalRecipient` contract tracks four variables:
+
+* distributedFunds: total amount of the token distributed via push or pull
+* fundsPendingWithdrawal: total balance distributed via pull that haven't been claimed yet
+* claimedPrincipalFunds: total amount of funds claimed by the principal recipient
+* pullBalances: individual pull balances that haven't been claimed yet
+
+When `_distributeFunds()` is called, we perform the following math (simplified to only include relevant updates):
+
+```solidity
+endingDistributedFunds = distributedFunds - fundsPendingWithdrawal + currentBalance;
+fundsToBeDistributed = endingDistributedFunds - distributedFunds;
+distributedFunds = endingDistributedFunds;
+```
+
+As we can see, `distributedFunds` is added to the `endingDistributedFunds` variable and then removed when calculating `fundsToBeDistributed`, having no impact on the resulting `fundsToBeDistributed` value.
+
+The `distributedFunds` variable is not read or used anywhere else on the contract.
+
+#### Recommendation
+
+We can simplify the math and save substantial gas (a storage write plus additional operations) by not tracking this value at all.
+
+This would allow us to calculate `fundsToBeDistributed` directly, as follows:
+
+```solidity
+fundsToBeDistributed = currentBalance - fundsPendingWithdrawal;
+```
+
+#### Review
+
+Fixed as recommended in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85).
+
+### \[I-01] Strong trust assumptions between validators and node operators
+
+It is assumed that validators and node operators will always act in the best interest of the group, rather than in their selfish best interest.
+
+It is important to make clear to users that there are strong trust assumptions between the various parties involved in the DVT.
+
+Here are a select few examples of attacks that a malicious set of node operators could perform:
+
+1. Since there is currently no mechanism for withdrawals besides the consensus of the node operators, a minority of them sufficient to withhold consensus could blackmail the principal for a payment of up to 16 ether in order to allow them to withdraw. Otherwise, they could turn off their nodes and force the principal to bleed down to a final withdrawn balance of just over 16 ether.
+2. Node operators are all able to propose blocks within the P2P network, which are then propagated out to the rest of the network. Node software is accustomed to signing for blocks built by block builders based on the metadata, including the quantity of fees and the address they'll be sent to. This is enforced by social consensus, with block builders not wanting to harm validators in order to have their blocks accepted in the future. However, node operators in a DVT are not concerned with the social consensus of the network, and could therefore build blocks that include large MEV payments to their personal address (instead of the DVT's 0xSplit), add fictitious metadata to the block header, have their fellow node operators accept the block, and take the MEV for themselves.
+3. While the withdrawal address is immutably set on the beacon chain to the OWR, the fee address is added by the nodes to each block. Any majority of node operators sufficient to reach consensus could create a new 0xSplit with only themselves on it, and use that for all execution layer fees. The principal (and other node operators) would not be able to stop them or withdraw their principal, and would be stuck with staked funds paying fees to the malicious node operators.
+
+Note that there are likely many other possible attacks that malicious node operators could perform. This report is intended to demonstrate some examples of the trust level that is needed between validators and node operators, and to emphasize the importance of making these assumptions clear to users.
+
+#### Review
+
+Acknowledged. We believe EIP 7002 will reduce this trust assumption as it would enable the validator exit via the execution layer withdrawal key.
+
+### \[I-02] Provide node operator checklist to validate setup
+
+There are a number of ways that the user setting up the DVT could plant backdoors to harm the other users involved in the DVT.
+
+Each of these risks is possible to check before signing off on the setup, but some are rather hidden, so it would be useful for the protocol to provide a list of checks that node operators should do before signing off on the setup parameters (or, even better, provide these checks for them through the front end).
+
+1. Confirm that `SplitsMain.getHash(split)` matches the hash of the parameters that the user is expecting to be used.
+2. Confirm that the controller clone delegates to the correct implementation. If not, it could be pointed to delegate to `SplitMain` and then called to `transferControl()` to a user's own address, allowing them to update the split arbitrarily.
+3. `OptimisticWithdrawalRecipient.getTranches()` should be called to check that `amountOfPrincipalStake` is equal to the amount that they will actually be providing.
+4. The controller's `owner` and future split including Obol fees should be provided to the user. They should be able to check that `ImmutableSplitControllerFactory.predictSplitControllerAddress()`, with those parameters inputted, results in the controller that is actually listed on `SplitsMain.getController(split)`.
+
+#### Review
+
+Acknowledged. We already do some of these checks (and will add the remainder) automatically in the launchpad UI during the cluster confirmation phase by the node operator. We will also add them in markdown to the repo.
diff --git a/docs/versioned_docs/version-v0.19.2/sec/threat_model.md b/docs/versioned_docs/version-v0.19.2/sec/threat_model.md
new file mode 100644
index 0000000000..db9b4bff15
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/sec/threat_model.md
@@ -0,0 +1,155 @@
+---
+sidebar_position: 6
+description: Threat model for a Distributed Validator
+---
+
+# Charon threat model
+
+This page outlines a threat model for Charon, in the context of it being a Distributed Validator middleware for Ethereum validator clients.
+
+## Actors
+
+- Node owner (NO)
+- Cluster node operators (CNO)
+- Rogue node operator (RNO)
+- Outside attacker (OA)
+
+## General observations
+
+This page describes some considerations the Obol core team made about the security of a distributed validator in the context of its deployment and interaction with outside actors.
+
+The goal of this threat model is to provide transparency, but it is by no means a comprehensive audit or complete security reference. It’s a sharing of the experiences and thoughts we gained during the last few years building distributed validator technologies.
+
+To the Beacon Chain, a distributed validator looks much the same as a regular validator, and thus retains some of the same security considerations; however, Charon’s threat model differs from a validator client’s threat model because of its general design.
+
+While a validator client owns and operates on a set of validator private keys, Charon’s design means its node operators rarely (if ever) see the complete validator private keys, relying instead on modern cryptography to generate partial private key shares.
+
+An Ethereum distributed validator employs advanced signature primitives such that no operator ever handles the full validator private key in any standard lifecycle step: the [BLS digital signature scheme](https://en.wikipedia.org/wiki/BLS_digital_signature) employed by the Ethereum network allows distributed validators to individually sign a blob of data and then aggregate the resulting signatures in a transparent manner, never requiring any of the participating parties to know the full private key to do so.
+
+If the number of available Charon nodes falls below a given threshold, the cluster is not able to continue with its duties.
+
+Given the collaborative nature of a Distributed Validator cluster, every operator must prioritize the liveness and well-being of the cluster. Charon, at the time of writing, cannot reward or penalize operators within a cluster independently.
+
+This implies that Charon’s threat model can’t quite be equated to that of a single validator client, since they work on a different - albeit similar - set of security concepts.
+
+## Identity private key
+
+A distributed validator cluster is made up of a number of nodes, often run by a number of independent operators. For each DV cluster there is a set of Ethereum validator private keys on whose behalf the cluster validates.
+
+Alongside those, each node (henceforth ‘operator’) holds a SECP256K1 identity private key, commonly referred to as its ENR key, that identifies its node to the other cluster operators’ nodes.
+
+Exfiltration of said private key could lead to possible impersonation from an outside attacker, possibly leading to intra-cluster peering issues, eclipse attack risks, and degraded validator performance.
+
+Charon client communication is handled via BFT consensus, which is able to tolerate a given number of misbehaving nodes up to a certain threshold: once this threshold is reached, the cluster is not able to continue with its lifecycle and loses liveness guarantees (the validator goes offline). If more than two-thirds of nodes in a cluster are malicious, a cluster also loses safety guarantees (enough bad actors could collude to come to consensus on something slashable).
+
+Identity private key theft and the subsequent execution of a rogue cluster node is equivalent in the context of BFT consensus to a misbehaving node, hence the cluster can survive and continue with its duties up to what’s specified by the cluster’s BFT protocol’s parameters.
+
+The likelihood of this happening is low: an OA with enough knowledge of the topology of the operator’s network must steal `fault tolerance of the cluster + 1` identity private keys and run Charon nodes to subvert the distributed validator BFT consensus to push the validator offline.
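+
+For intuition only, assuming the textbook BFT bound of n ≥ 3f + 1 (Charon's exact consensus parameters may differ), the number of identity keys needed scales as follows:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.19;
+
+/// Illustration of the classic BFT fault-tolerance bound (n >= 3f + 1),
+/// shown here for intuition only.
+library BftBound {
+    /// Maximum number of faulty nodes a cluster of `n` nodes can tolerate.
+    function faultTolerance(uint256 n) internal pure returns (uint256) {
+        return (n - 1) / 3; // e.g. a 4-node cluster tolerates 1 faulty node
+    }
+
+    /// Minimum number of identity keys an attacker must control to threaten liveness.
+    function keysToHaltCluster(uint256 n) internal pure returns (uint256) {
+        return faultTolerance(n) + 1; // e.g. 2 keys for a 4-node cluster
+    }
+}
+```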
+
+## Ethereum validator private key access
+
+A distributed validator cluster executes Ethereum validator duties by acting as a middleman between the beacon chain and a validator client.
+
+To do so, the cluster must have knowledge of the Ethereum validator’s private key.
+
+The design and implementation of Charon minimizes the chances of this by splitting the Ethereum validator private keys into parts, which are then assigned to each node operator.
+A [distributed key generation](https://en.wikipedia.org/wiki/Distributed_key_generation) (DKG) process is used in order to evenly and safely create the private key shares without any central party having access to the full private key.
+
+The cryptography primitives employed in Charon can allow a threshold of the node operator’s private key shares to be reconstructed into the whole validator private key if needed.
+
+While the facilities to do this are present in the form of CLI commands, as stated before, Charon never reconstructs the key in normal operations, since the BLS digital signature system allows for signature aggregation.
+
+A distributed validator cluster can be started in two ways:
+
+1. An existing Ethereum validator private key is split by the private key holder, and distributed in a trusted manner among the operators.
+2. The operators participate in a distributed key generation (DKG) process, to create private key shares that collectively can be used to sign validation duties as an Ethereum distributed validator. The full private key for the cluster never exists in one place during or after the DKG.
+
+In case 1, one of the node operators, K, has direct access to the Ethereum validator key and is tasked with the generation of the other operators’ identity keys and key shares.
+
+It is clear that in this case the entirety of the sensitive material set is as secure as K’s environment; if K is compromised or malicious, the distributed validator could be slashed.
+
+Case 2 is different, because there’s no pre-existing Ethereum validator key in a single operator's hands: it will be generated using the FROST DKG algorithm.
+
+Assuming a successful DKG process, each operator will only ever handle its own key shares instead of the full Ethereum validator private key.
+
+A set of rogue operators composed of enough members to reconstruct the original Ethereum private keys might pose the risk of slashing for a distributed validator by colluding to produce slashable messages together.
+
+We deem this scenario’s likelihood as low, as it would mean that node operators decided to willfully slash the very stake they are being rewarded for staking.
+
+Still, in the context of an outside attack, purposefully slashing a validator would mean stealing multiple operator key shares, which in turn means violating many cluster operator’s security almost at the same time. This scenario may occur if there is a 0-day vulnerability in a piece of software they all run or in case of node misconfiguration.
+
+## Rogue node operator
+
+Nodes are connected by means of either relay nodes, or directly to one another.
+
+Each node operator is at risk of being impeded by other nodes or by the relay operator in the execution of their duties.
+
+Nodes need to expose a set of TCP ports to be able to work, and the mere fact of doing that opens up the opportunity for rogue parties to execute DDoS attacks.
+
+Another attack surface for the cluster exists in rogue nodes purposefully filling the various inter-state databases with meaningless data, or more generally submitting bogus information to the other parties to slow down the processing or, in the case of a sybil attack, bring the cluster to a halt.
+
+The likelihood of this scenario is medium, because no active intrusion is required: there’s no need for the rogue node operator to penetrate and compromise other nodes to disturb the cluster’s lifecycle.
+
+## Outside attackers interfering with a cluster
+
+There are two levels of sophistication in an OA:
+
+1. No knowledge of the topology of the cluster: The attacker doesn’t know where each cluster node is located and so can’t force fault tolerance +1 nodes offline if it can’t find them.
+2. Knowledge of the topology of the network (or part of it) is possessed: the OA can operate DDoS attacks or try breaking into node’s servers - at that point, the “rogue node operator” scenario applies.
+
+The likelihood of this scenario is low: an OA needs extensive capabilities and sufficient incentive to be able to carry out an attack of this size.
+
+An outside attacker could also find and use vulnerabilities in the underlying cryptosystems and cryptography libraries used by Charon and other Ethereum clients. Forging signatures that fool Charon’s cryptographic library or other dependencies may be feasible, but forging signatures or otherwise finding a vulnerability in either the SECP256K1+ECDSA or BLS12-381+BLS cryptosystems we deem to be a low likelihood risk.
+
+## Malicious beacon nodes
+
+A malicious beacon node (BN) could prevent the distributed validator from operating its validation duties, and could plausibly increase the likelihood of slashing by serving charon illegitimate information.
+
+If the number of nodes configured with the malicious BN reaches the Byzantine threshold of Charon’s BFT consensus protocol, the validation process can halt; worse, if most of the nodes are Byzantine, the system may reach consensus on a set of data that isn’t valid.
+
+We deem the likelihood of this scenario to be medium, depending on the trust model associated with the BN deployment (cloud, self-hosted, SaaS product): node operators should always host, or at least trust, their own beacon nodes.
+
+## Malicious charon relays
+
+A Charon relay is used as a communication bridge between nodes that aren’t directly exposed on the Internet. It also acts as the peer discovery mechanism for a cluster.
+
+Once a peer’s IP address has been discovered via the relay, a direct connection can be attempted. Nodes can either communicate by exchanging data through a relay, or by using the relay as a means to establish a direct TCP connection to one another.
+
+A malicious relay owned by an OA could lead to:
+
+- Network topology discovery, facilitating the “outside attackers interactions with a cluster” scenario
+- Impeding node communication, potentially impacting the BFT consensus protocol liveness (not security) and distributed validator duties
+- DKG process disruption leading to frustration and potential abandonment by node operators: could lead to the usage of a standard Ethereum validator setup, which implies weaker security overall
+
+We note that BFT consensus liveness disruption can only happen if the number of nodes using the malicious relay for communication reaches the number of Byzantine nodes tolerated by the consensus parameters.
+
+This risk can be mitigated by configuring nodes with multiple relay URLs from only [trusted entities](../advanced/self-relay.md).
+
+The likelihood of this scenario is medium: Charon nodes are configured with a default set of relay nodes, so if an OA were to compromise those, it would lead to many cluster topologies getting discovered and potentially attacked and disrupted.
+
+## Compromised runtime files
+
+Charon operates with two runtime files:
+
+- A lock file used to address the operators’ nodes and to define the Ethereum validator public keys and the public key shares associated with them
+- A cluster definition file used to define the operators’ addresses and identities during the DKG process
+
+The lock file is signed and validated by all the nodes participating in the cluster: assuming good security practices on the node operator side, and no bugs in Charon or its dependencies’ implementations, this scenario is unlikely.
+
+If one or more node operators are using less than ideal security practices, an OA could rewire the Charon CLI flags to include the `--no-verify` flag, which disables lock file signature and hash verification (usually intended only for development purposes).
+
+By doing that, the OA can edit the lock file as it sees fit, leading to the “rogue node operator” scenario. An OA or RNO might also manage to socially engineer other operators into running their malicious lock file with verification disabled.
+
+The likelihood of this scenario is low: an OA would need to compromise every node operator through social engineering to get them both to use a different set of files and to run the cluster with `--no-verify`.
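+
+As a quick sanity check, operators can confirm that verification has not been disabled anywhere in their deployment. The file names below match the example repositories and are only illustrative; adjust them to your own setup:
+
+```bash
+# Print any occurrence of the flag in the compose and env files; no output means verification is still enabled
+grep -rn -- "--no-verify" docker-compose*.yml .env* 2>/dev/null
+```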
+
+## Conclusions
+
+Distributed Validator Technology (DVT) helps maintain a high-assurance environment for Ethereum validators by leveraging modern cryptography to ensure no single point of failure is easily found in the system.
+
+As with any computing system, security considerations are to be expected in order to keep the environment safe.
+
+From the point of view of an Ethereum validator entity, running their services with a DV client can help greatly with availability, minimizing slashing risks, and maximizing participation in the network.
+
+On the other hand, one must take into consideration the risks involved with dishonest cluster operators, as well as rogue third-party beacon nodes or relay providers.
+
+In the end, we believe the benefits of DVT greatly outweigh the potential threats described in this overview.
diff --git a/docs/versioned_docs/version-v0.19.2/start/README.md b/docs/versioned_docs/version-v0.19.2/start/README.md
new file mode 100644
index 0000000000..9952b96485
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/start/README.md
@@ -0,0 +1,2 @@
+# start
+
diff --git a/docs/versioned_docs/version-v0.19.2/start/activate-dv.md b/docs/versioned_docs/version-v0.19.2/start/activate-dv.md
new file mode 100644
index 0000000000..6c2bffd366
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/start/activate-dv.md
@@ -0,0 +1,41 @@
+---
+sidebar_position: 5
+description: Activate the Distributed Validator using the deposit contract
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Activate a DV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+If you have successfully created a distributed validator and you are ready to activate it, congratulations! 🎉
+
+Once you have connected all of your charon clients together and synced all of your Ethereum nodes such that the monitoring indicates they are all healthy and ready to operate, **ONE operator** may proceed to deposit and activate the validator(s).
+
+The `deposit-data.json` to be used to deposit will be located in each operator's `.charon` folder. The copies across every node should be identical and any of them can be uploaded.
+
+:::warning
+If you are being given a `deposit-data.json` file that you didn't generate yourself, please take extreme care to ensure this operator has not given you a malicious `deposit-data.json` file that is not the one you expect. Cross reference the files from multiple operators if there is any doubt. Activating the wrong validator or an invalid deposit could result in complete theft or loss of funds.
+:::
+
+Use any of the following tools to deposit. Please use the third-party tools at your own risk and always double check the staking contract address.
+
+* Obol Distributed Validator Launchpad
+* ethereum.org Staking Launchpad
+* From a SAFE Multisig (repeat these steps for every validator to deposit in your cluster):
+  * From the SAFE UI, click on `New Transaction` then `Transaction Builder` to create a new custom transaction.
+  * Enter the beacon chain deposit contract address for mainnet - you can find it here.
+  * Fill in the transaction information:
+    * Set the amount to `32` in ETH.
+    * Use your `deposit-data.json` to fill the required data: `pubkey`, `withdrawal_credentials`, `signature`, and `deposit_data_root` (see the example after this list for extracting these values). Make sure to prefix the inputs with `0x` to format them as bytes.
+  * Click on `Add transaction`.
+  * Click on `Create Batch`.
+  * Click on `Send Batch`; you can click on `Simulate` to check whether the transaction will execute successfully.
+  * Get the minimum threshold of signatures from the other addresses and execute the custom transaction.
+
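+If you prefer to copy these values from the command line rather than opening the file by hand, a small `jq` sketch like the one below can pull them out (this assumes `jq` is installed and the standard deposit data field names; adjust the path to wherever your `deposit-data.json` lives):
+
+```bash
+# Print the four fields needed by the transaction builder, for the first validator in the file
+jq -r '.[0] | {pubkey, withdrawal_credentials, signature, deposit_data_root}' .charon/deposit-data.json
+# Remember to prefix each value with 0x in the SAFE UI if it is not already prefixed
+```
+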
+The activation process can take a minimum of 16 hours, with the maximum time to activation being dictated by the length of the activation queue, which can be weeks.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.19.2/start/quickstart-exit.md b/docs/versioned_docs/version-v0.19.2/start/quickstart-exit.md
new file mode 100644
index 0000000000..d815b2e191
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/start/quickstart-exit.md
@@ -0,0 +1,261 @@
+---
+sidebar_position: 7
+description: Exit a validator
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+# Exit a DV
+
+:::warning
+Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf).
+:::
+
+Users looking to exit staking entirely and withdraw their full balance back must also sign and broadcast a "voluntary exit" message with validator keys which will start the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas. In the case of a DV, each charon node needs to broadcast a partial exit to the other nodes of the cluster. Once a threshold of partial exits has been received by any node, the full voluntary exit will be sent to the beacon chain.
+
+This process will take 27 hours or longer depending on the current length of the exit queue.
+
+:::info
+
+- A threshold of operators needs to run the exit command for the exit to succeed.
+- If a charon client restarts after the exit command is run but before the threshold is reached, it will lose the partial exits it has received from the other nodes. If all charon clients restart and thus all partial exits are lost before the required threshold of exit messages are received, operators will have to rebroadcast their partial exit messages.
+ :::
+
+## Run the `voluntary-exit` command on your validator client
+
+Run the appropriate command on your validator client to broadcast an exit message from your validator client to its upstream charon client.
+
+It needs to be the validator client that is connected to your charon client taking part in the DV, as you are only signing a partial exit message, with a partial private key share, which your charon client will combine with the other partial exit messages from the other operators.
+
+:::info
+
+- All operators need to use the same `EXIT_EPOCH` for the exit to be successful. Assuming you want to exit as soon as possible, the default epoch of `162304` included in the below commands should be sufficient.
+- Partial exits can be broadcasted by any validator client as long as the sum reaches the threshold for the cluster.
+ :::
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=162304`}
+
+
+
+
+ The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+ For each file in the `/home/user/data/wd/secrets` directory, it:
+
+ - Extracts the filename without the extension (the file name is the public key).
+ - Appends `--validator=$filename` to the `command` variable.
+
+ It then executes the `nimbus_beacon_node` binary with the following arguments:
+
+ - `deposits exit`: Exits validators.
+ - `$command`: The generated string of `--validator` flags from the loop.
+ - `--epoch=162304`: The epoch upon which to submit the voluntary exit.
+ - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+ - `--data-dir=/home/user/data/wd/`: Specifies the keystore path, which contains all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=162304 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+ The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+ - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+ - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+ - `--exitEpoch=162304`: The epoch upon which to submit the voluntary exit.
+ - `--network=goerli`: Specifies the network.
+ - `--yes`: Skips the confirmation prompt.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=162304 --network=goerli --yes'`}
+
+
+
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=256`}
+
+
+
+
+ The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+ For each file in the `/home/user/data/wd/secrets` directory, it:
+
+ - Extracts the filename without the extension (the file name is the public key).
+ - Appends `--validator=$filename` to the `command` variable.
+
+ It then executes the `nimbus_beacon_node` binary with the following arguments:
+
+ - `deposits exit`: Exits validators.
+ - `$command`: The generated string of `--validator` flags from the loop.
+ - `--epoch=256`: The epoch upon which to submit the voluntary exit.
+ - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+ - `--data-dir=/home/user/data/wd/`: Specifies the keystore path, which contains all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=256 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+ The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+ - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+ - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+ - `--exitEpoch=256`: The epoch upon which to submit the voluntary exit.
+ - `--network=holesky`: Specifies the network.
+ - `--yes`: Skips the confirmation prompt.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=256 --network=holesky --yes'`}
+
+
+
+
+
+
+
+
+
+
+ {String.raw`docker exec -ti charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=194048`}
+
+
+
+
+ The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory.
+
+ For each file in the `/home/user/data/wd/secrets` directory, it:
+
+ - Extracts the filename without the extension (the file name is the public key).
+ - Appends `--validator=$filename` to the `command` variable.
+
+ It then executes the `nimbus_beacon_node` binary with the following arguments:
+
+ - `deposits exit`: Exits validators.
+ - `$command`: The generated string of `--validator` flags from the loop.
+ - `--epoch=194048`: The epoch upon which to submit the voluntary exit.
+ - `--rest-url=http://charon:3600/`: Specifies the Charon `host:port`.
+ - `--data-dir=/home/user/data/wd/`: Specifies the keystore path, which contains all the validator keys. There will be a `secrets` and a `validators` folder inside it.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ command=""; \
+ for file in /home/user/data/wd/secrets/*; do \
+ filename=$(basename "$file" | cut -d. -f1); \
+ command+=" --validator=$filename"; \
+ done; \
+
+ /home/user/nimbus_beacon_node deposits exit $command --epoch=194048 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+
+
+
+ The following executes an interactive command inside the Lodestar VC container to exit all validators. It executes `node /usr/app/packages/cli/bin/lodestar validator voluntary-exit` with the arguments:
+
+ - `--beaconNodes="http://charon:3600"`: Specifies the Charon `host:port`.
+ - `--dataDir=/opt/data`: Specifies the folder where the key stores were imported.
+ - `--exitEpoch=194048`: The epoch upon which to submit the voluntary exit.
+ - `--network=mainnet`: Specifies the network.
+ - `--yes`: Skips the confirmation prompt.
+
+
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 /bin/sh -c 'node /usr/app/packages/cli/bin/lodestar validator voluntary-exit --beaconNodes="http://charon:3600" --dataDir=/opt/data --exitEpoch=194048 --network=mainnet --yes'`}
+
+
+
+
+
+
+
+Once a threshold of exit signatures has been received by any single charon client, it will craft a valid voluntary exit message and will submit it to the beacon chain for inclusion. You can monitor partial exits stored by each node in the [Grafana Dashboard](https://github.com/ObolNetwork/charon-distributed-validator-node).
+
+## Exit epoch and withdrawable epoch
+
+The process of a validator exiting from staking takes variable amounts of time, depending on how many others are exiting at the same time.
+
+Immediately upon broadcasting a signed voluntary exit message, the exit epoch and withdrawable epoch values are calculated based off the current epoch number. These values determine exactly when the validator will no longer be required to be online performing validation, and when the validator is eligible for a full withdrawal respectively.
+
+1. Exit epoch - epoch at which your validator is no longer active, no longer earning rewards, and is no longer subject to slashing rules.
+ :::warning
+ Up until this epoch (while "in the queue") your validator is expected to be online and is held to the same slashing rules as always. Do not turn your DV node off until this epoch is reached.
+ :::
+2. Withdrawable epoch - epoch at which your validator funds are eligible for a full withdrawal during the next validator sweep.
+ This occurs 256 epochs after the exit epoch, which takes ~27.3 hours.
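+
+As a rough back-of-the-envelope check, assuming mainnet timing parameters of 32 slots per epoch and 12 seconds per slot:
+
+```bash
+# 256 epochs x 32 slots/epoch x 12 seconds/slot, converted to hours
+echo "scale=1; 256 * 32 * 12 / 3600" | bc
+# => 27.3
+```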
+
+## How to verify a validator exit
+
+Consult the examples below and compare them to your validator's monitoring to verify that exits from each operator in the cluster are being received. This example is a cluster of 4 nodes running 2 validators, where a threshold of 3 nodes broadcasting exits is needed.
+
+1. Operator 1 broadcasts an exit on validator client 1.
+ 
+ 
+2. Operator 2 broadcasts an exit on validator client 2.
+ 
+ 
+3. Operator 3 broadcasts an exit on validator client 3.
+ 
+ 
+
+At this point, the threshold of 3 has been reached and the validator exit process will start. The logs will show the following:
+
+
+:::tip
+Once a validator has broadcasted an exit message, it must continue to validate for at least 27 hours, and potentially longer. Do not shut off your distributed validator nodes until your validator is fully exited.
+:::
diff --git a/docs/versioned_docs/version-v0.19.2/start/quickstart_alone.md b/docs/versioned_docs/version-v0.19.2/start/quickstart_alone.md
new file mode 100644
index 0000000000..82a1d0fb2c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/start/quickstart_alone.md
@@ -0,0 +1,159 @@
+---
+sidebar_position: 3
+description: Create a DV alone
+---
+
+# quickstart\_alone
+
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+## Create a DV alone
+
+:::warning Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+:::info It is possible for a single operator to manage all of the nodes of a DV cluster. The nodes can be run on a single machine, which is only suitable for testing, or the nodes can be run on multiple machines, which is expected for a production setup.
+
+The private key shares can be created centrally and distributed securely to each node. Alternatively, the private key shares can be created in a lower-trust manner with a [Distributed Key Generation](../int/key-concepts.md#distributed-validator-key-generation-ceremony) process, which avoids the validator private key being stored in full anywhere, at any point in its lifecycle. Follow the [group quickstart](quickstart_group.md) instead for this latter case. :::
+
+### Pre-requisites
+
+* A basic [knowledge](https://docs.ethstaker.cc/ethstaker-knowledge-base/) of Ethereum nodes and validators.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+### Step 1: Create the key shares locally
+
+Go to the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/docs/dvl/intro/README.md#dv-launchpad-links) and select `Create a distributed validator alone`. Follow the steps to configure your DV cluster. The Launchpad will give you a docker command to create your cluster.\
+Before you run the command, check out the [Quickstart Alone](https://github.com/ObolNetwork/charon-distributed-validator-cluster.git) demo repo and `cd` into the directory.
+
+```bash
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+# Change directory
+cd charon-distributed-validator-cluster/
+
+# Run the command provided in the DV Launchpad "Create a cluster alone" flow
+docker run -u $(id -u):$(id -g) --rm -v "$(pwd)/:/opt/charon" obolnetwork/charon:v0.19.2 create cluster --definition-file=...
+```
+
+1. Clone the [Quickstart Alone](https://github.com/ObolNetwork/charon-distributed-validator-cluster) demo repo and `cd` into the directory.
+
+```bash
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+# Change directory
+cd charon-distributed-validator-cluster/
+```
+
+2. Run the cluster creation command, setting required flag values.
+
+Run the below command to create the validator private key shares and cluster artifacts locally, replacing the example values for `nodes`, `network`, `num-validators`, `fee-recipient-addresses`, and `withdrawal-addresses`. Check the [Charon CLI reference](../charon/charon-cli-reference.md#create-a-full-cluster-locally) for additional, optional flags to set.
+
+```bash
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.2 create cluster --nodes=4 --network=holesky --num-validators=1 --name="Quickstart Guide Cluster" --cluster-dir="cluster" --fee-recipient-addresses=0x000000000000000000000000000000000000dead --withdrawal-addresses=0x000000000000000000000000000000000000dead
+```
+
+:::tip If you would like your cluster to appear on the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/dvl/intro/README.md), add the `--publish` flag to the command. :::
+
+
+
+After the `create cluster` command is run, you should have multiple subfolders within the newly created `./cluster/` folder, one for each node created.
+
+**Backup the `./cluster/` folder, then move on to deploying the cluster.**
+
+:::info Make sure your backup is secure and private; someone with access to these files could get the validators slashed. :::
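+
+One simple way to take an offline copy is a compressed archive that you then store somewhere safe and private (a sketch; the archive name is arbitrary and you may want to encrypt it):
+
+```bash
+# Archive the generated cluster artefacts before deploying them
+tar -czf cluster-backup-$(date +%F).tar.gz cluster/
+```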
+
+### Step 2: Deploy and start the nodes
+
+:::warning This part of the guide only runs one Execution Client, one Consensus Client, and 6 Distributed Validator Charon Client + Validator Client pairs on a single docker instance, and **is not suitable for a mainnet deployment**. (If this machine fails, there will not be any fault tolerance - the cluster will also fail.)
+
+For a production deployment with fault tolerance, follow the part of the guide instructing you how to distribute the nodes across multiple machines. :::
+
+Run this command to start your cluster containers if you deployed using [CDVC repo](https://github.com/ObolNetwork/charon-distributed-validator-cluster).
+
+```sh
+# Start the distributed validator cluster
+docker compose up --build -d
+```
+
+Check the monitoring dashboard to see if things look alright:
+
+```sh
+# Open Grafana
+open http://localhost:3000/d/laEp8vupp
+```
+
+:::warning To distribute your cluster across multiple machines, each node in the cluster needs one of the folders called `node*/` to be copied to it. Each folder should be copied to a CDVN repo and renamed from `node*` to `.charon`.
+
+Right now, the `charon create cluster` command [used earlier to create the private keys](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/start/quickstart_alone/README.md#step-1-create-the-key-shares-locally) outputs a folder structure like `cluster/node*/`. Make sure to grab the `./node*/` folders, _rename_ them to `.charon` and then move them to one of the single node repos below. Once all nodes are online, synced, and connected, you will be ready to activate your validator. :::
+
+This is necessary for the folder to be found by the default `charon run` command. Optionally, it is possible to override `charon run`'s default file locations by using `charon run --private-key-file="node0/charon-enr-private-key" --lock-file="node0/cluster-lock.json"` for each instance of charon you start (substituting `node0` for each node number in your cluster as needed).
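+
+As a sketch of what this looks like for the machine that will run `node0` (paths are illustrative; repeat with `node1`, `node2`, and so on for the other machines):
+
+```bash
+# On the machine that will run node0, start from a fresh copy of the single node repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+cd charon-distributed-validator-node/
+
+# Copy the artefacts generated for this node and rename the folder to .charon
+cp -r /path/to/cluster/node0 .charon
+```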
+
+:point\_right: Use the single node [docker compose](https://github.com/ObolNetwork/charon-distributed-validator-node), the kubernetes [manifests](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node), or the [helm chart](https://github.com/ObolNetwork/helm-charts) example repos to get your nodes up and connected after loading the `.charon` folder artifacts into them appropriately.\
+
+
+```log
+
+cluster
+├── node0
+│   ├── charon-enr-private-key
+│   ├── cluster-lock.json
+│   ├── deposit-data.json
+│   └── validator_keys
+│       ├── keystore-0.json
+│       └── keystore-0.txt
+├── node1
+│   ├── charon-enr-private-key
+│   ├── cluster-lock.json
+│   ├── deposit-data.json
+│   └── validator_keys
+│       ├── keystore-0.json
+│       └── keystore-0.txt
+├── node2
+│   ├── charon-enr-private-key
+│   ├── cluster-lock.json
+│   ├── deposit-data.json
+│   └── validator_keys
+│       ├── keystore-0.json
+│       └── keystore-0.txt
+└── node3
+    ├── charon-enr-private-key
+    ├── cluster-lock.json
+    ├── deposit-data.json
+    └── validator_keys
+        ├── keystore-0.json
+        ├── keystore-0.txt
+        ├── keystore-N.json
+        └── keystore-N.txt
+
+```
+
+```log
+└── .charon
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── ...
+ ├── keystore-N.json
+ └── keystore-N.txt
+```
+
+:::info Currently, the quickstart repo installs a node on the Holesky testnet. It is possible to choose a different network (another testnet, or mainnet) by overriding the `.env` file.
+
+`.env.sample` is a sample environment file that allows overriding default configuration defined in `docker-compose.yml`. Uncomment and set any variable to override its value.
+
+Set up the desired inputs for the DV, including the network you wish to operate on. Check the [Charon CLI reference](../charon/charon-cli-reference.md) for additional optional flags to set. Once you have decided on the values you wish to use, make a copy of `.env.sample` called `.env`.
+
+```bash
+# Copy ".env.sample", renaming it ".env"
+cp .env.sample .env
+```
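+
+For example, after copying you could uncomment and set a single variable in `.env` (the variable name below is illustrative; check your repo's `.env.sample` for the exact names it supports):
+
+```bash
+# .env - override only what you need
+NETWORK=mainnet
+```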
+
+:::
diff --git a/docs/versioned_docs/version-v0.19.2/start/quickstart_group.md b/docs/versioned_docs/version-v0.19.2/start/quickstart_group.md
new file mode 100644
index 0000000000..cede9d0d6c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/start/quickstart_group.md
@@ -0,0 +1,269 @@
+---
+sidebar_position: 4
+description: Create a DV with a group
+---
+
+# quickstart\_group
+
+import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem";
+
+## Create a DV with a group
+
+:::warning Charon is in a beta state and should be used with caution according to its [Terms of Use](https://obol.tech/terms.pdf). :::
+
+This quickstart guide will walk you through creating a Distributed Validator Cluster with a number of other node operators.
+
+### Pre-requisites
+
+* A basic [knowledge](https://docs.ethstaker.cc/ethstaker-knowledge-base/) of Ethereum nodes and validators.
+* Ensure you have [git](https://git-scm.com/downloads) installed.
+* Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+* Make sure `docker` is running before executing the commands below.
+
+
+
+### Step 1: Generate an ENR
+
+In order to prepare for a distributed key generation ceremony, you need to create an ENR for your charon client. This ENR is a public/private key pair that allows the other charon clients in the DKG to identify and connect to your node. If you are creating a cluster but not taking part as a node operator in it, you can skip this step.
+
+```bash
+# Clone the repo
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+# Change directory
+cd charon-distributed-validator-node/
+
+# Use docker to create an ENR. Backup the file `.charon/charon-enr-private-key`.
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.2 create enr
+```
+
+You should expect to see a console output like this:
+
+```
+Created ENR private key: .charon/charon-enr-private-key
+enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u
+```
+
+:::warning Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! **If you lose this file you won't be able to take part in the DKG ceremony nor start the DV cluster successfully.** :::
+
+:::tip If instead of being shown your `enr` you see an error saying `permission denied` then you may need to [update your docker permissions](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.19.2/faq/errors.mdx#docker-permission-denied-error) to allow the command to run successfully. :::
+
+For the next step, select the _Creator_ tab if you are coordinating the creation of the cluster. (This role holds no position of privilege in the cluster, it only sets the initial terms of the cluster that the other operators agree to.) Select the _Operator_ tab if you are accepting an invitation to operate a node in a cluster proposed by the cluster creator.
+
+### Step 2: Create a cluster or accept an invitation to a cluster
+
+### Collect addresses, configure the cluster, share the invitation
+
+Before starting the cluster creation process, you will need to collect an Ethereum address for each operator in the cluster. They will need to be able to sign messages through MetaMask with this address. _(Broader wallet support will be added in future.)_ With these addresses in hand, go through the cluster creation flow.
+
+You will use the Launchpad to create an invitation, and share it with the operators.\
+This video shows the flow within the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/docs/dvl/intro/README.md#dv-launchpad-links):
+
+The following are the steps for creating a cluster.
+
+1. Go to the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/docs/dvl/intro/README.md#dv-launchpad-links).
+2. Connect your wallet.
+3. Select `Create a Cluster with a group` then `Get Started`.
+4. Follow the flow and accept the advisories.
+5. Configure the Cluster:
+   * Input the `Cluster Name` & `Cluster Size` (i.e. the number of operators in the cluster). The threshold will update automatically; it shows the number of nodes that need to be functioning for the validator(s) to stay active.
+6. Input the Ethereum addresses for each operator that you collected previously. If you will be taking part as an operator, click the "Use My Address" button for Operator 1.
+7. Configure the validator(s):
+   * Select the desired amount of validators (32 ETH each) the cluster will run. (Note that the mainnet launchpad is restricted to one validator for now.)
+   * If you are taking part in the cluster, enter the ENR you generated in [step one](quickstart_group.md#step-1-generate-an-enr) in the "What is your charon client's ENR?" field.
+   * Enter the `Principal address` which should receive the principal 32 ETH and the accrued consensus layer rewards when the validator is exited. This can optionally be set to the contract address of a multisig / splitter contract.
+   * Enter the `Fee Recipient address` to which the execution layer rewards will go. This can be the same as the principal address, or it can be a different address. This can optionally be set to the contract address of a multisig / splitter contract.
+8. Click `Create Cluster Configuration`. Review that all the details are correct, and press `Confirm and Sign`. You will be prompted to sign two or three transactions with your MetaMask wallet. These are:
+   * The `config_hash`. This is a hashed representation of the details of this cluster, to ensure everyone is agreeing to an identical setup.
+   * The `operator_config_hash`. This is your acceptance of the terms and conditions of participating as a node operator.
+   * Your `ENR`. Signing your ENR authorises the corresponding private key to act on your behalf in the cluster.
+9. Share your cluster invite link with the operators. Following the link will show you a screen waiting for other operators to accept the configuration you created.
+10. You can use the link to monitor how many of the operators have already signed their approval of the cluster configuration and submitted their ENR.
+
+You will use the CLI to create the cluster definition file, which you will distribute to the operators manually.
+
+1. The leader or creator of the cluster will prepare the `cluster-definition.json` file for the Distributed Key Generation ceremony using the `charon create dkg` command.
+2. Populate the `charon create dkg` command with the appropriate flags, including the `name`, the `num-validators`, the `fee-recipient-addresses`, the `withdrawal-addresses`, and the `operator-enrs` of all the operators participating in the cluster.
+3. Run the `charon create dkg` command to generate the DKG `cluster-definition.json` file. (Note: in the `docker run` command, you may have to change the version from v0.19.2 to the correct version of the repo you are using.)
+
+ ```
+ docker run --rm -v "$(pwd):/opt/charon" \
+   obolnetwork/charon:v0.19.2 create dkg \
+   --name="Quickstart" \
+   --num-validators=1 \
+   --fee-recipient-addresses="0x0000000000000000000000000000000000000000" \
+   --withdrawal-addresses="0x0000000000000000000000000000000000000000" \
+   --operator-enrs="enr:-JG4QGQpV4qYe32QFUAbY1UyGNtNcrVMip83cvJRhw1brMslPeyELIz3q6dsZ7GblVaCjL_8FKQhF6Syg-O_kIWztimGAYHY5EvPgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQKzMe_GFPpSqtnYl-mJr8uZAUtmkqccsAx7ojGmFy-FY4N0Y3CCDhqDdWRwgg4u"
+ ```
+
+ This command should output a file at `.charon/cluster-definition.json`. This file needs to be shared with the other operators in a cluster.
+
+ * The `.charon` folder is hidden by default. To view it, run `ls -al .charon` in your terminal. Alternatively, if you are on macOS, press `Cmd + Shift + .` to view all hidden files in the Finder application.
+
+### Join the cluster prepared by the creator
+
+Use the Launchpad or CLI to join the cluster configuration generated by the creator. Your cluster creator needs to configure the cluster and send you an invite link to join the cluster on the Launchpad. Once you've received the Launchpad invite link, you can begin the cluster acceptance process.
+
+1. Click on the DV launchpad link provided by the leader or creator. Make sure you recognise the domain and the person sending you the link, to ensure you are not being phished.
+2. Connect your wallet using the Ethereum address provided to the leader. 
+3. Review the operators' addresses submitted and click `Get Started` to continue.
+4. Review and accept the DV Launchpad terms & conditions and advisories.
+5. Review the cluster configuration set by the creator and add your `ENR` that you generated in [step 1](quickstart_group.md#step-1-generate-an-enr).
+6. Sign the two transactions with your wallet, these are:
+ * The config hash. This is a hashed representation of all of the details for this cluster.
+ * Your own `ENR`. This signature authorises the key represented by this ENR to act on your behalf in the cluster.
+7. Wait for all the other operators in your cluster to also finish these steps.
+
+You'll receive the `cluster-definition.json` file created by the leader/creator. You should save it in the `.charon/` folder that was created initially. (Alternatively, you can use the `--definition-file` flag to override the default expected location for this file.)
+
+Once every participating operator is ready, the next step is the distributed key generation amongst the operators.
+
+* If you are not planning on operating a node, and were only configuring the cluster for the operators, your journey ends here. Well done!
+* If you are one of the cluster operators, continue to the next step.
+
+### Step 3: Run the Distributed Key Generation (DKG) ceremony
+
+:::tip For the [DKG](../charon/dkg.md) to complete, all operators need to be running the command simultaneously. It helps if operators agree on a certain time, or schedule a video call, so that they can all run the command together. :::
+
+1. Once all operators have successfully signed, your screen will automatically advance to the next step and look like this. Click `Continue`. (If you closed the tab, you can always go back to the invite link shared by the leader and connect your wallet.)
+
+
+
+2. Copy and run the `docker` command on the screen into your terminal. It will retrieve the remote cluster details and begin the DKG process.
+
+ 
+3. Assuming the DKG is successful, a number of artefacts will be created in the `.charon` folder of the node. These include:
+ * A `deposit-data.json` file. This contains the information needed to activate the validator on the Ethereum network.
+ * A `cluster-lock.json` file. This contains the information needed by charon to operate the distributed validator cluster with its peers.
+ * A `validator_keys/` folder. This folder contains the private key shares and passwords for the created distributed validators.
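+
+A quick way to confirm these artefacts were written is to list the folder once the ceremony finishes (a sketch; exact file names can differ slightly between versions):
+
+```bash
+ls .charon
+# charon-enr-private-key  cluster-lock.json  deposit-data.json  validator_keys  ...
+ls .charon/validator_keys
+# keystore-0.json  keystore-0.txt  ...
+```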
+
+Once the creator gives you the `cluster-definition.json` file and you place it in a `.charon` subdirectory, run:
+
+```
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.19.2 dkg --publish
+```
+
+and the DKG process should begin.
+
+:::warning Please make sure to create a backup of your `.charon/` folder. **If you lose your private keys you won't be able to start the DV cluster successfully and may risk your validator deposit becoming unrecoverable.** Ensure every operator has their `.charon` folder securely and privately backed up before activating any validators. :::
+
+:::info The `cluster-lock` and `deposit-data` files are identical for each operator, if lost, they can be copied from one operator to another. :::
+
+Now that the DKG has been completed, all operators can start their nodes.
+
+### Step 4: Start your Distributed Validator Node
+
+With the DKG ceremony over, the last phase before activation is to prepare your node for validating over the long term.
+
+The quickstart [repository](https://github.com/ObolNetwork/charon-distributed-validator-node) is configured to sync an execution layer client (`Nethermind`) and a consensus layer client (`Lighthouse`). You can also leverage alternative ways to run a node such as Ansible, Helm, or Kubernetes manifests.
+
+:::info Currently, the quickstart [repo](https://github.com/ObolNetwork/charon-distributed-validator-node) configures a node for the Holesky testnet. It is possible to choose a different network (another testnet, or mainnet) by overriding the `.env` file. From within the `charon-distributed-validator-node` directory:
+
+`.env.sample` is a sample environment file that allows overriding default configuration defined in `docker-compose.yml`. Uncomment and set any variable to override its value.
+
+Set up the desired inputs for the DV, including the network you wish to operate on. Check the [Charon CLI reference](../charon/charon-cli-reference.md) for additional optional flags to set.
+
+```bash
+# Copy ".env.sample", renaming it ".env"
+cp .env.sample .env
+```
+
+:::
+
+:::warning If you manually update `docker compose` to mount `lighthouse` from your locally synced `~/.lighthouse`, the whole chain database may get deleted. It's best not to do this, since `lighthouse` checkpoint-syncs and syncing therefore doesn't take much time.\
+
+
+**Note**: If you have a `nethermind` node already synced, you can simply copy over the directory. For example: `cp -r ~/.ethereum/goerli data/nethermind`. This makes everything faster since you start from a synced nethermind node. :::
+
+```bash
+# Delete lighthouse data if it exists
+rm -r ./data/lighthouse
+
+# Spin up a Distributed Validator Node with a Validator Client
+docker compose up -d
+
+```
+
+If at any point you need to turn off your node, you can run:
+
+```bash
+# Shut down the currently running distributed validator node
+docker compose down
+```
+
+You should use the grafana dashboard that accompanies the quickstart repo to see whether your cluster is healthy.
+
+```bash
+# Open Grafana dashboard
+open http://localhost:3000/d/singlenode/
+```
+
+In particular you should check:
+
+* That your charon client can connect to the configured beacon client.
+* That your charon client can connect to all peers directly.
+* That your validator client is connected to charon, and has the private keys it needs loaded and accessible.
+
+Most components in the dashboard have some help text there to assist you in understanding your cluster performance.
+
+You might notice that there are logs indicating that a validator cannot be found and that APIs are returning 404. This is to be expected at this point, as the validator public keys listed in the lock file have not been deposited and acknowledged on the consensus layer yet (usually \~16 hours after the deposit is made).
+
+Alternatively, use an Ansible playbook to start your node ([see the repo here](https://github.com/ObolNetwork/obol-ansible) for further instructions), use a Helm chart ([see the repo here](https://github.com/ObolNetwork/helm-charts) for further instructions), or use Kubernetes manifests to start your charon client and validator client; these manifests expect an existing Beacon Node Endpoint to connect to ([see the repo here](https://github.com/ObolNetwork/charon-k8s-distributed-validator-node) for further instructions).
+
+**Using a pre-existing beacon node**
+
+:::warning Using a remote beacon node will impact the performance of your Distributed Validator and should be used sparingly. :::
+
+If you already have a beacon node running somewhere and you want to use that instead of running an EL (`nethermind`) & CL (`lighthouse`) as part of the example repo, you can disable these images. To do so, follow these steps:
+
+1. Copy the `docker-compose.override.yml.sample` file
+
+```
+cp -n docker-compose.override.yml.sample docker-compose.override.yml
+```
+
+2. Uncomment the `profiles: [disable]` section for both `nethermind` and `lighthouse`. The override file should now look like this
+
+```
+services:
+ nethermind:
+ # Disable nethermind
+ profiles: [disable]
+ # Bind nethermind internal ports to host ports
+ #ports:
+ #- 8545:8545 # JSON-RPC
+ #- 8551:8551 # AUTH-RPC
+ #- 6060:6060 # Metrics
+
+ lighthouse:
+ # Disable lighthouse
+ profiles: [disable]
+ # Bind lighthouse internal ports to host ports
+ #ports:
+ #- 5052:5052 # HTTP
+ #- 5054:5054 # Metrics
+...
+```
+
+3. Then, uncomment and set the `CHARON_BEACON_NODE_ENDPOINTS` variable in the `.env` file to your beacon node's URL
+
+```
+...
+# Connect to one or more external beacon nodes. Use a comma separated list excluding spaces.
+CHARON_BEACON_NODE_ENDPOINTS=
+...
+```
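+
+For example, with a single external beacon node (the URL below is purely illustrative; use your own node's address and REST API port):
+
+```bash
+# .env
+CHARON_BEACON_NODE_ENDPOINTS=http://my-beacon-node:5052
+```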
+
+4. Restart your docker compose
+
+```
+docker compose down
+docker compose up -d
+```
+
+:::tip In a Distributed Validator Cluster, it is important to have a low latency connection to your peers. Charon clients will use the NAT protocol to attempt to establish a direct connection to one another automatically. If this doesn't happen, you should port forward charon's p2p port to the public internet to facilitate direct connections. (The default port to expose is `:3610`). Read more about charon's networking [here](../charon/networking.md). :::
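+
+Once you have forwarded the port, a simple probe from a machine outside your network can confirm it is reachable (a sketch; replace the placeholder with your node's public IP):
+
+```bash
+# Run from a machine outside your home or data-centre network
+nc -vz <your-public-ip> 3610
+```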
+
+If you have gotten to this stage, every node is up, synced, and connected. Congratulations! You can now move forward to activating your validator to begin staking.
diff --git a/docs/versioned_docs/version-v0.19.2/start/quickstart_overview.md b/docs/versioned_docs/version-v0.19.2/start/quickstart_overview.md
new file mode 100644
index 0000000000..e139b21b1f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/start/quickstart_overview.md
@@ -0,0 +1,19 @@
+---
+sidebar_position: 1
+description: Quickstart Overview
+---
+
+# Quickstart Overview
+
+The quickstart guides are aimed at developers and stakers looking to utilize Distributed Validators for solo or multi-operator staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+There are two ways to set up a distributed validator and each comes with its own quickstart, within the "Getting Started" section:
+1. Run a DV cluster as a [**group**](./quickstart_group.md), where several operators run the nodes that make up the cluster. In this setup, the key shares are created using a distributed key generation process, avoiding the private keys ever being stored in full in any one place.
+This approach can also be used by single operators looking to manage all nodes of a cluster but wanting to create the key shares in a trust-minimised fashion.
+
+2. Run a DV cluster [**alone**](./quickstart_alone.md), where a single operator runs all the nodes of the DV. Depending on trust assumptions, there is not necessarily the need to create the key shares via a DKG process. Instead the key shares can be created in a centralised manner, and distributed securely to the nodes.
+
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.19.2/start/update.md b/docs/versioned_docs/version-v0.19.2/start/update.md
new file mode 100644
index 0000000000..2cb7acae51
--- /dev/null
+++ b/docs/versioned_docs/version-v0.19.2/start/update.md
@@ -0,0 +1,76 @@
+---
+sidebar_position: 6
+description: Update your DV cluster with the latest Charon release
+---
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Update a DV
+
+It is highly recommended to upgrade your DV stack from time to time. This ensures that your node is secure, performant, and up-to-date, and that you don't miss important hard forks.
+
+To do this, follow these steps:
+
+### Navigate to the node directory
+
+
+
+
+
+ cd charon-distributed-validator-node
+
+
+
+
+
+
+
+ cd charon-distributed-validator-cluster
+
+
+
+
+
+### Pull latest changes to the repo
+```
+git pull
+```
+
+### Create (or recreate) your DV stack
+```
+docker compose up -d --build
+```
+:::warning
+If you run more than one node in a DV Cluster, please take caution when upgrading them simultaneously, particularly if you are updating or changing the validator client used, or recreating disks. It is recommended to update nodes sequentially to minimise liveness and safety risks.
+:::
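+
+After recreating the stack, it can be worth confirming the containers came back up cleanly and glancing at the charon logs (a sketch; the service name may differ depending on which repo you run):
+
+```bash
+# List the running containers and tail the charon client's logs
+docker compose ps
+docker compose logs -f charon
+```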
+
+### Conflicts
+
+:::info
+You may get a `git conflict` error similar to this:
+:::
+```markdown
+error: Your local changes to the following files would be overwritten by merge:
+prometheus/prometheus.yml
+...
+Please commit your changes or stash them before you merge.
+```
+This is probably because you have made some changes to some of the files, for example to the `prometheus/prometheus.yml` file.
+
+To resolve this error, you can either:
+
+- Stash and reapply changes if you want to keep your custom changes:
+ ```
+ git stash # Stash your local changes
+ git pull # Pull the latest changes
+ git stash apply # Reapply your changes from the stash
+ ```
+ After reapplying your changes, manually resolve any conflicts that may arise between your changes and the pulled changes using a text editor or Git's conflict resolution tools.
+
+- Override changes and recreate configuration if you don't need to preserve your local changes and want to discard them entirely:
+ ```
+ git reset --hard # Discard all local changes and override with the pulled changes
+ docker-compose up -d --build # Recreate your DV stack
+ ```
+ After overriding the changes, you will need to recreate your DV stack using the updated files.
+ By following one of these approaches, you should be able to handle Git conflicts when pulling the latest changes to your repository, either preserving your changes or overriding them as per your requirements.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.3.0/README.md b/docs/versioned_docs/version-v0.3.0/README.md
new file mode 100644
index 0000000000..42cea5ac40
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.3.0
+
diff --git a/docs/versioned_docs/version-v0.3.0/cg/README.md b/docs/versioned_docs/version-v0.3.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.3.0/cg/bug-report.md b/docs/versioned_docs/version-v0.3.0/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing, to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/website/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│   └── proof-of-spacetime
+│       └── post-diagram.png
+├── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.3.0/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.3.0/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..451567693d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dv/01_introducing-charon.md
@@ -0,0 +1,27 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon DVT middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.3.0/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is a GoLang-based, HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+### Charon architecture
+
+The graphic below outlines the internal functionality of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet; see [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```
+docker run ghcr.io/obolnetwork/charon:v0.3.0 --help
+```
diff --git a/docs/versioned_docs/version-v0.3.0/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.3.0/dv/02_validator-creation.md
new file mode 100644
index 0000000000..a7ffedfa8b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dv/02_validator-creation.md
@@ -0,0 +1,32 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps:
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+ * This involves setting all of the terms for the cluster, including; withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a `cluster configuration`.
+ * This operator also sets their charon client's Ethereum Node Record (ENR).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+ * They submit the ENR of their charon client.
+ * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster manifest file.
+4. Every operator loads this cluster manifest file into `charon dkg`. The manifest provides the charon process with the information it needs to complete the DKG ceremony with the other charon clients.
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A manifest lockfile, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated threshold verifiers. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator exit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares and the manifest lockfile (see the sketch after this list).
+7. All operators load the keys and manifests generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to verify connectivity between all operators and confirm that latency is acceptable.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
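+
+As a rough sketch of step 6, an operator might simply archive the ceremony outputs before loading them into their staking deployment. The directory below is an assumption based on charon's default cluster directory; adjust it to wherever your artifacts were actually written:
+
+```
+# Hypothetical backup of DKG ceremony artifacts (path is illustrative only)
+tar -czf charon-cluster-backup-$(date +%F).tar.gz ./charon/cluster
+```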
diff --git a/docs/versioned_docs/version-v0.3.0/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.3.0/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..4b20142a74
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dv/04_middleware-daemon.md
@@ -0,0 +1,34 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware daemon
+
+The Charon daemon serves as a consensus layer API middleware and connects to the Obol peer-to-peer network to discover its counterpart Charon nodes.
+
+### Operation
+
+The middleware strives to be stateless and statically configured through files on disk. The lack of a control-plane API for online reconfiguration is deliberate, keeping operations simple and secure by default.
+
+A single instance of the middleware can participate in multiple distributed validator clusters. The number of validators per middleware instance is bounded by risk management and hardware limits (CPU, memory, bandwidth), but there is no hardcoded limit.
+
+The daemon offers a config reload instruction through Unix signals, which is useful for joining or leaving Obol clusters on the fly without interruption.
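+
+As a sketch of what such a reload could look like: the specific signal is not documented here, so `SIGHUP` is assumed below purely for illustration and should be confirmed against the client's own documentation:
+
+```
+# Assumed example only: ask a running charon process to reload its configuration
+kill -HUP "$(pidof charon)"
+```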
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
+
+### Initialization
+
+An instance of Charon requires the following pieces of information at minimum in order to operate:
+
+- A DV cluster manifest file in the to-be-confirmed EIP format. This file contains the information a DV client needs to join a Distributed Validator Cluster. This file includes:
+ - The total number of shares of the key and the required threshold for reconstruction.
+  - A SECP256K1 key pair in ENR format for Obol consensus messages; this key is signed by the corresponding operator's validator key shares to legitimise it.
+ - A list of all ENR public keys of other operators participating in the cluster.
+ - The group public keys representing each distributed validator in the cluster to the Ethereum network.
+- Access to an Ethereum Consensus API
+ - It is recommended to run at least one Ethereum Consensus client for each Charon middleware client.
+ - Any [compliant](https://ethereum.github.io/beacon-APIs/) Beacon node implementation should work – try to establish client diversity.
+  - These consensus clients need to be connected to at least one Ethereum execution client for block production.
+- The public IP address and port the charon client will operate on
+ - For now, we make the (naive) assumption that the address will be static.
+ - Charon will attempt to auto-discover its address on first use by enumerating network interfaces and using [STUN](https://datatracker.ietf.org/doc/html/rfc5389).
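+
+Putting these pieces together, a start-up invocation might look roughly like the sketch below. The flag names are taken from the `charon run` reference elsewhere in this documentation, while the endpoint, paths, and IP address are placeholder values rather than recommendations:
+
+```
+charon run \
+  --manifest-file="./charon/manifest.json" \
+  --data-dir="./charon/data" \
+  --beacon-node-endpoint="http://localhost:5052/" \
+  --p2p-external-ip="203.0.113.10" \
+  --p2p-tcp-address="0.0.0.0:16003"
+```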
diff --git a/docs/versioned_docs/version-v0.3.0/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.3.0/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..702eb982f8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dv/06_peer-discovery.md
@@ -0,0 +1,39 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security, middleware clients need to be able to authenticate one another. We achieve this by giving each middleware client something they can use that other clients in the cluster will be able to recognise as legitimate.
+
+At the end of a [DVK generation ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files output by their CLI program or browser, depending on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a distributed validator.
+- **A distributed validator cluster manifest:** This file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+
+### Authenticating a distributed validator client
+
+During the final stage of the DVK ceremony, after the validator key shares are generated for each operator, the ceremony program will generate a random SECP256K1 key pair to be used by a Charon client for its ENR. The program will sign this ENR public key with every keystore this Charon client will service, indicating to all other operators that this randomly generated key is directly authorised by the current operator to communicate at the consensus layer on behalf of their distributed validator key shares.
+
+This sensitive ENR private key, along with the general configuration of the distributed validator cluster, forms the output of a DVK ceremony known in shorthand as a `cluster manifest`.
+
+This manifest file will be made available to a charon client, and the validator key stores will be made available to the configured validator client. When charon starts up and ingests its configuration from the manifest file, it will use the provided keypair for its ENR. If its configured IP address differs from the one embedded in the ENR, charon reissues the ENR as needed and begins to establish connections with the other operators in the cluster as recorded in the manifest file.
+
+#### Node database
+
+Obol DV clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely.
+
+#### Node discovery
+
+In early versions of Charon, operator nodes will be seeded in the node database from cluster manifest files. Updates to the node database can be made in real time as a Charon client receives messages from these authorised ENRs containing a higher nonce value than the one present in its node database, usually representing an IP address update.
+
+In the future, Charon is intended to discover counterparty nodes using the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) protocol. All DVCs connecting to the Obol Network will join a public node discovery peer-to-peer network to find the latest ENR records for their DV peers.
+
+This dynamic discovery serves two purposes:
+
+- Bootstrapping a node database in the event of data loss, when peers no longer listen on the IP addresses specified in the manifest and need to be located.
+- Periodically refreshing ENRs to follow IP address changes, e.g. AWS EC2 IPs or NAT on residential broadband.
diff --git a/docs/versioned_docs/version-v0.3.0/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.3.0/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..1120ca4f97
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dv/07_p2p-interface.md
@@ -0,0 +1,11 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Nodes must have their keys authorized in a [cluster manifest](./08_distributed-validator-cluster-manifest.md) in order for the handshake to succeed.
diff --git a/docs/versioned_docs/version-v0.3.0/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.3.0/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..50581ec4c8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,60 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Distributed validator cluster manifest
+
+:::warning
+This manifest file is a work in progress and is intended to be standardised for operating distributed validators via the [EIP process](../dvk/01_distributed-validator-keys.md#standardising-the-format-of-dvks) when appropriate.
+:::
+
+The manifest file captures the public, read-only info required to take part in a distributed validator cluster.
+
+One manifest can contain a number of distributed validators being operated by the same group of nodes.
+
+The manifest provides at least the following info:
+
+- ENRs for each participating operator
+ - SECP256K1 public keys
+ - Used to identify a DVC client across the internet
+ - Forms the basis of identity between charon nodes
+- Signatures from each key share authorising its respective operator's ENR to act on its behalf
+ - Used to link validator key shares to DVC ENRs, includes a nonce to allow for ENR key rotation
+- An array of distributed validators operated by this cluster
+ - The BLS public key of the Distributed Validator
+ - The TSS verifiers for the group key, from which BLS public keys can be inferred
+
+## Example manifest
+
+```json5 title="manifest.json"
+{
+ "version": "obol/charon/manifest/0.0.1",
+ "description": "dv/2/threshold/3/peer/5",
+ "distributed_validators": [
+ {
+ "root_pubkey": "0xaf7e10e176ad2cd634009fed0e906e95866d47ec16808cc4df32b3bcfcaffbad9158f52531a086f6d9c54152dc4250da",
+ "threshold_verifiers": [
+ "r34Q4XatLNY0AJ/tDpBulYZtR+wWgIzE3zKzvPyv+62RWPUlMaCG9tnFQVLcQlDa",
+ "jRadEC0L5vp+sYPUvRgp9b4x/nzN1qGkiFA+lgpwNjq3BiJjhhikMKY8HQ1PJ0R2",
+ "uQHdtolDJjjXXwnQikhBx9T9Hp20fPXqOS4hP3nZORhtPlVCCvP8IggANkq9o7hF"
+ ]
+ },
+ {
+ "root_pubkey": "0x82313d1fc1b7e2e361935b977068434226a0bc1ec3680a35669b63378f0154e419b1daba3531b0068a7af3159e0f56d7",
+ "threshold_verifiers": [
+ "gjE9H8G34uNhk1uXcGhDQiagvB7DaAo1ZptjN48BVOQZsdq6NTGwBop68xWeD1bX",
+ "ud3X5IV2KqOkBMWRLGpWsuyRQkK0shUKp8pascNEQE3Vo2ujVfs0O+3dPPbi8CYm",
+ "hQl8KQo8usksZjaE04L6vJRPXrv2k7h542adcK8Ibwhvci1hWwppBc54VvKG9VfL"
+ ]
+ }
+ ],
+ "peers": [
+ "enr:-Ie4QFdd7auMcA6Sht4fD5alWeChra30HTW0eIOr6XkYQtivD2Ev1HNdkyhFnT5LcVKB-2aROv2wAW5EJW5NKLx_fUiAgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQK1OEYKGHj2rkQflwpMENJhr9_AAVIMdgRjp-D7dPVUVIN0Y3ABg3VkcAI=",
+ "enr:-Ie4QP4mbZPiuYMGJxpbV1bb5KwYz69pONum1XLQWJNscrABMZMaR8-mco4vZRwHpJfLV-Xq-2MMmGPcWvKGurzoV8SAgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQPXTrrbopT8F81z6nd9BP6OMaiXdU4hovsGz4alw74JkIN0Y3ADg3VkcAQ=",
+ "enr:-Ie4QCvui6MrHdcZmCmpenVBzfJ7kylTt2gHBvG-C5Hy7fZCXeM2Ct0NUBcuQZUQgKwiRpIza1qVUFUttaWO7RHwDx6AgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQLmzL1T7YS3su4_059MUAQD3Dk8PM8Jh_1qq8jUzeaRWoN0Y3AFg3VkcAY=",
+ "enr:-Ie4QCok_dUP2L9mVUPpLdVl_VLcTwESD7Xd4WYRSbPq__srFVsJT4MPxsQOP68BPXw2IMWvThA6SfBs-PMne__srdSAgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQLITI6sd1v-A1ArY0oBvIjGPsJWjc1XvbIxjWr1jvRSA4N0Y3AHg3VkcAg=",
+ "enr:-Ie4QAaRwPBsUloA1AlLmgjRx-zIHipzM06ioU2hH9Uv-mKRNUfScDInXlPGomDslz3QbAu0gxR-Jgq7d3SKHohjI7SAgmlkgnY0gmlwhH8AAAGJc2VjcDI1NmsxoQNfEIdLwYgnPux1pXBg5enZ8jlIsPzMtHAJH1tnRfeMiYN0Y3AJg3VkcAo="
+ ]
+}
+
+```
diff --git a/docs/versioned_docs/version-v0.3.0/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.3.0/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..0aaa2d92f4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dv/09_charon_cli_reference.md
@@ -0,0 +1,90 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`0.3.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.3.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+```markdown
+charon --help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Starts a p2p-udp discv5 bootnode
+ completion Generate the autocompletion script for the specified shell
+ create-cluster Create a local charon cluster
+ enr Return this node's ENR
+ gen-p2pkey Generates a new p2p key
+ help Help about any command
+ run Runs the Charon middleware
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
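+
+For instance, a new node operator might generate a p2p key and then print the resulting ENR using the subcommands above. This is a sketch only; both commands are shown with their default flag values:
+
+```
+charon gen-p2pkey
+charon enr
+```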
+
+```markdown
+charon create-cluster --help
+Create a local charon cluster including validator keys, charon p2p keys, and a cluster manifest. See flags for supported features.
+
+Usage:
+ charon create-cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default "./charon/cluster")
+ --config Enables creation of local non-docker config files.
+ --config-binary string Path of the charon binary to use in the config files. Defaults to this binary if empty. Requires --config.
+ --config-port-start int Starting port number used in config files. Requires --config. (default 16000)
+ --config-simnet Configures a simulated network cluster with mock beacon node and mock validator clients. It showcases a running charon in isolation. Requires --config. (default true)
+ -h, --help Help for create-cluster
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Enables splitting of existing non-dvt validator keys into distributed threshold private shares (instead of creating new random keys).
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-validator-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+
+```
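+
+For instance, splitting an existing (non-DVT) validator keystore into a four-node, threshold-three cluster might look like the sketch below. The directory paths are placeholders, and only flags listed above are used:
+
+```
+charon create-cluster \
+  --cluster-dir="./charon/cluster" \
+  --nodes=4 \
+  --threshold=3 \
+  --split-existing-keys \
+  --split-keys-dir="./existing-keys"
+```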
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL (default "http://localhost/")
+ --data-dir string The directory where charon will store all its internal data (default "./charon/data")
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing
+ --jaeger-service string Service name used for jaeger tracing (default "charon")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --manifest-file string The path to the manifest file defining distributed validator cluster (default "./charon/manifest.json")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof) (default "127.0.0.1:16001")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootmanifest Enables using manifest ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. Example: enode://@10.3.58.6:30303?discport=30301.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-peerdb string Path to store a discv5 peer database. Empty default results in in-memory database.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API (default "127.0.0.1:16002")
+```
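+
+As an illustration, the simnet flags above can be combined to run a self-contained node with mock beacon and validator clients. The manifest and data paths are placeholders for whatever `create-cluster` produced on your machine:
+
+```
+charon run \
+  --manifest-file="./charon/cluster/node0/manifest.json" \
+  --data-dir="./charon/cluster/node0" \
+  --simnet-beacon-mock \
+  --simnet-validator-mock
+```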
diff --git a/docs/versioned_docs/version-v0.3.0/dv/README.md b/docs/versioned_docs/version-v0.3.0/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.3.0/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.3.0/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..ae776429e4
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,16 @@
+---
+Description: >-
+ An Effort to Accelerate and Standardise the Generation of Distributed
+ Validators
+---
+
+# Distributed validator keys
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a **distributed key generation ceremony**.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
+
+**There is currently an active working group developing DKG.** Further information can be seen on the [working groups](../int/working-groups.md) page.
+
diff --git a/docs/versioned_docs/version-v0.3.0/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.3.0/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..e7b18c1cd8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,13 @@
+---
+Description: A dapp to securely create Distributed Validator keys alone or with a group.
+---
+
+# Distributed validator launchpad
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.3.0/int/working-groups/README.md). To participate in this effort, read through that page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.3.0/dvk/03_dkg_cli_reference.md b/docs/versioned_docs/version-v0.3.0/dvk/03_dkg_cli_reference.md
new file mode 100644
index 0000000000..3d4f1dddeb
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dvk/03_dkg_cli_reference.md
@@ -0,0 +1,88 @@
+---
+Description: >-
+ A rust-based CLI client for hosting and participating in Distributed Validator key generation ceremonies.
+---
+
+# DKG CLI reference
+
+
+:::warning
+
+The `dkg-poc` client is a prototype implementation for generating Distributed Validator Keys. Keys generated with this tool will not work with Charon, and they are not suitable for use.
+
+:::
+
+The following is a reference for `dkg-poc` at commit [`6181fea`](https://github.com/ObolNetwork/dkg-poc/commit/6181feaab2f60bdaaec954f11c04ef49c0b3366a). Find the latest release on our [Github](https://github.com/ObolNetwork/dkg-poc).
+
+`dkg-poc` is implemented as a Rust-based webserver for performing a distributed key generation ceremony. This deployment model raised many user experience and security concerns; for example, it is both hard and likely insecure to set up a TLS-protected webserver at home if you are not a specialist in this area. Further, the PoC is based on an [Aggregatable DKG](https://github.com/kobigurk/aggregatable-dkg) library which is built on sharing a group element rather than a field element, which makes the threshold signing scheme more complex. These factors resulted in the deprecation of this approach, with many valuable insights gained from this client. Currently, a DV launchpad and charon-based DKG flow serves as the intended [DKG architecture](https://github.com/ObolNetwork/charon/blob/main/docs/dkg.md) for creating Distributed Validator Clusters.
+
+```
+$ dkg-poc --help
+
+dkg-poc 0.1.0
+A Distributed Validator Key Generation client for the Obol Network.
+
+USAGE:
+    dkg-poc <SUBCOMMAND>
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+SUBCOMMANDS:
+ help Prints this message or the help of the given subcommand(s)
+ lead Lead a new DKG ceremony
+ participate Participate in a DKG ceremony
+
+```
+
+```
+$ dkg-poc lead --help
+
+dkg-poc-lead 0.1.0
+Lead a new DKG ceremony
+
+USAGE:
+    dkg-poc lead [OPTIONS] --num-participants <num-participants> --threshold <threshold>
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address
+ The address to bind this client to, to participate in the DKG ceremony (Default: 127.0.0.1:8081)
+
+ -e, --enr
+ Provide existing charon ENR for this participant instead of generating a new private key to import
+
+ -n, --num-participants The number of participants taking part in the DKG ceremony
+ -p, --password
+ Password to join the ceremony (Default is to randomly generate a password)
+
+ -t, --threshold
+ Sets the threshold at which point a group of shareholders can create valid signatures
+
+```
+
+```
+$ dkg-poc participate --help
+
+dkg-poc-participate 0.1.0
+Participate in a DKG ceremony
+
+USAGE:
+    dkg-poc participate [OPTIONS] --leader-address <leader-address>
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address The address to bind this client to, to participate in the DKG ceremony
+ (Default: 127.0.0.1:8081)
+ -e, --enr Provide existing charon ENR for this participant instead of generating a new
+ private key to import
+ -l, --leader-address The address of the webserver leading the DKG ceremony
+ -p, --password Password to join the ceremony (Default is to randomly generate a password)
+```
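+
+For historical context, a ceremony with this prototype would have looked roughly like the following, with one leader and several participants. The addresses, counts, and password are placeholder values, and only the options listed above are used:
+
+```
+# Leader (illustrative values only)
+dkg-poc lead --address 0.0.0.0:8081 --num-participants 4 --threshold 3
+
+# Each participant, pointing at the leader's webserver
+dkg-poc participate --leader-address leader.example:8081 --password "ceremony-password"
+```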
diff --git a/docs/versioned_docs/version-v0.3.0/dvk/README.md b/docs/versioned_docs/version-v0.3.0/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.3.0/fr/README.md b/docs/versioned_docs/version-v0.3.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.3.0/fr/eth.md b/docs/versioned_docs/version-v0.3.0/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page collects the material needed to catch up with the current state of Ethereum proof-of-stake development and provides readers with the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or are new to the blockchain world entirely, there are appropriate resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.3.0/fr/golang.md b/docs/versioned_docs/version-v0.3.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.3.0/glossary.md b/docs/versioned_docs/version-v0.3.0/glossary.md
new file mode 100644
index 0000000000..87fbace906
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/glossary.md
@@ -0,0 +1,9 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+
+### Consensus
+A collection of machines coming to agreement on what to sign together.
+
+### Threshold signing
+Being able to sign a message with only a subset of key holders taking part - giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.3.0/int/README.md b/docs/versioned_docs/version-v0.3.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.3.0/int/faq.md b/docs/versioned_docs/version-v0.3.0/int/faq.md
new file mode 100644
index 0000000000..ca366842bc
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/int/faq.md
@@ -0,0 +1,24 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validator clients that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
diff --git a/docs/versioned_docs/version-v0.3.0/int/key-concepts.md b/docs/versioned_docs/version-v0.3.0/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the problem of a single point of failure. Should less than one-third of the participating nodes in the DVT cluster go offline, the remaining active nodes are still able to come to consensus on what to sign and produce valid signatures for their staking duties. For example, a four-node cluster with a signing threshold of three can tolerate one node being offline. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
+
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.3.0/int/overview.md b/docs/versioned_docs/version-v0.3.0/int/overview.md
new file mode 100644
index 0000000000..e178579dbd
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network is best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol as a layer is focused on scaling main chain staking by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing configurations.
+
+Similar to how rollup technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling main chain staking while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer highly available uptime will continue to be a moat and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking as a service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a small-by-count, large-by-stake-weight group of individuals. The long tail of home and small stakers also deserves access to fault tolerant validation, but they may not personally know enough other operators with a sufficient level of trust to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+To add an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of the consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.3.0/int/working-groups.md b/docs/versioned_docs/version-v0.3.0/int/working-groups.md
new file mode 100644
index 0000000000..a644adb3c1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 3
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate single points of technical failure risks on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing the community coordination, participation, and ownership will drive significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research on the demand for and understanding of multi-operator validation. This is a separate research effort carried out by Georgia Rakusen. The research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is user-interview based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete asynchronous survey
+- Pass the survey to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 60%
+- Phase 1: 25%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes in the Ethereum development community is the high levels of support from all client teams around standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group is submitting EIPs for a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV Cluster, with outputs shaped by the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV Launchpad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+The Phase 1 and Phase 1.5 working groups will launch with no initial members, though they will immediately be open to submissions from participants that have joined the Obol Proto community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the proto community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.3.0/intro.md b/docs/versioned_docs/version-v0.3.0/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.3.0/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.3.0/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..c1a650d6da
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://www.dropbox.com/s/z8kpyl5r2lh1ixe/Screenshot%202021-12-26%20at%2013.53.48.png?dl=0) when smart contract initiated exits are enabled.
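+
+For illustration, an `0x01` withdrawal credential is simply a commitment to an execution-layer address: a `0x01` byte, eleven zero bytes, and the 20-byte address of the withdrawal recipient contract. A quick sketch of constructing one in a shell, where the recipient address is a placeholder:
+
+```sh
+# Hypothetical withdrawal recipient contract address.
+WITHDRAWAL_RECIPIENT=0x0123456789abcdef0123456789abcdef01234567
+
+# 0x01 prefix || 11 zero bytes || 20-byte recipient address = 32-byte withdrawal credential.
+echo "0x010000000000000000000000${WITHDRAWAL_RECIPIENT#0x}"
+```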
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+    // Accept ether sent to this contract, e.g. validator withdrawals.
+    receive() external payable {}
+
+    // Transfer the full contract balance to the given recipient; only the owner may call this.
+    function withdraw(address payable recipient) public onlyOwner {
+        recipient.transfer(address(this).balance);
+    }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements OpenZeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which transfers all ether held by the contract to the owner's address (or another address they specify). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
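+
+As a hedged sketch of what calling `withdraw()` might look like from the owner's side, using Foundry's `cast` (a tool not otherwise covered here; the contract address, destination and key below are placeholders):
+
+```sh
+# Placeholders: RECIPIENT_CONTRACT, DESTINATION_ADDRESS, ETH_RPC_URL, OWNER_PRIVATE_KEY.
+# The owner sweeps the recipient contract's full balance to an address of their choosing.
+cast send "$RECIPIENT_CONTRACT" "withdraw(address)" "$DESTINATION_ADDRESS" \
+  --rpc-url "$ETH_RPC_URL" \
+  --private-key "$OWNER_PRIVATE_KEY"
+```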
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system, for example a yearn vault strategy might use an immutable recipient contract as its vault address should never change.
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry is a way for the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 eth deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on chain programmatically, rather than manually with humans triggering exits, a validator registry can be used. Validators whose deposits are activated get an entry in the registry, and validators exited via 0x03 credentials are staged for removal from it. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as of yet unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.3.0/sc/README.md b/docs/versioned_docs/version-v0.3.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.3.0/testnet.md b/docs/versioned_docs/version-v0.3.0/testnet.md
new file mode 100644
index 0000000000..9c8cce3f90
--- /dev/null
+++ b/docs/versioned_docs/version-v0.3.0/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# testnet
+
+## Testnets
+
+
+
+Over the coming quarters, Obol Labs will be coordinating and hosting a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features to be completed by each testnet, and their target start dates and durations.
+
+* [ ] Dev Net 1
+* [ ] Dev Net 2
+* [ ] Athena Public Testnet 1
+* [ ] Bia Attack net
+* [ ] Circe Public Testnet 2
+* [ ] Demeter Red/Blue net
+
+### Devnet 1
+
+The first devnet's aim will be to have a number of trusted operators test out our earliest tutorial flows. The aim is for a single user to complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with the option of adding a single consensus layer client from a weak subjectivity checkpoint (the default will be to connect to our Kiln RPC server; we shouldn't see too much load in this phase). The keys will be created locally in charon, and activated with the existing launchpad or ethdo.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works
+* Get the basics of monitoring in place for the following testnet, where accurate monitoring will be important because charon will be running across a network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The second devnet aim will be to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim will be for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or a place of their choosing, running at least a Kiln consensus client.
+
+As part of this testnet, operators will need to expose charon to the public internet on a static IP address.
+
+This devnet will also be the first time `charon dkg` is tested with users. The launchpad is not anticipated to be complete, and this dkg will be triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+
+A core focus of this devnet will be collecting network performance data. This will be the first time charon runs in variable, non-virtual networks (i.e. the real internet). Effective collection of performance data here will enable gathering even higher-signal performance data at scale during the public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows developed and refined, the goal for public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+This testnet would be intended to include the Distributed Validator Launchpad.
+
+The core focus of this testnet is the onboarding experience. This is the first time we would need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) and in as many languages as possible (we need to engage language moderators on discord).
+
+The core output from this testnet is a large number of typeform submissions to a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and it will form the basis for figuring out a sybil resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** June 2022
+
+**Duration:** 2 week setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet has a consensus implementation as a hard requirement, whereas it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS-vulnerable endpoints and operations, missing signature verifications, and other issues as possible. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** August 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Network:** Merged Test Network (e.g. Görli)
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet II
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined, Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lockfile, and exit data (so we can verify that the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans (not sybil attacking the testnet) and have created legitimate DV keys, their validator will be activated with Obol GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** September 2022 ([Dappcon](https://www.dappcon.io/) runs 12th-14th of Sept. )
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the cyber security industry and makes use of Red Teams and Blue Teams.
+
+In cyber security, the red team is on offence and the blue team is on defence. In Obol's case, operators will be grouped into clusters based on application and assigned to either the red team or the blue team in secret. Once the validators are active, it will be the red team's goal to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse the cluster performs than optimal.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of the time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that, even with directly incentivised byzantine actors, a distributed validator client can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** October 2022 ([Devcon 6](https://devcon.org/en/#road-to-devcon) runs 7th-16th of October. )
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault tolerant validation is real, safe and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v0.4.0/README.md b/docs/versioned_docs/version-v0.4.0/README.md
new file mode 100644
index 0000000000..25d5dd39fa
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.4.0
+
diff --git a/docs/versioned_docs/version-v0.4.0/cg/README.md b/docs/versioned_docs/version-v0.4.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.4.0/cg/bug-report.md b/docs/versioned_docs/version-v0.4.0/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing, to avoid duplicate issues or miscommunication.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use clear, comprehensible language, while also providing all relevant information on hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+ ```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/website/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.4.0/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.4.0/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..52b7c76ef2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dv/01_introducing-charon.md
@@ -0,0 +1,27 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.4.0/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is a Go-based HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validator client and its connected beacon node, intercepting and proxying API traffic. Multiple charon clients are configured to communicate together, come to consensus on validator duties, and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress as long as a supermajority of nodes are working and honest.
+
+### Charon architecture
+
+The graphic below visually outlines the internal functionalities of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet. See [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```
+docker run ghcr.io/obolnetwork/charon:v0.4.0 --help
+```
diff --git a/docs/versioned_docs/version-v0.4.0/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.4.0/dv/02_validator-creation.md
new file mode 100644
index 0000000000..fb97fc6c90
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dv/02_validator-creation.md
@@ -0,0 +1,32 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps:
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+ * This involves setting all of the terms for the cluster, including: the withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a `cluster configuration`.
+ * This operator also sets their charon client's Ethereum Node Record (ENR).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+ * They submit the ENR of their charon client.
+ * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster definition file.
+4. Every operator loads this cluster definition file into `charon dkg`. The definition provides the charon process with the information it needs to complete the DKG ceremony with the other charon clients (see the command sketch after this list).
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A cluster lockfile, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated threshold verifiers. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator exit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares and the cluster lock file.
+7. All operators load the keys and cluster lockfiles generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to ensure connectivity between all operators is observed at a reasonable latency.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
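+
+As a rough sketch of what this flow looks like from a single operator's terminal, using the commands documented in the [Charon CLI reference](./09_charon_cli_reference.md) (file names and paths are illustrative):
+
+```sh
+# Steps 1-2: create this node's ENR key pair and share the printed ENR via the launchpad.
+charon create enr
+
+# Steps 4-5: run the key generation ceremony against the downloaded cluster definition file.
+charon dkg --definition-file=.charon/cluster_definition.json
+
+# Step 6: back up the generated key shares and cluster lock file before going any further.
+
+# Step 7 onwards: start the middleware so the validator client is proxied through charon.
+charon run
+```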
diff --git a/docs/versioned_docs/version-v0.4.0/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.4.0/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..f8e8bad3b3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dv/04_middleware-daemon.md
@@ -0,0 +1,17 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware Architecture
+
+
+
+The Charon daemon sits as a middleware between the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/) and any downstream validator clients.
+
+### Operation
+
+The middleware strives to be stateless and statically configured through files on disk. The lack of a control-plane API for online reconfiguration is deliberate, to keep operations simple and secure by default.
+
+The daemon offers a config reload instruction through Unix signals, which is useful for joining or leaving Obol clusters on the fly without interruption.
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
diff --git a/docs/versioned_docs/version-v0.4.0/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.4.0/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..70b5626cc3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dv/06_peer-discovery.md
@@ -0,0 +1,37 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security and sybil-resistance, charon clients need to be able to authenticate one another. We achieve this by giving each charon client a public/private key pair that they can sign with such that other clients in the cluster will be able to recognise them as legitimate no matter which IP address they communicate from.
+
+At the end of a [DKG ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster.lock` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit and exit data:** These files are used to activate and deactivate (exit) a distributed validator on the Ethereum network.
+
+### Authenticating a distributed validator client
+
+Before a DKG process begins, all operators must run `charon create enr`, or just `charon enr`, to create or get the Ethereum Node Record for their client. These ENRs are included in the configuration of a key generation ceremony.
+
+The file that outlines a DKG ceremony is known as a [`cluster_definition`](./08_distributed-validator-cluster-manifest.md) file. This file is passed to `charon dkg` which uses it to create private keys, a cluster lock file and deposit and exit data for the configured number of distributed validators. The cluster.lock file will be made available to `charon run`, and the validator key stores will be made available to the configured validator client.
+
+When `charon run` starts up and ingests its configuration from the `cluster.lock` file, it checks whether its observed/configured public IP address differs from what is listed in the lock file. If it is different, it updates the IP address, increments the nonce of the ENR, and reissues it before beginning to establish connections with the other operators in the cluster.
+
+#### Node database
+
+Distributed Validator Clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely. This is acceptable as the number of operators in a cluster is expected to stay constant. Mutable cluster operators will be introduced in future.
+
+#### Node discovery
+
+At boot, a charon client will ingest its configured `cluster.lock` file. This file contains a list of ENRs of the client's peers. The client will attempt to establish a connection with these peers, and will perform a handshake if they connect.
+
+However, the IP addresses within an ENR can become stale. This could result in a cluster not being able to establish a connection with all nodes. To be tolerant to operator IP addresses changing, charon also supports the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) discovery protocol. This allows a charon client to find another operator that might have moved IP address, but still retains the same ENR private key.
+
+
diff --git a/docs/versioned_docs/version-v0.4.0/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.4.0/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..73f4bd18da
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dv/07_p2p-interface.md
@@ -0,0 +1,13 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Identity: [Ethereum Node Records](https://eips.ethereum.org/EIPS/eip-778).
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Each charon client must have their ENR public key authorized in a [cluster.lock](./08_distributed-validator-cluster-manifest.md) file in order for the client handshake to succeed.
+- Discovery: [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md).
diff --git a/docs/versioned_docs/version-v0.4.0/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.4.0/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..63123d6cd9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,67 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Cluster Configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client (or cluster) locally or in production.
+
+## Cluster Configuration Files
+
+A charon cluster is configured in two steps:
+- `cluster_definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster.lock` which includes and extends `cluster_definition.json` with distributed validator bls public key shares and verifiers.
+
+The `charon create dkg` command is used to create `cluster_definition.json` file which is used as input to `charon dkg`.
+
+The `charon create cluster` command combines both steps into one and just outputs the final `cluster_lock.json` without a DKG step.
+
+The schema of the `cluster_definition.json` is defined as:
+```json
+{
+ "version": "v1.0.0", // Schema version
+ "num_validators": 100, // Number of validators to create in cluster.lock
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier
+ "name": "best cluster", // Optional name field, cosmetic.
+ "fee_recipient_address":"0x123..abfc",// ETH1 fee_recipient address
+ "withdrawal_address": "0x123..abfc", // ETH1 withdrawal address
+ "algorithm": "foo_dkg_v1" , // Optional DKG algorithm
+ "fork_version": "0x00112233", // Fork version lock, enum of known values
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 operator identify address
+ "enr": "enr://abcdef...12345", // charon client ENR
+ "signature": "123456...abcdef", // Signature of enr by ETH1 address priv key
+ "nonce": 1 // Nonce of signature
+ }
+ ],
+ "definition_hash": "abcdef...abcedef",// Hash of above field (except free text)
+ "operator_signatures": [ // Operator signatures (seals) of definition hash
+ "123456...abcdef",
+ "123456...abcdef"
+ ]
+}
+```
+
+The above `cluster_definition.json` is provided as input to the DKG which generates keys and the `cluster_lock.json` file.
+
+The `cluster_lock.json` has the following schema:
+```json
+{
+ "cluster_definition": {...}, // Cluster definition JSON, identical schema to above
+ "distributed_validators": [ // Length equal to num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "threshold_verifiers": [ "oA8Z...2XyT", "g1q...icu"], // length of threshold
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
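+
+Assuming `jq` is available, both files can be quickly inspected from the command line. The field names below come directly from the schemas above, while the file names are illustrative:
+
+```sh
+# List the operator ENRs recorded in a cluster definition.
+jq -r '.operators[].enr' cluster_definition.json
+
+# List the group public key of every distributed validator in a cluster lock file.
+jq -r '.distributed_validators[].distributed_public_key' cluster_lock.json
+```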
diff --git a/docs/versioned_docs/version-v0.4.0/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.4.0/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..ad2a1a4023
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dv/09_charon_cli_reference.md
@@ -0,0 +1,206 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.4.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.4.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+### Available Commands
+
+The following are the top-level commands available to use.
+
+```markdown
+charon help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Start a discv5 bootnode server
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ create-cluster Create a local charon cluster [DEPRECATED]
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print this client's Ethereum Node Record
+ gen-p2pkey Generates a new p2p key [DEPRECATED]
+ help Help about any command
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+### `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+
+```
+
+#### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ -h, --help Help for enr
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootmanifest Enables using manifest ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. Example: enode://@10.3.58.6:30303?discport=30301.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-peerdb string Path to store a discv5 peer database. Empty default results in in-memory database.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+```
+
+#### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster.lock` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, and a cluster manifest. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default ".charon/cluster")
+ --config Enables creation of local non-docker config files.
+ --config-binary string Path of the charon binary to use in the config files. Defaults to this binary if empty. Requires --config.
+ --config-port-start int Starting port number used in config files. Requires --config. (default 16000)
+ --config-simnet Configures a simulated network cluster with mock beacon node and mock validator clients. It showcases a running charon in isolation. Requires --config. (default true)
+ -h, --help Help for cluster
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Enables splitting of existing non-dvt validator keys into distributed threshold private shares (instead of creating new random keys).
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-validator-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+```
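+
+For example, a purely local test cluster might be created as follows; the flag values are illustrative, and the threshold follows the minimum formula from the flag description above, `n-(ceil(n/3)-1) = 4-(2-1) = 3` for 4 nodes:
+
+```sh
+# Create a 4-node local cluster with threshold 3 and simnet config files (illustrative values).
+charon create cluster --nodes 4 --threshold 3 --cluster-dir .charon/cluster --config --config-simnet
+```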
+
+#### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster_definition.json` file that is used as input to the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --fee_recipient_address string Optional Ethereum address of the fee recipient
+ --fork_version string Optional hex fork version identifying the target network/chain
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator_enrs strings Comma-separated list of each operator's Charon ENR address
+ --output-dir string The folder to write the output cluster_definition.json file to. (default ".")
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal_address string Withdrawal Ethereum address
+```
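+
+A hedged example invocation, using only the flags listed above, with placeholder addresses and truncated operator ENRs (replace these with your cluster's real values):
+
+```sh
+# Addresses and ENRs below are placeholders, not real values.
+charon create dkg \
+  --name="example cluster" \
+  --num-validators=1 \
+  --withdrawal_address="0x0000000000000000000000000000000000000000" \
+  --fee_recipient_address="0x0000000000000000000000000000000000000000" \
+  --operator_enrs="enr://operator1...,enr://operator2...,enr://operator3...,enr://operator4..." \
+  --output-dir="."
+```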
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster_definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit and exit data for each new distributed validator. The command outputs the `cluster.lock` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ --definition-file string The path to the cluster definition file. (default ".charon/cluster_definition.json")
+ -h, --help Help for dkg
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootmanifest Enables using manifest ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. Example: enode://@10.3.58.6:30303?discport=30301.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-peerdb string Path to store a discv5 peer database. Empty default results in in-memory database.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+```
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster.lock` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL (default "http://localhost/")
+ --data-dir string The directory where charon will store all its internal data (default "./charon/data")
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing
+ --jaeger-service string Service name used for jaeger tracing (default "charon")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --manifest-file string The path to the manifest file defining distributed validator cluster (default "./charon/manifest.json")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof) (default "127.0.0.1:16001")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootmanifest Enables using manifest ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. Example: enode://@10.3.58.6:30303?discport=30301.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-peerdb string Path to store a discv5 peer database. Empty default results in in-memory database.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API (default "127.0.0.1:16002")
+```
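+
+A minimal sketch of starting the middleware, using only the flags listed above and assuming a locally reachable beacon node (the endpoint, port and paths are illustrative):
+
+```sh
+# Endpoint, port and file paths are illustrative; adjust them to your deployment.
+charon run \
+  --beacon-node-endpoint="http://localhost:5052" \
+  --manifest-file="./charon/manifest.json" \
+  --validator-api-address="127.0.0.1:16002" \
+  --monitoring-address="127.0.0.1:16001"
+```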
diff --git a/docs/versioned_docs/version-v0.4.0/dv/README.md b/docs/versioned_docs/version-v0.4.0/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.4.0/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.4.0/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..bf3d926969
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,119 @@
+---
+description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+---
+
+# Distributed Validator Key Generation
+
+## Contents
+
+- [Overview](#overview)
+- [Actors involved](#actors-involved)
+- [Cluster Definition creation](#cluster-definition-creation)
+- [Carrying out the DKG ceremony](#carrying-out-the-dkg-ceremony)
+- [Backing up ceremony artifacts](#backing-up-the-ceremony-artifacts)
+- [Preparing for validator activation](#preparing-for-validator-activation)
+- [DKG verification](#dkg-verification)
+- [Appendix](#appendix)
+
+## Overview
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Thanks to the BLS signature scheme used by proof-of-stake Ethereum, a distributed validator with no fault tolerance (i.e. all nodes need to be online to sign every message) could be made from key shares chosen by each operator independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together. (4 randomly chosen points on a graph don't all necessarily sit on the same order three curve.) To do this in a secure manner, with no one party trusted to distribute the keys, requires what is known as a distributed key generation ceremony.
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://docs.obol.tech/docs/dv/distributed-validator-cluster-manifest).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign with this address's private key to authenticate their charon client ahead of the ceremony. The signature will be of: a hash of the charon client's ENR public key, the `cluster_definition_hash`, and an incrementing `nonce`. This allows for a direct linkage between a user, their charon client, and the cluster this client is intended to service, while retaining the ability to update the charon client by incrementing the nonce value and re-signing, as in the standard ENR spec.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p-noise). These keys need to be created by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This definition file is created with the help of the [Distributed Validator Launchpad](https://docs.obol.tech/docs/dvk/distributed_validator_launchpad). The creation process involves a number of steps.
+
+- A `leader` Operator who wishes to co-ordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader` uses the user interface to configure all of the important details about the cluster including:
+ - The `withdrawal address` for the created validators
+ - The `feeRecipient` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required (if not choosing the safe default)
+ - The network (fork_version/chainId) that this cluster will validate on
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like the DKG algorithm to use) are serialised and merklised to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the leader is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralised backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralisation of the launchpad.)
+- The leader will then share the URL to this ceremony with their intended participants.
+- Anyone that clicks the ceremony url, or inputs the `cluster_definition_hash` when prompted on the landing page will be brought to the ceremony status page. (After completing all disclaimers and advisories)
+- A "Connect Wallet" button will be visible beneath the ceremony status container, a participant can click on it to connect their wallet to the site
+ - If the participant connects a wallet that is not in the participant list, the button disables, as there is nothing to do
+ - If the participant connects a wallet that is in the participant list, they get prompted to input the ENR of their charon node.
+ - If the ENR field is populated and validated the participant can now see a "Confirm Cluster Configuration" button. This button triggers one/two signatures.
+ - The participant signs the `cluster_definition_hash`, to prove they are consenting to this exact configuration.
+ - The participant signs their charon node's ENR, to authenticate and authorise that specific charon node to participate on their behalf in the distributed validator cluster.
+ - These/this signature is sent to the data availability layer, which verifies the signatures are correct for the given participant's Ethereum address. If the signatures pass validation, the signature of the definition hash and the ENR + signature get saved to the definition object.
+- All participants in the list must sign the definition hash and submit a signed ENR before a DKG ceremony can begin. The outstanding signatures can be easily displayed on the status page.
+- Finally, once all participants have signed their approval, and submitted a charon node ENR to act on their behalf, the definition data can be downloaded as a file if the users click a newly displayed button, `Download Manifest`.
+- At this point each participant must load this definition into their charon client, and the client will attempt to complete the DKG.
+
+## Carrying out the DKG ceremony
+
+Once a participant has their definition file prepared, they will pass the file to charon's `dkg` command. Charon will read the ENRs in the definition, confirm that its ENR is present, and then reach out to the deployed bootnodes to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which will update the IP address, increment the ENR's nonce, and re-sign with the client's private key. If an ENR with a higher nonce is seen by a charon client, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required, charon does the work and outputs the following files to each machine and then exits.
+
+```sh
+./cluster_definition.json # The original definition file from the DV Launchpad
+./cluster.lock # New lockfile based on cluster_definition.json with validator group public keys and threshold BLS verifiers included with the initial cluster config
+./charon/enr_private_key # Created before the ceremony took place [Back this up]
+./charon/validator_keys/ # Folder of key shares to be backed up and moved to validator client [Back this up]
+./charon/deposit_data # JSON file of deposit data for the distributed validators
+./charon/exit_data # JSON file of exit data that ethdo can broadcast
+```
+
+## Backing up the ceremony artifacts
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participant's old keys out of a distributed validator in favour of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However, for now, without a backup, the safest thing to do would be to exit the validator.
+
+## Preparing for validator activation
+
+Once the ceremony is complete and secure backups of the key shares have been made, each operator must load these key shares into their validator client and run the `charon run` command to put charon into operational mode.
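+
+A minimal sketch of this final step (the `--data-dir` flag is an assumption; real deployments are usually configured through the quickstart repositories' compose and environment files rather than a bare command):
+
+```sh
+# Illustrative only: point charon at the directory produced by the DKG and
+# leave it running alongside the beacon node and validator client.
+charon run --data-dir=./charon
+```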
+
+All operators should confirm that their charon client logs indicate all nodes are online and connected. They should also verify the readiness of their beacon clients and validator clients. Charon's grafana dashboard is a good way to see the readiness of the full cluster from its perspective.
+
+Once all operators are satisfied with network connectivity, one member can use the Obol Distributed Validator deposit flow to send the required ether and deposit data to the deposit contract, beginning the process of a distributed validator activation. Good luck.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+  - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Using DKG without the launchpad
+
+Charon clients can do a DKG with a definition file that does not contain operator signatures if you pass a `--no-verify` flag to `charon dkg`. This can be used for testing purposes when strict signature verification is not of the utmost importance.
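+
+For example, a throwaway test ceremony might be run as follows (the definition file path and `--definition-file` flag are illustrative assumptions; only `--no-verify` is documented above):
+
+```sh
+# Skips operator signature verification; only suitable for testing.
+charon dkg --definition-file=./cluster_definition.json --no-verify
+```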
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../dv/08_distributed-validator-cluster-manifest.md#cluster-configuration-files).
+
diff --git a/docs/versioned_docs/version-v0.4.0/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.4.0/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..8d754b2add
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,15 @@
+---
+Description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Distributed Validator launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.4.0/int/working-groups/README.md). To participate in this effort, read through the page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.4.0/dvk/03_dkg_cli_reference.md b/docs/versioned_docs/version-v0.4.0/dvk/03_dkg_cli_reference.md
new file mode 100644
index 0000000000..3d4f1dddeb
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dvk/03_dkg_cli_reference.md
@@ -0,0 +1,88 @@
+---
+Description: >-
+  A Rust-based CLI client for hosting and participating in Distributed Validator key generation ceremonies.
+---
+
+# DKG CLI reference
+
+
+:::warning
+
+The `dkg-poc` client is a prototype implementation for generating Distributed Validator Keys. Keys generated with this tool will not work with Charon, and they are not suitable for use.
+
+:::
+
+The following is a reference for `dkg-poc` at commit [`6181fea`](https://github.com/ObolNetwork/dkg-poc/commit/6181feaab2f60bdaaec954f11c04ef49c0b3366a). Find the latest release on our [Github](https://github.com/ObolNetwork/dkg-poc).
+
+`dkg-poc` is implemented as a Rust-based webserver for performing a distributed key generation ceremony. This deployment model ended up raising many user experience and security concerns; for example, it is both hard and likely insecure to set up a TLS-protected webserver at home if you are not a specialist in this area. Further, the PoC is based on an [Aggregatable DKG](https://github.com/kobigurk/aggregatable-dkg) library which is built on sharing a group element rather than a field element, making the threshold signing scheme more complex. These factors resulted in the deprecation of this approach, with many valuable insights gained from this client. Currently, a DV Launchpad and charon-based DKG flow serves as the intended [DKG architecture](https://github.com/ObolNetwork/charon/blob/main/docs/dkg.md) for creating Distributed Validator Clusters.
+
+```
+$ dkg-poc --help
+
+dkg-poc 0.1.0
+A Distributed Validator Key Generation client for the Obol Network.
+
+USAGE:
+ dkg-poc <SUBCOMMAND>
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+SUBCOMMANDS:
+ help Prints this message or the help of the given subcommand(s)
+ lead Lead a new DKG ceremony
+ participate Participate in a DKG ceremony
+
+```
+
+```
+$ dkg-poc lead --help
+
+dkg-poc-lead 0.1.0
+Lead a new DKG ceremony
+
+USAGE:
+ dkg-poc lead [OPTIONS] --num-participants <num-participants> --threshold <threshold>
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address <address>
+ The address to bind this client to, to participate in the DKG ceremony (Default: 127.0.0.1:8081)
+
+ -e, --enr <enr>
+ Provide existing charon ENR for this participant instead of generating a new private key to import
+
+ -n, --num-participants <num-participants>    The number of participants taking part in the DKG ceremony
+ -p, --password <password>
+ Password to join the ceremony (Default is to randomly generate a password)
+
+ -t, --threshold <threshold>
+ Sets the threshold at which point a group of shareholders can create valid signatures
+
+```
+
+```
+$ dkg-poc participate --help
+
+dkg-poc-participate 0.1.0
+Participate in a DKG ceremony
+
+USAGE:
+ dkg-poc participate [OPTIONS] --leader-address <leader-address>
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address <address>                  The address to bind this client to, to participate in the DKG ceremony
+ (Default: 127.0.0.1:8081)
+ -e, --enr <enr>                          Provide existing charon ENR for this participant instead of generating a new
+ private key to import
+ -l, --leader-address <leader-address>    The address of the webserver leading the DKG ceremony
+ -p, --password <password>                Password to join the ceremony (Default is to randomly generate a password)
+```
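+
+For illustration, based on the help output above, a four-participant ceremony with a threshold of three might have been run like this (all addresses and the password are placeholders; remember this client is deprecated and shown for historical reference only):
+
+```sh
+# Leader starts the ceremony and waits for participants to join.
+dkg-poc lead --num-participants 4 --threshold 3 --address 127.0.0.1:8081 --password "example-password"
+
+# Each remaining participant joins, pointing at the leader's address.
+dkg-poc participate --leader-address 127.0.0.1:8081 --address 127.0.0.1:8082 --password "example-password"
+```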
diff --git a/docs/versioned_docs/version-v0.4.0/dvk/README.md b/docs/versioned_docs/version-v0.4.0/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.4.0/fr/README.md b/docs/versioned_docs/version-v0.4.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.4.0/fr/eth.md b/docs/versioned_docs/version-v0.4.0/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page serves material necessary to catch up with the current state of Ethereum proof-of-stake development and provides readers with the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or are new to the blockchain world entirely, there are appropriate resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.4.0/fr/golang.md b/docs/versioned_docs/version-v0.4.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.4.0/glossary.md b/docs/versioned_docs/version-v0.4.0/glossary.md
new file mode 100644
index 0000000000..87fbace906
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/glossary.md
@@ -0,0 +1,9 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+
+### Consensus
+A collection of machines coming to agreement on what to sign together
+
+### Threshold signing
+Being able to sign a message with only a subset of key holders taking part - giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.4.0/int/README.md b/docs/versioned_docs/version-v0.4.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.4.0/int/faq.md b/docs/versioned_docs/version-v0.4.0/int/faq.md
new file mode 100644
index 0000000000..ca366842bc
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/int/faq.md
@@ -0,0 +1,24 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validator clients that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
diff --git a/docs/versioned_docs/version-v0.4.0/int/key-concepts.md b/docs/versioned_docs/version-v0.4.0/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the problem of a single point of failure. Should fewer than 33% of the participating nodes in the DVT cluster go offline, the remaining active nodes can still come to consensus on what to sign and produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission-critical systems.
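+
+As a rough illustration of that arithmetic (the exact threshold formula is a cluster configuration choice and is not defined on this page), a BFT-style cluster of `n` nodes commonly uses a signing threshold of `n - floor((n - 1) / 3)`:
+
+```sh
+# Illustrative fault-tolerance arithmetic for a 4-node cluster.
+n=4
+threshold=$(( n - (n - 1) / 3 ))      # nodes required to sign
+faults_tolerated=$(( n - threshold )) # nodes that can be offline
+echo "n=$n threshold=$threshold faults_tolerated=$faults_tolerated"
+# -> n=4 threshold=3 faults_tolerated=1
+```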
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
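+
+Because a distributed validator client serves the same standardised REST API to the validator client, it can be queried directly with standard tooling. A minimal sketch, assuming charon exposes this API locally on port 3600 and serves the `node/version` endpoint (both assumptions about a particular deployment, not part of the specification):
+
+```sh
+# Query a standard beacon-node API endpoint through the DV client.
+curl http://localhost:3600/eth/v1/node/version
+```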
+
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.4.0/int/overview.md b/docs/versioned_docs/version-v0.4.0/int/overview.md
new file mode 100644
index 0000000000..e178579dbd
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol as a layer is focused on scaling main chain staking by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that preserve validators' current client and remote signing configurations.
+
+Similar to how rollup technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling main chain staking while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking as a service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators with a sufficient level of trust to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus-layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.4.0/int/working-groups.md b/docs/versioned_docs/version-v0.4.0/int/working-groups.md
new file mode 100644
index 0000000000..a644adb3c1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 3
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate single points of technical failure risks on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing the community coordination, participation, and ownership will drive significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research into the demand for and understanding of multi-operator validation. This is a separate research effort that will be undertaken by Georgia Rakusen. The research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is user-interview based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete an asynchronous survey
+- Pass the survey to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 60%
+- Phase 1: 25%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes in the Ethereum development community is the high levels of support from all client teams around standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group intends to submit an EIP for a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV Cluster, with outputs based on the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV Launchpad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+The Phase 1 & Phase 1.5 working groups will launch with no initial members, though they will immediately be open for submissions by participants that have joined the Obol Proto Community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the proto community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.4.0/intro.md b/docs/versioned_docs/version-v0.4.0/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.4.0/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.4.0/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..c1a650d6da
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://www.dropbox.com/s/z8kpyl5r2lh1ixe/Screenshot%202021-12-26%20at%2013.53.48.png?dl=0) when smart contract initiated exits are enabled.
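+
+For reference, `0x01` withdrawal credentials commit to a 20-byte execution-layer address, such as a deployed withdrawal recipient contract. A minimal sketch of their layout, using a placeholder address:
+
+```sh
+# 0x01 credentials = 0x01 prefix byte + 11 zero bytes + 20-byte recipient address.
+RECIPIENT_ADDRESS="abcdef0123456789abcdef0123456789abcdef01"  # placeholder, hex without 0x
+WITHDRAWAL_CREDENTIALS="0x010000000000000000000000${RECIPIENT_ADDRESS}"
+echo "$WITHDRAWAL_CREDENTIALS"
+```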
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+    receive() external payable {}
+
+    function withdraw(address payable recipient) public onlyOwner {
+        recipient.transfer(address(this).balance);
+    }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements Open Zeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which pulls all ether from the contract into the owner's address (or another specified address). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
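+
+As a usage sketch only (Foundry's `cast` is used here purely for illustration and is not part of this contract suite; all addresses and keys are placeholders), the owner could trigger a withdrawal like so:
+
+```sh
+# Owner calls withdraw(address), sweeping the contract balance to RECIPIENT.
+CONTRACT=0x0000000000000000000000000000000000000001   # placeholder recipient contract
+RECIPIENT=0x0000000000000000000000000000000000000002  # placeholder destination
+cast send "$CONTRACT" "withdraw(address)" "$RECIPIENT" \
+  --rpc-url "$ETH_RPC_URL" --private-key "$OWNER_KEY"
+```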
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system, for example a yearn vault strategy might use an immutable recipient contract as its vault address should never change.
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry is a way for the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 eth deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on chain programmatically rather than manually with humans triggering exits, a validator registry can be used. Deposits that get activated receive an entry in the validator registry, and validators using 0x03 exits get staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as-yet-unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.4.0/sc/README.md b/docs/versioned_docs/version-v0.4.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.4.0/testnet.md b/docs/versioned_docs/version-v0.4.0/testnet.md
new file mode 100644
index 0000000000..9c8cce3f90
--- /dev/null
+++ b/docs/versioned_docs/version-v0.4.0/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# testnet
+
+## Testnets
+
+
+
+Over the coming quarters, Obol Labs will be coordinating and hosting a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features that are to be complete by each testnet, and their target start dates and durations.
+
+## Testnet roadmap
+
+* [ ] Dev Net 1
+* [ ] Dev Net 2
+* [ ] Athena Public Testnet 1
+* [ ] Bia Attack net
+* [ ] Circe Public Testnet 2
+* [ ] Demeter Red/Blue net
+
+### Devnet 1
+
+The aim of the first devnet will be to have a number of trusted operators test out our earliest tutorial flows. A single user should be able to complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with the option of adding a single consensus layer client from a weak subjectivity checkpoint (the default will be to connect to our Kiln RPC server; we shouldn't get too much load for this phase). The keys will be created locally in charon, and activated with the existing launchpad or ethdo.
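+
+A hedged sketch of what this solo flow might look like for a tester (the repository name and compose layout are assumptions; the actual devnet instructions will be published with the tutorial):
+
+```sh
+# Placeholder repository and service layout, for illustration only.
+git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+cd charon-distributed-validator-cluster
+docker compose up -d    # spins up multiple charon clients and validator clients locally
+docker compose logs -f  # watch the cluster come online
+```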
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works
+* Get the basics of monitoring in place for the following testnet, where accurate monitoring will be important due to charon running across a network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The second devnet aim will be to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim will be for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or place of choosing, running at least a Kiln consensus client.
+
+As part of this testnet, operators will need to expose charon to the public internet on a static IP address.
+
+This devnet will also be the first time `charon dkg` is tested with users. The launchpad is not anticipated to be complete, and this dkg will be triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
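+
+A hedged sketch of how a single operator might produce that config file (the flag names and values are assumptions and may not match the charon version used in this devnet):
+
+```sh
+# Illustrative only: generates a DKG definition for a 4-operator cluster.
+charon create dkg \
+  --name="devnet-2-cluster" \
+  --num-validators=1 \
+  --threshold=3 \
+  --operator-enrs="<ENR-1>,<ENR-2>,<ENR-3>,<ENR-4>"  # the four participants' charon ENRs
+```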
+
+A core focus of this devnet will be to collect network performance data. This will be the first time charon runs in variable, non-virtual networks (i.e. the real internet). Effective collection of performance data in this devnet will enable gathering even higher-signal performance data at scale during public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+This testnet would be intended to include the Distributed Validator Launchpad.
+
+The core focus of this testnet is the onboarding experience. This is the first time we would need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) and in as many languages as possible (we need to engage language moderators on Discord).
+
+The core output from this testnet is a large number of typeform submissions, for a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and will form the basis for figuring out a sybil resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** June 2022
+
+**Duration:** 2 week setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet needs a consensus implementation as a hard requirement, whereas it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS vulnerable endpoints and operations, missing signature verifications and more. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** August 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet 2
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined, Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lockfile, and exit data (so we can verify the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans (not sybil attacking the testnet) and have created legitimate DV keys, their validator will be activated with Obol GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** September 2022 ([Dappcon](https://www.dappcon.io/) runs 12th-14th of Sept. )
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the cyber security industry and makes use of Red Teams and Blue Teams.
+
+In cyber security, the red team is on offence and the blue team is on defence. In Obol's case, operators will be grouped into clusters based on application and assigned to either the red team or the blue team in secret. Once the validators are active, it will be the red teamers' goal to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse the cluster performs than optimal.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that, even with directly incentivised byzantine actors, a distributed validator client can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** October 2022 ([Devcon 6](https://devcon.org/en/#road-to-devcon) runs 7th-16th of October. )
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault tolerant validation is real, safe and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v0.5.0/README.md b/docs/versioned_docs/version-v0.5.0/README.md
new file mode 100644
index 0000000000..2ecb3878a6
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.5.0
+
diff --git a/docs/versioned_docs/version-v0.5.0/cg/README.md b/docs/versioned_docs/version-v0.5.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.5.0/cg/bug-report.md b/docs/versioned_docs/version-v0.5.0/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
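+
+If you have the [GitHub CLI](https://cli.github.com/) installed, the same search can also be run from a terminal; the keywords below are only an illustration:
+
+```sh
+# Search open and closed charon issues for related keywords (example terms only)
+gh issue list --repo ObolNetwork/charon --search "beacon node timeout" --state all
+```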
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use comprehensible language, while also providing all relevant information on hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+    1. Oliver Twist.
+    2. Nicholas Nickleby.
+    3. David Copperfield.
+2. J.R.R Tolkien books:
+    1. The Hobbit.
+    2. Silmarillion.
+    3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/website/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.5.0/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.5.0/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..2b1a1409ec
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dv/01_introducing-charon.md
@@ -0,0 +1,29 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.5.0/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is a GoLang-based HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+### Charon architecture
+
+The graphic below visually outlines the internal functionalities of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet; see [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```sh
+docker run ghcr.io/obolnetwork/charon:v0.4.0 --help
+```
+
+For more information on running charon, take a look at our [quickstart guide](../int/quickstart.md).
diff --git a/docs/versioned_docs/version-v0.5.0/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.5.0/dv/02_validator-creation.md
new file mode 100644
index 0000000000..fb97fc6c90
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dv/02_validator-creation.md
@@ -0,0 +1,32 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps:
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+ * This involves setting all of the terms for the cluster, including: withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a `cluster configuration`.
+ * This operator also sets their charon client's Ethereum Node Record (ENR).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+ * They submit the ENR of their charon client.
+ * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster definition file.
+4. Every operator loads this cluster definition file into `charon dkg`. The definition provides the charon process with the information it needs to complete the DKG ceremony with the other charon clients.
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A cluster lockfile, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated threshold verifiers. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator exit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares and definition.lock file.
+7. All operators load the keys and cluster lockfiles generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to ensure connectivity between all operators at a reasonable latency is observed.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
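+
+As a rough sketch, the operator-side charon commands behind these stages look like the following (the default file paths documented in the CLI reference are assumed):
+
+```sh
+charon create enr                                              # stages 1-2: generate this node's ENR to submit on the launchpad
+charon dkg --definition-file=.charon/cluster_definition.json   # stages 4-5: run the key generation ceremony with the other operators
+charon run                                                     # stage 7 onwards: start the middleware once key shares are loaded into the validator client
+```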
diff --git a/docs/versioned_docs/version-v0.5.0/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.5.0/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..f8e8bad3b3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dv/04_middleware-daemon.md
@@ -0,0 +1,17 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware Architecture
+
+
+
+The Charon daemon sits as a middleware between the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/) and any downstream validator clients.
+
+### Operation
+
+The middleware strives to be stateless and statically configured through files on disk. The lack of a control-plane API for online reconfiguration is deliberate to keep operations simple and secure by default.
+
+The daemon offers a config reload instruction through Unix signals, which is useful for joining or leaving Obol clusters on the fly without interruption.
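+
+For example, a reload could be triggered by signalling the running process; the exact signal is an assumption here, and the compose service name is hypothetical:
+
+```sh
+# Send a reload signal to a locally running charon process (SIGHUP assumed)
+kill -HUP "$(pidof charon)"
+
+# Or, for a docker compose deployment with a service named "charon"
+docker compose kill --signal=SIGHUP charon
+```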
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
diff --git a/docs/versioned_docs/version-v0.5.0/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.5.0/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..70b5626cc3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dv/06_peer-discovery.md
@@ -0,0 +1,37 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security and sybil-resistance, charon clients need to be able to authenticate one another. We achieve this by giving each charon client a public/private key pair that they can sign with such that other clients in the cluster will be able to recognise them as legitimate no matter which IP address they communicate from.
+
+At the end of a [DKG ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster.lock` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit and exit data:** These files are used to activate and deactivate (exit) a distributed validator on the Ethereum network.
+
+### Authenticating a distributed validator client
+
+Before a DKG process begins, all operators must run `charon create enr`, or just `charon enr`, to create or get the Ethereum Node Record for their client. These ENRs are included in the configuration of a key generation ceremony.
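+
+For example, using the published docker image (the image tag and mounted path are illustrative, not requirements):
+
+```sh
+# Create an ENR private key inside a mounted .charon directory
+docker run --rm -v "$(pwd)/.charon:/opt/charon/.charon" \
+  ghcr.io/obolnetwork/charon:v0.4.0 create enr --data-dir=/opt/charon/.charon/data
+```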
+
+The file that outlines a DKG ceremony is known as a [`cluster_definition`](./08_distributed-validator-cluster-manifest.md) file. This file is passed to `charon dkg` which uses it to create private keys, a cluster lock file and deposit and exit data for the configured number of distributed validators. The cluster.lock file will be made available to `charon run`, and the validator key stores will be made available to the configured validator client.
+
+When `charon run` starts up and ingests its configuration from the `cluster.lock` file, it checks if its observed/configured public IP address differs from what is listed in the lock file. If it is different, it updates the IP address, increments the nonce of the ENR and reissues it before beginning to establish connections with the other operators in the cluster.
+
+#### Node database
+
+Distributed Validator Clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely. This is acceptable as the number of operators in a cluster is expected to stay constant. Mutable cluster operators will be introduced in future.
+
+#### Node discovery
+
+At boot, a charon client will ingest its configured `cluster.lock` file. This file contains a list of ENRs of the client's peers. The client will attempt to establish a connection with these peers, and will perform a handshake if they connect.
+
+However, the IP addresses within an ENR can become stale. This could result in a cluster not being able to establish a connection with all nodes. To be tolerant of operator IP addresses changing, charon also supports the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) discovery protocol. This allows a charon client to find another operator that might have moved IP address, but still retains the same ENR private key.
+
+
diff --git a/docs/versioned_docs/version-v0.5.0/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.5.0/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..73f4bd18da
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dv/07_p2p-interface.md
@@ -0,0 +1,13 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Identity: [Ethereum Node Records](https://eips.ethereum.org/EIPS/eip-778).
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Each charon client must have their ENR public key authorized in a [cluster.lock](./08_distributed-validator-cluster-manifest.md) file in order for the client handshake to succeed.
+- Discovery: [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md).
diff --git a/docs/versioned_docs/version-v0.5.0/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.5.0/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..63123d6cd9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,67 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Cluster Configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client (or cluster) locally or in production.
+
+## Cluster Configuration Files
+
+A charon cluster is configured in two steps:
+- `cluster_definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster.lock` which includes and extends `cluster_definition.json` with distributed validator BLS public key shares and verifiers.
+
+The `charon create dkg` command is used to create the `cluster_definition.json` file which is used as input to `charon dkg`.
+
+The `charon create cluster` command combines both steps into one and just outputs the final `cluster_lock.json` without a DKG step.
+
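+As a sketch, the two paths look like this (all flag values are placeholders; see the [charon CLI reference](./09_charon_cli_reference.md) for the full flag list):
+
+```sh
+# Group setup: one operator creates the definition, then every operator runs the DKG
+charon create dkg --num-validators 1 --operator_enrs "enr:-...,enr:-..." --withdrawal_address "0x000...000" --output-dir .
+charon dkg --definition-file ./cluster_definition.json
+
+# Solo setup: create keys and the lock file locally in one step, without a DKG
+charon create cluster --nodes 4 --threshold 3
+```
+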
+The schema of the `cluster_definition.json` is defined as:
+```json
+{
+ "version": "v1.0.0", // Schema version
+ "num_validators": 100, // Number of validators to create in cluster.lock
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier
+ "name": "best cluster", // Optional name field, cosmetic.
+ "fee_recipient_address":"0x123..abfc",// ETH1 fee_recipient address
+ "withdrawal_address": "0x123..abfc", // ETH1 withdrawal address
+ "algorithm": "foo_dkg_v1" , // Optional DKG algorithm
+ "fork_version": "0x00112233", // Fork version lock, enum of known values
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 operator identify address
+ "enr": "enr://abcdef...12345", // charon client ENR
+ "signature": "123456...abcdef", // Signature of enr by ETH1 address priv key
+ "nonce": 1 // Nonce of signature
+ }
+ ],
+ "definition_hash": "abcdef...abcedef",// Hash of above field (except free text)
+ "operator_signatures": [ // Operator signatures (seals) of definition hash
+ "123456...abcdef",
+ "123456...abcdef"
+ ]
+}
+```
+
+The above `cluster_definition.json` is provided as input to the DKG which generates keys and the `cluster_lock.json` file.
+
+The `cluster_lock.json` has the following schema:
+```json
+{
+ "cluster_definition": {...}, // Cluster definition JSON, identical schema to above,
+ "distributed_validators": [ // Length equal to num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "threshold_verifiers": [ "oA8Z...2XyT", "g1q...icu"], // length of threshold
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
diff --git a/docs/versioned_docs/version-v0.5.0/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.5.0/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..54f9fffa5e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dv/09_charon_cli_reference.md
@@ -0,0 +1,204 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.4.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.4.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+### Available Commands
+
+The following are the top-level commands available to use.
+
+```markdown
+charon help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Start a discv5 bootnode server
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print this client's Ethereum Node Record
+ help Help about any command
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+### `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+
+```
+
+#### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ -h, --help Help for enr
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootmanifest Enables using manifest ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. Example: enode://@10.3.58.6:30303?discport=30301.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-peerdb string Path to store a discv5 peer database. Empty default results in in-memory database.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+```
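+
+For example, a minimal invocation might look like the following; the top-level `charon enr` command can then print the resulting record to share with the rest of the cluster:
+
+```sh
+# Write a new ENR private key to the default data directory, then print the public record
+charon create enr
+charon enr
+```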
+
+#### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster.lock` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, and a cluster manifest. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default ".charon/cluster")
+ --config Enables creation of local non-docker config files.
+ --config-binary string Path of the charon binary to use in the config files. Defaults to this binary if empty. Requires --config.
+ --config-port-start int Starting port number used in config files. Requires --config. (default 16000)
+ --config-simnet Configures a simulated network cluster with mock beacon node and mock validator clients. It showcases a running charon in isolation. Requires --config. (default true)
+ -h, --help Help for cluster
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Enables splitting of existing non-dvt validator keys into distributed threshold private shares (instead of creating new random keys).
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-validator-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+```
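+
+For example, a local test cluster of four nodes with the default 3-of-4 threshold might be created with (the target directory is arbitrary):
+
+```sh
+charon create cluster --nodes 4 --threshold 3 --cluster-dir ./local-cluster
+```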
+
+#### Creating the configuration for a DKG Ceremony
+
+This `charon create dkg` command creates a cluster_definition file used for the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --fee_recipient_address string Optional Ethereum address of the fee recipient
+ --fork_version string Optional hex fork version identifying the target network/chain
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator_enrs strings Comma-separated list of each operator's Charon ENR address
+ --output-dir string The folder to write the output cluster_definition.json file to. (default ".")
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal_address string Withdrawal Ethereum address
+```
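+
+A hypothetical invocation might look like the following; every value shown is a placeholder to be replaced with your cluster's real operator ENRs and addresses:
+
+```sh
+charon create dkg \
+  --num-validators 1 \
+  --operator_enrs "enr:-JG4...aaa,enr:-JG4...bbb,enr:-JG4...ccc,enr:-JG4...ddd" \
+  --fee_recipient_address "0x0000000000000000000000000000000000000000" \
+  --withdrawal_address "0x0000000000000000000000000000000000000000" \
+  --output-dir .
+```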
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster_definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit and exit data for each new distributed validator. The command outputs the `cluster.lock` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ --definition-file string The path to the cluster definition file. (default ".charon/cluster_definition.json")
+ -h, --help Help for dkg
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootmanifest Enables using manifest ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. Example: enode://@10.3.58.6:30303?discport=30301.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-peerdb string Path to store a discv5 peer database. Empty default results in in-memory database.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+```
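+
+For example, each operator might run the following at roughly the same time, pointing at the shared definition file (the paths shown are the documented defaults):
+
+```sh
+charon dkg --definition-file=.charon/cluster_definition.json --data-dir=.charon/data
+```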
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster.lock` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL (default "http://localhost/")
+ --data-dir string The directory where charon will store all its internal data (default "./charon/data")
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing
+ --jaeger-service string Service name used for jaeger tracing (default "charon")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --manifest-file string The path to the manifest file defining distributed validator cluster (default "./charon/manifest.json")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof) (default "127.0.0.1:16001")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootmanifest Enables using manifest ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. Example: enode://@10.3.58.6:30303?discport=30301.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-peerdb string Path to store a discv5 peer database. Empty default results in in-memory database.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API (default "127.0.0.1:16002")
+```
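+
+A sketch of a typical invocation follows; the beacon node endpoint and file paths are illustrative and should be adjusted for your deployment:
+
+```sh
+charon run \
+  --manifest-file=./charon/manifest.json \
+  --beacon-node-endpoint="http://localhost:5052/" \
+  --validator-api-address="127.0.0.1:16002"
+```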
diff --git a/docs/versioned_docs/version-v0.5.0/dv/README.md b/docs/versioned_docs/version-v0.5.0/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.5.0/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.5.0/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..bf3d926969
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,119 @@
+---
+Description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+---
+
+# Distributed Validator Key Generation
+
+## Contents
+
+- [Overview](#overview)
+- [Actors involved](#actors-involved)
+- [Cluster Definition creation](#cluster-definition-creation)
+- [Carrying out the DKG ceremony](#carrying-out-the-dkg-ceremony)
+- [Backing up ceremony artifacts](#backing-up-the-ceremony-artifacts)
+- [Preparing for validator activation](#preparing-for-validator-activation)
+- [DKG verification](#dkg-verification)
+- [Appendix](#appendix)
+
+## Overview
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+To make a distributed validator with no fault-tolerance (i.e. all nodes need to be online to sign every message), due to the BLS signature scheme used by Proof of Stake Ethereum, each key share could be chosen by operators independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together. (4 randomly chosen points on a graph don't all necessarily sit on the same order three curve.) To do this in a secure manner with no one party being trusted to distribute the keys requires what is known as a distributed key generation ceremony.
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://docs.obol.tech/docs/dv/distributed-validator-cluster-manifest).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign with this address's private key to authenticate their charon client ahead of the ceremony. The signature will be over a hash of the charon client's ENR public key, the `cluster_definition_hash`, and an incrementing `nonce`, allowing for a direct linkage between a user, their charon client, and the cluster this client is intended to service, while retaining the ability to update the charon client by incrementing the nonce value and re-signing like the standard ENR spec.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p-noise). These keys need to be created by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This definition file is created with the help of the [Distributed Validator Launchpad](https://docs.obol.tech/docs/dvk/distributed_validator_launchpad). The creation process involves a number of steps.
+
+- A `leader` Operator, that wishes to co-ordinate the creation of a new Distributed Validator Cluster navigates to the launch pad and selects "Create new Cluster"
+- The `leader` uses the user interface to configure all of the important details about the cluster including:
+ - The `withdrawal address` for the created validators
+ - The `feeRecipient` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required (if not choosing the safe default)
+ - The network (fork_version/chainId) that this cluster will validate on
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like the DKG algorithm to use) are serialised and merklised to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the leader is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralised backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralisation of the launchpad.)
+- The leader will then share the URL to this ceremony with their intended participants.
+- Anyone that clicks the ceremony url, or inputs the `cluster_definition_hash` when prompted on the landing page will be brought to the ceremony status page. (After completing all disclaimers and advisories)
+- A "Connect Wallet" button will be visible beneath the ceremony status container, a participant can click on it to connect their wallet to the site
+ - If the participant connects a wallet that is not in the participant list, the button disables, as there is nothing to do
+ - If the participant connects a wallet that is in the participant list, they get prompted to input the ENR of their charon node.
+ - If the ENR field is populated and validated the participant can now see a "Confirm Cluster Configuration" button. This button triggers one/two signatures.
+ - The participant signs the `cluster_definition_hash`, to prove they are consenting to this exact configuration.
+ - The participant signs their charon node's ENR, to authenticate and authorise that specific charon node to participate on their behalf in the distributed validator cluster.
+ - These/this signature is sent to the data availability layer, where it verifies the signatures are correct for the given participant's Ethereum address. If the signatures pass validation, the signature of the definition hash and the ENR + signature get saved to the definition object.
+- All participants in the list must sign the definition hash and submit a signed ENR before a DKG ceremony can begin. The outstanding signatures can be easily displayed on the status page.
+- Finally, once all participants have signed their approval, and submitted a charon node ENR to act on their behalf, the definition data can be downloaded as a file if the users click a newly displayed button, `Download Manifest`.
+- At this point each participant must load this definition into their charon client, and the client will attempt to complete the DKG.
+
+## Carrying out the DKG ceremony
+
+Once a participant has their definition file prepared, they will pass the file to charon's `dkg` command. Charon will read the ENRs in the definition, confirm that its ENR is present, and then will reach out to bootnodes that are deployed to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which will update the IP address, increment the ENR's nonce and re-sign with the client's private key. If an ENR with a higher nonce is seen by a charon client, they will update the IP address of that ENR in their address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required, charon does the work and outputs the following files to each machine and then exits.
+
+```sh
+./cluster_definition.json # The original definition file from the DV Launchpad
+./cluster.lock # New lockfile based on cluster_definition.json with validator group public keys and threshold BLS verifiers included with the initial cluster config
+./charon/enr_private_key # Created before the ceremony took place [Back this up]
+./charon/validator_keys/ # Folder of key shares to be backed up and moved to validator client [Back this up]
+./charon/deposit_data # JSON file of deposit data for the distributed validators
+./charon/exit_data # JSON file of exit data that ethdo can broadcast
+```
+
+## Backing up the ceremony artifacts
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participants old keys out of a distributed validator in favour of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However for now, without a backup, the safest thing to do would be to exit the validator.
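+
+One possible backup, assuming the ceremony outputs sit in the current directory as listed above, is to archive them to offline storage (naming and destination are up to you):
+
+```sh
+tar czf dkg-artifacts-backup.tar.gz cluster_definition.json cluster.lock charon/
+```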
+
+## Preparing for validator activation
+
+Once the ceremony is complete and secure backups of the key shares have been made by each operator, they must load these key shares into their validator clients and run the `charon run` command to bring the node into operational mode.
+
+All operators should confirm that their charon client logs indicate all nodes are online and connected. They should also verify the readiness of their beacon clients and validator clients. Charon's grafana dashboard is a good way to see the readiness of the full cluster from its perspective.
+
+Once all operators are satisfied with network connectivity, one member can use the Obol Distributed Validator deposit flow to send the required ether and deposit data to the deposit contract, beginning the process of a distributed validator activation. Good luck.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+ - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Using DKG without the launchpad
+
+Charon clients can do a DKG with a definition file that does not contain operator signatures if you pass a `--no-verify` flag to `charon dkg`. This can be used for testing purposes when strict signature verification is not of the utmost importance.
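+
+For example (testing only, with the definition file assumed to be in the working directory):
+
+```sh
+charon dkg --definition-file ./cluster_definition.json --no-verify
+```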
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../dv/08_distributed-validator-cluster-manifest.md#cluster-configuration-files).
+
diff --git a/docs/versioned_docs/version-v0.5.0/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.5.0/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..a604cf3add
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,15 @@
+---
+Description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Distributed Validator launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.5.0/int/working-groups/README.md). To participate in this effort, read through the page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.5.0/dvk/03_dkg_cli_reference.md b/docs/versioned_docs/version-v0.5.0/dvk/03_dkg_cli_reference.md
new file mode 100644
index 0000000000..3d4f1dddeb
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dvk/03_dkg_cli_reference.md
@@ -0,0 +1,88 @@
+---
+Description: >-
+ A rust-based CLI client for hosting and participating in Distributed Validator key generation ceremonies.
+---
+
+# DKG CLI reference
+
+
+:::warning
+
+The `dkg-poc` client is a prototype implementation for generating Distributed Validator Keys. Keys generated with this tool will not work with Charon, and they are not suitable for use.
+
+:::
+
+The following is a reference for `dkg-poc` at commit [`6181fea`](https://github.com/ObolNetwork/dkg-poc/commit/6181feaab2f60bdaaec954f11c04ef49c0b3366a). Find the latest release on our [Github](https://github.com/ObolNetwork/dkg-poc).
+
+`dkg-poc` is implemented as a rust-based webserver for performing a distributed key generation ceremony. This deployment model ended up raising many user experience and security concerns; for example, it is both hard and likely insecure to set up a TLS-protected webserver at home if you are not a specialist in this area. Further, the PoC is based on an [Aggregatable DKG](https://github.com/kobigurk/aggregatable-dkg) library which is built on sharing a group element rather than a field element, which makes the threshold signing scheme more complex as a result. These factors resulted in the deprecation of this approach, with many valuable insights gained from this client. Currently, a DV Launchpad and charon-based DKG flow serves as the intended [DKG architecture](https://github.com/ObolNetwork/charon/blob/main/docs/dkg.md) for creating Distributed Validator Clusters.
+
+```
+$ dkg-poc --help
+
+dkg-poc 0.1.0
+A Distributed Validator Key Generation client for the Obol Network.
+
+USAGE:
+ dkg-poc
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+SUBCOMMANDS:
+ help Prints this message or the help of the given subcommand(s)
+ lead Lead a new DKG ceremony
+ participate Participate in a DKG ceremony
+
+```
+
+```
+$ dkg-poc lead --help
+
+dkg-poc-lead 0.1.0
+Lead a new DKG ceremony
+
+USAGE:
+ dkg-poc lead [OPTIONS] --num-participants --threshold
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address
+ The address to bind this client to, to participate in the DKG ceremony (Default: 127.0.0.1:8081)
+
+ -e, --enr
+ Provide existing charon ENR for this participant instead of generating a new private key to import
+
+ -n, --num-participants The number of participants taking part in the DKG ceremony
+ -p, --password
+ Password to join the ceremony (Default is to randomly generate a password)
+
+ -t, --threshold
+ Sets the threshold at which point a group of shareholders can create valid signatures
+
+```
+
+```
+$ dkg-poc participate --help
+
+dkg-poc-participate 0.1.0
+Participate in a DKG ceremony
+
+USAGE:
+ dkg-poc participate [OPTIONS] --leader-address
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address The address to bind this client to, to participate in the DKG ceremony
+ (Default: 127.0.0.1:8081)
+ -e, --enr Provide existing charon ENR for this participant instead of generating a new
+ private key to import
+ -l, --leader-address The address of the webserver leading the DKG ceremony
+ -p, --password Password to join the ceremony (Default is to randomly generate a password)
+```
diff --git a/docs/versioned_docs/version-v0.5.0/dvk/README.md b/docs/versioned_docs/version-v0.5.0/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.5.0/fr/README.md b/docs/versioned_docs/version-v0.5.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.5.0/fr/eth.md b/docs/versioned_docs/version-v0.5.0/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page serves material necessary to catch up with the current state of Ethereum proof-of-stake development and provides readers with the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or are new to the blockchain world entirely, there are appropriate resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.5.0/fr/golang.md b/docs/versioned_docs/version-v0.5.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.5.0/glossary.md b/docs/versioned_docs/version-v0.5.0/glossary.md
new file mode 100644
index 0000000000..87fbace906
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/glossary.md
@@ -0,0 +1,9 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+
+### Consensus
+A collection of machines coming to agreement on what to sign together
+
+### Threshold signing
+Being able to sign a message with only a subset of the key holders taking part, giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.5.0/int/README.md b/docs/versioned_docs/version-v0.5.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.5.0/int/faq.md b/docs/versioned_docs/version-v0.5.0/int/faq.md
new file mode 100644
index 0000000000..ca366842bc
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/int/faq.md
@@ -0,0 +1,24 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validator clients that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
diff --git a/docs/versioned_docs/version-v0.5.0/int/key-concepts.md b/docs/versioned_docs/version-v0.5.0/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the single point of failure problem. Should <33% of the participating nodes in the DVT cluster go offline, the remaining active nodes are still able to come to consensus on what to sign and produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission-critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
+
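+As a rough illustration of the middleware idea (not charon's actual implementation details), a validator client, or even `curl`, keeps speaking the same standard beacon API it always has; it is simply pointed at the distributed validator client instead of directly at a beacon node. The host and port below are placeholders:
+
+```sh
+# Query a standard beacon API endpoint through the DV client middleware
+# (placeholder host/port; substitute your own charon client's validator-facing API)
+curl http://localhost:3600/eth/v1/node/version
+```
+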
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.5.0/int/overview.md b/docs/versioned_docs/version-v0.5.0/int/overview.md
new file mode 100644
index 0000000000..e178579dbd
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move on to the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol as a layer is focused on scaling main chain staking by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing configurations.
+
+Similar to how rollup technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling main chain staking while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking-as-a-service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators whom they trust sufficiently to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+To add an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of the consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.5.0/int/quickstart.md b/docs/versioned_docs/version-v0.5.0/int/quickstart.md
new file mode 100644
index 0000000000..8a4dd3a00c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/int/quickstart.md
@@ -0,0 +1,69 @@
+---
+sidebar_position: 4
+description: Take part in a distributed validator cluster
+---
+
+# Quickstart
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet.
+:::
+
+There are two ways to test out a distributed validator.
+
+* Running the full cluster alone.
+* Running one node in a cluster with a group of other node operators.
+
+## Run a cluster alone
+
+1. Clone the [starter repo](https://github.com/ObolNetwork/charon-docker-compose) and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-docker-compose.git
+
+ # Change directory
+ cd charon-docker-compose/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+   For simplicity's sake, this repo is configured to work with a remote beacon node, such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Create the artifacts needed to run a testnet distributed validator cluster
+
+ ```sh
+ # Create a testnet distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:latest create cluster --cluster-dir=".charon/cluster" --withdrawal-address="0x000000000000000000000000000000000000dead"
+ ```
+4. Start the cluster
+
+ ```sh
+ # Start the distributed validator cluster
+ docker-compose up
+ ```
+5. Check out the monitoring dashboard and see if things look all right
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+6. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+   * If you use macOS, `.charon`, the default output folder, does not show up on the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This should display hidden folders, allowing you to select the deposit file.
+
+Congratulations, if this all worked, you are now running a distributed validator cluster on a testnet. Try turning off one of the four nodes and check whether the validator stays online or begins missing duties, to see for yourself the fault tolerance that Distributed Validator Technology adds to proof-of-stake validation.
+
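+For example, a quick way to simulate a node failure with docker compose (this is a sketch; the first listed service may not be a charon node, so pick an appropriate one from the output):
+
+```sh
+# List the services that make up the cluster
+docker-compose ps --services
+
+# Stop one of the listed services (here simply the first one) to simulate a node failure
+docker-compose stop "$(docker-compose ps --services | head -n 1)"
+
+# Bring it back once you have observed the cluster's behaviour in Grafana
+docker-compose start "$(docker-compose ps --services | head -n 1)"
+```
+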
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.\*
+
+\*_Once charon creates validator exit data in an upcoming release._
+:::
+
+## Run a cluster with others
+
+This section will be completed alongside version `v0.6.0`. Sit tight.
diff --git a/docs/versioned_docs/version-v0.5.0/int/working-groups.md b/docs/versioned_docs/version-v0.5.0/int/working-groups.md
new file mode 100644
index 0000000000..1ebf4332a9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 5
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate single points of technical failure on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing community coordination, participation, and ownership will have a significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research on the demand for and understanding of multi-operator validation. This will be a separate research effort undertaken by Georgia Rakusen. This research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is user-interview based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete an asynchronous survey
+- Pass the survey to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 60%
+- Phase 1: 25%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes in the Ethereum development community is the high level of support from all client teams around standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group is submitting a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV cluster, with outputs based on the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV Launchpad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+Phase 1 and Phase 1.5 will launch with no initial members, though they will immediately be open for applications from participants that have joined the Obol Proto Community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the Proto Community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.5.0/intro.md b/docs/versioned_docs/version-v0.5.0/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.5.0/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.5.0/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..c1a650d6da
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://www.dropbox.com/s/z8kpyl5r2lh1ixe/Screenshot%202021-12-26%20at%2013.53.48.png?dl=0) when smart contract initiated exits are enabled.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+ receive() external payable {}
+
+ function withdraw(address payable recipient) public onlyOwner {
+ recipient.transfer(address(this).balance);
+ }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements OpenZeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which pulls all ether from the contract into the owner's address (or another specified address). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient, except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system; for example, a Yearn vault strategy might use an immutable recipient contract, as its vault address should never change.
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry allows the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However, when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 ETH deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on-chain programmatically rather than manually with humans triggering exits, a validator registry can be used. Deposits that get activated receive an entry in the validator registry, and validators using 0x03 exits get staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as-yet-unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.5.0/sc/README.md b/docs/versioned_docs/version-v0.5.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.5.0/testnet.md b/docs/versioned_docs/version-v0.5.0/testnet.md
new file mode 100644
index 0000000000..9c8cce3f90
--- /dev/null
+++ b/docs/versioned_docs/version-v0.5.0/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# testnet
+
+## Testnets
+
+
+
+Over the coming quarters, Obol Labs will be coordinating and hosting a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features that are to be completed by each testnet, and their target start dates and durations.
+
+## Testnets
+
+* [ ] Dev Net 1
+* [ ] Dev Net 2
+* [ ] Athena Public Testnet 1
+* [ ] Bia Attack net
+* [ ] Circe Public Testnet 2
+* [ ] Demeter Red/Blue net
+
+### Devnet 1
+
+The aim of the first devnet will be to have a number of trusted operators test out our earliest tutorial flows. A single user will complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with the option of adding a single consensus layer client from a weak subjectivity checkpoint (the default will be to connect to our Kiln RPC server; we shouldn't get too much load for this phase). The keys will be created locally in charon, and activated with the existing launchpad or ethdo.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works
+* Get the basics of monitoring in place for the following testnet, where accurate monitoring will be important due to charon running across a network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The aim of the second devnet will be to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim will be for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or place of choosing, running at least a Kiln consensus client.
+
+As part of this testnet, operators will need to expose charon to the public internet on a static IP address.
+
+This devnet will also be the first time `charon dkg` is tested with users. The launchpad is not anticipated to be complete, and this dkg will be triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+
+A core focus of this devnet will be to collect network performance data. This will be the first time charon runs in variable, non-virtual networks (i.e. the real internet). Effective collection of performance data in this devnet will enable gathering even higher-signal performance data at scale during public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+This testnet would be intended to include the Distributed Validator Launchpad.
+
+The core focus of this testnet is the onboarding experience. This is the first time we would need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) in as many languages as possible (need to engage language moderators on discord).
+
+The core output from this testnet is a large number of typeform submissions, for a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and will form the basis for figuring out a sybil resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** June 2022
+
+**Duration:** 2 week setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet needs a consensus implementation as a hard requirement, where it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS-vulnerable endpoints and operations, and missing signature verifications as possible. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** August 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet 2
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined, Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lockfile, and exit data (so we can verify the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans (not sybil attacking the testnet) and have created legitimate DV keys, their validator will be activated with Obol GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** September 2022 ([Dappcon](https://www.dappcon.io/) runs 12th-14th of Sept. )
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the cyber security industry and makes use of Red Teams and Blue Teams.
+
+In cyber security, the red team is on offense and the blue team is on defence. In Obol's case, operators will be grouped into clusters based on application and assigned to either the red team or the blue team in secret. Once the validators are active, it will be the red team's goal to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse the cluster performs than optimal.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of the time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that even with directly incentivised byzantine actors, a distributed validator client can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** October 2022 ([Devcon 6](https://devcon.org/en/#road-to-devcon) runs 7th-16th of October. )
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault tolerant validation is real, safe and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v0.6.0/README.md b/docs/versioned_docs/version-v0.6.0/README.md
new file mode 100644
index 0000000000..c7d17af85f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.6.0
+
diff --git a/docs/versioned_docs/version-v0.6.0/cg/README.md b/docs/versioned_docs/version-v0.6.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.6.0/cg/bug-report.md b/docs/versioned_docs/version-v0.6.0/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use comprehensible language, while also providing all relevant information on hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+ ## What is Charon?
+
+
+
+ ## Charon explained
+ ```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/website/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.6.0/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.6.0/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..f17452c37a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dv/01_introducing-charon.md
@@ -0,0 +1,29 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.6.0/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is an HTTP middleware written in Go, built by Obol to enable existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave together as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+### Charon architecture
+
+The graphic below outlines the internal functionality of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet; see [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```
+docker run ghcr.io/obolnetwork/charon:v0.6.0 --help
+```
+
+For more information on running charon, take a look at our [quickstart guide](../int/quickstart.md).
diff --git a/docs/versioned_docs/version-v0.6.0/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.6.0/dv/02_validator-creation.md
new file mode 100644
index 0000000000..fb97fc6c90
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dv/02_validator-creation.md
@@ -0,0 +1,32 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps:
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+   * This involves setting all of the terms for the cluster, including: withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a `cluster configuration`.
+ * This operator also sets their charon client's Ethereum Node Record (ENR).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+   * They submit the ENR of their charon client.
+   * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster definition file.
+4. Every operator loads this cluster definition file into `charon dkg`. The definition provides the charon process with the information it needs to complete the DKG ceremony with the other charon clients.
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A cluster lockfile, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated threshold verifiers. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator exit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares and definition.lock file.
+7. All operators load the keys and cluster lockfiles generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to ensure connectivity between all operators at a reasonable latency is observed.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
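+
+As a rough sketch of the operator-side commands behind these stages (reusing the docker image from the introduction; flags and file locations are omitted and the exact invocations may differ between releases):
+
+```sh
+# 1. Create this node's ENR so it can be registered in the cluster configuration
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.6.0 create enr
+
+# 2. Run the distributed key generation ceremony once the cluster definition
+#    file has been placed in the mounted directory
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.6.0 dkg
+
+# 3. Start the node with the resulting cluster lock file and validator key shares
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.6.0 run
+```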
diff --git a/docs/versioned_docs/version-v0.6.0/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.6.0/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..f8e8bad3b3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dv/04_middleware-daemon.md
@@ -0,0 +1,17 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware Architecture
+
+
+
+The Charon daemon sits as a middleware between the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/) and any downstream validator clients.
+
+### Operation
+
+The middleware strives to be stateless and statically configured through files on disk. The lack of a control-plane API for online reconfiguration is deliberate to keep operations simple and secure by default.
+
+A config reload instruction through Unix signals may be offered down the line, which could be useful for joining or leaving Obol clusters on the fly without interruption.
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
diff --git a/docs/versioned_docs/version-v0.6.0/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.6.0/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..70b5626cc3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dv/06_peer-discovery.md
@@ -0,0 +1,37 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security and sybil-resistance, charon clients need to be able to authenticate one another. We achieve this by giving each charon client a public/private key pair that they can sign with such that other clients in the cluster will be able to recognise them as legitimate no matter which IP address they communicate from.
+
+At the end of a [DKG ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster.lock` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit and exit data:** These files are used to activate and deactivate (exit) a distributed validator on the Ethereum network.
+
+### Authenticating a distributed validator client
+
+Before a DKG process begins, all operators must run `charon create enr`, or just `charon enr`, to create or get the Ethereum Node Record for their client. These ENRs are included in the configuration of a key generation ceremony.
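+
+For example, using the charon docker image, an operator might create their ENR with something like the following (a sketch; in practice you would also mount a volume and set `--data-dir` so the generated private key persists outside the container):
+
+```sh
+docker run --rm ghcr.io/obolnetwork/charon:v0.6.0 create enr
+```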
+
+The file that outlines a DKG ceremony is known as a [`cluster_definition`](./08_distributed-validator-cluster-manifest.md) file. This file is passed to `charon dkg` which uses it to create private keys, a cluster lock file and deposit and exit data for the configured number of distributed validators. The cluster.lock file will be made available to `charon run`, and the validator key stores will be made available to the configured validator client.
+
+When `charon run` starts up and ingests its configuration from the `cluster.lock` file, it checks if its observed/configured public IP address differs from what is listed in the lock file. If it is different, it updates the IP address, increments the nonce of the ENR, and reissues it before beginning to establish connections with the other operators in the cluster.
+
+#### Node database
+
+Distributed Validator Clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely. This is acceptable as the number of operators in a cluster is expected to stay constant. Mutable cluster operators will be introduced in future.
+
+#### Node discovery
+
+At boot, a charon client will ingest its configured `cluster.lock` file. This file contains a list of ENRs of the client's peers. The client will attempt to establish a connection with these peers, and will perform a handshake if they connect.
+
+However, the IP addresses within an ENR can become stale, which could prevent a cluster from establishing connections with all of its nodes. To be tolerant of operator IP addresses changing, charon also supports the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) discovery protocol. This allows a charon client to find another operator that may have moved IP address but still retains the same ENR private key.
+
+
diff --git a/docs/versioned_docs/version-v0.6.0/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.6.0/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..73f4bd18da
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dv/07_p2p-interface.md
@@ -0,0 +1,13 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Identity: [Ethereum Node Records](https://eips.ethereum.org/EIPS/eip-778).
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Each charon client must have its ENR public key authorized in a [cluster.lock](./08_distributed-validator-cluster-manifest.md) file in order for the client handshake to succeed.
+- Discovery: [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md).
diff --git a/docs/versioned_docs/version-v0.6.0/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.6.0/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..63123d6cd9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,67 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Cluster Configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client (or cluster) locally or in production.
+
+## Cluster Configuration Files
+
+A charon cluster is configured in two steps:
+- `cluster_definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster.lock` which includes and extends `cluster_definition.json` with distributed validator BLS public key shares and verifiers.
+
+The `charon create dkg` command is used to create the `cluster_definition.json` file, which is used as input to `charon dkg`.
+
+The `charon create cluster` command combines both steps into one and just outputs the final `cluster_lock.json` without a DKG step.
+
+The schema of the `cluster_definition.json` is defined as:
+```json
+{
+ "version": "v1.0.0", // Schema version
+ "num_validators": 100, // Number of validators to create in cluster.lock
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier
+ "name": "best cluster", // Optional name field, cosmetic.
+ "fee_recipient_address":"0x123..abfc",// ETH1 fee_recipient address
+ "withdrawal_address": "0x123..abfc", // ETH1 withdrawal address
+ "algorithm": "foo_dkg_v1" , // Optional DKG algorithm
+ "fork_version": "0x00112233", // Fork version lock, enum of known values
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 operator identify address
+ "enr": "enr://abcdef...12345", // charon client ENR
+ "signature": "123456...abcdef", // Signature of enr by ETH1 address priv key
+ "nonce": 1 // Nonce of signature
+ }
+ ],
+ "definition_hash": "abcdef...abcedef",// Hash of above field (except free text)
+ "operator_signatures": [ // Operator signatures (seals) of definition hash
+ "123456...abcdef",
+ "123456...abcdef"
+ ]
+}
+```
+
+The above `cluster_definition.json` is provided as input to the DKG which generates keys and the `cluster_lock.json` file.
+
+The `cluster_lock.json` has the following schema:
+```json
+{
+ "cluster_definition": {...}, // Cluster definiition json, identical schema to above,
+ "distributed_validators": [ // Length equaled to num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "threshold_verifiers": [ "oA8Z...2XyT", "g1q...icu"], // length of threshold
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
diff --git a/docs/versioned_docs/version-v0.6.0/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.6.0/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..81b0ae5685
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dv/09_charon_cli_reference.md
@@ -0,0 +1,203 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.6.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.6.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+### Available Commands
+
+The following are the top-level commands available to use.
+
+```markdown
+charon help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Start a discv5 bootnode server
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print this client's Ethereum Node Record
+ help Help about any command
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+### `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+
+```
+
+#### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ -h, --help Help for enr
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+```
+
+#### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster.lock` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, and a cluster manifest. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default ".charon/cluster")
+ -h, --help Help for cluster
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Ethereum address to receive the returned stake and accrued rewards. (default "0x0000000000000000000000000000000000000000")
+```
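+
+As a sketch, a solo operator might generate a four node cluster for the Prater testnet with something along these lines (the withdrawal address shown is just the documented default placeholder):
+
+```sh
+charon create cluster \
+  --nodes=4 \
+  --threshold=3 \
+  --network=prater \
+  --withdrawal-address="0x0000000000000000000000000000000000000000" \
+  --cluster-dir=".charon/cluster"
+```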
+
+#### Creating the configuration for a DKG Ceremony
+
+This `charon create dkg` command creates a cluster_definition file used for the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, keycast, frost (default "default")
+ --fee-recipient-address string Optional Ethereum address of the fee recipient
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings Comma-separated list of each operator's Charon ENR address
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".")
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Withdrawal Ethereum address (default "0x0000000000000000000000000000000000000000")
+```
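+
+For example, a leader preparing a ceremony for four operators might run something like the following (a sketch; the ENRs are placeholders for the values each operator shares from `charon create enr`, and the addresses are placeholders too):
+
+```sh
+charon create dkg \
+  --name="best cluster" \
+  --num-validators=1 \
+  --network=prater \
+  --operator-enrs="enr://abcd...,enr://efgh...,enr://ijkl...,enr://mnop..." \
+  --fee-recipient-address="0x0000000000000000000000000000000000000000" \
+  --withdrawal-address="0x0000000000000000000000000000000000000000" \
+  --output-dir="."
+```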
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster_definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit and exit data for each new distributed validator. The command outputs the `cluster.lock` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ --definition-file string The path to the cluster definition file. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+```
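+
+A typical invocation is therefore quite small, since most of the information comes from the definition file itself (a sketch using the default paths shown above; all operators run it at roughly the same time):
+
+```sh
+charon dkg \
+  --definition-file=".charon/cluster-definition.json" \
+  --data-dir=".charon/data"
+```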
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster.lock` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL (default "http://localhost/")
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing
+ --jaeger-service string Service name used for jaeger tracing (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster (default ".charon/cluster-lock.json")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof) (default "127.0.0.1:16001")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API (default "127.0.0.1:16002")
+```
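+
+As a sketch, running the middleware against a locally synced beacon node might look like the following (the beacon node URL and port are illustrative; the other values are the documented defaults):
+
+```sh
+charon run \
+  --lock-file=".charon/cluster-lock.json" \
+  --beacon-node-endpoint="http://localhost:5052" \
+  --validator-api-address="127.0.0.1:16002" \
+  --monitoring-address="127.0.0.1:16001"
+```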
diff --git a/docs/versioned_docs/version-v0.6.0/dv/README.md b/docs/versioned_docs/version-v0.6.0/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.6.0/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.6.0/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..bf3d926969
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,119 @@
+---
+Description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+---
+
+# Distributed Validator Key Generation
+
+## Contents
+
+- [Overview](#overview)
+- [Actors involved](#actors-involved)
+- [Cluster Definition creation](#cluster-definition-creation)
+- [Carrying out the DKG ceremony](#carrying-out-the-dkg-ceremony)
+- [Backing up ceremony artifacts](#backing-up-the-ceremony-artifacts)
+- [Preparing for validator activation](#preparing-for-validator-activation)
+- [DKG verification](#dkg-verification)
+- [Appendix](#appendix)
+
+## Overview
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Due to the BLS signature scheme used by Proof of Stake Ethereum, a distributed validator with no fault-tolerance (i.e. all nodes need to be online to sign every message) could be made from key shares chosen independently by each operator. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together. (4 randomly chosen points on a graph don't all necessarily sit on the same order three curve.) Doing this in a secure manner, with no one party being trusted to distribute the keys, requires what is known as a distributed key generation ceremony.
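+
+As a rough sketch of why this is, assume a Shamir-style sharing of the group secret with reconstruction threshold m (illustrative notation, not necessarily charon's exact construction): each key share is an evaluation of a single shared random polynomial, and any m shares recover the secret by Lagrange interpolation.
+
+```latex
+\text{share}_i = f(i), \qquad \text{group secret} = f(0), \qquad \deg f = m - 1
+
+f(0) = \sum_{i \in S} f(i) \prod_{j \in S,\; j \neq i} \frac{j}{j - i}, \qquad |S| = m
+```
+
+Shares chosen independently by each operator generally do not all lie on one such low-degree polynomial, so different subsets of operators would reconstruct different "secrets" and there would be no single consistent group key. Generating the shares together in a DKG is what guarantees that any m-of-n subset signs for the same distributed validator key.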
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://docs.obol.tech/docs/dv/distributed-validator-cluster-manifest).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign with this address's private key to authenticate their charon client ahead of the ceremony. The signature covers a hash of the charon client's ENR public key, the `cluster_definition_hash`, and an incrementing `nonce`, allowing for a direct linkage between a user, their charon client, and the cluster this client is intended to service, while retaining the ability to update the charon client by incrementing the nonce value and re-signing, as in the standard ENR spec.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p-noise). These keys need to be created by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This definition file is created with the help of the [Distributed Validator Launchpad](https://docs.obol.tech/docs/dvk/distributed_validator_launchpad). The creation process involves a number of steps.
+
+- A `leader` Operator that wishes to co-ordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader` uses the user interface to configure all of the important details about the cluster including:
+ - The `withdrawal address` for the created validators
+ - The `feeRecipient` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required (if not choosing the safe default)
+ - The network (fork_version/chainId) that this cluster will validate on
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like the DKG algorithm to use) are serialised and merklised to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the leader is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralised backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralisation of the launchpad.)
+- The leader will then share the URL to this ceremony with their intended participants.
+- Anyone that clicks the ceremony URL, or inputs the `cluster_definition_hash` when prompted on the landing page, will be brought to the ceremony status page (after completing all disclaimers and advisories).
+- A "Connect Wallet" button will be visible beneath the ceremony status container, a participant can click on it to connect their wallet to the site
+ - If the participant connects a wallet that is not in the participant list, the button disables, as there is nothing to do
+ - If the participant connects a wallet that is in the participant list, they get prompted to input the ENR of their charon node.
+ - If the ENR field is populated and validated the participant can now see a "Confirm Cluster Configuration" button. This button triggers one/two signatures.
+ - The participant signs the `cluster_definition_hash`, to prove they are consenting to this exact configuration.
+ - The participant signs their charon node's ENR, to authenticate and authorise that specific charon node to participate on their behalf in the distributed validator cluster.
+ - These signatures are sent to the data availability layer, which verifies that they are correct for the given participant's Ethereum address. If the signatures pass validation, the signature of the definition hash and the ENR + signature get saved to the definition object.
+- All participants in the list must sign the definition hash and submit a signed ENR before a DKG ceremony can begin. The outstanding signatures can be easily displayed on the status page.
+- Finally, once all participants have signed their approval, and submitted a charon node ENR to act on their behalf, the definition data can be downloaded as a file if the users click a newly displayed button, `Download Manifest`.
+- At this point each participant must load this definition into their charon client, and the client will attempt to complete the DKG.
+
+## Carrying out the DKG ceremony
+
+Once a participant has their definition file prepared, they will pass the file to charon's `dkg` command. Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to the deployed bootnodes to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which will update the IP address, increment the ENR's nonce, and re-sign it with the client's private key. If an ENR with a higher nonce is seen by a charon client, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required; charon does the work, outputs the following files to each machine, and then exits.
+
+```sh
+./cluster_definition.json # The original definition file from the DV Launchpad
+./cluster.lock # New lockfile based on cluster_definition.json with validator group public keys and threshold BLS verifiers included with the initial cluster config
+./charon/enr_private_key # Created before the ceremony took place [Back this up]
+./charon/validator_keys/ # Folder of key shares to be backed up and moved to validator client [Back this up]
+./charon/deposit_data # JSON file of deposit data for the distributed validators
+./charon/exit_data # JSON file of exit data that ethdo can broadcast
+```
+
+## Backing up the ceremony artifacts
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participants old keys out of a distributed validator in favour of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However for now, without a backup, the safest thing to do would be to exit the validator.
+
+## Preparing for validator activation
+
+Once the ceremony is complete and secure backups of key shares have been made by each operator, they must load these key shares into their validator clients and run the `charon run` command to put the cluster into operational mode.
+
+All operators should confirm that their charon client logs indicate all nodes are online and connected. They should also verify the readiness of their beacon clients and validator clients. Charon's grafana dashboard is a good way to see the readiness of the full cluster from its perspective.
+
+Once all operators are satisfied with network connectivity, one member can use the Obol Distributed Validator deposit flow to send the required ether and deposit data to the deposit contract, beginning the process of a distributed validator activation. Good luck.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+ - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Using DKG without the launchpad
+
+Charon clients can do a DKG with a definition file that does not contain operator signatures if you pass a `--no-verify` flag to `charon dkg`. This can be used for testing purposes when strict signature verification is not of the utmost importance.
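+
+A test invocation might therefore look something like this (a sketch that combines the flag described above with the definition-file flag from the [CLI reference](../dv/09_charon_cli_reference.md); do not use this for production clusters):
+
+```sh
+charon dkg --definition-file=".charon/cluster-definition.json" --no-verify
+```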
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../dv/08_distributed-validator-cluster-manifest.md#cluster-configuration-files).
+
diff --git a/docs/versioned_docs/version-v0.6.0/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.6.0/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..9ed29fc5db
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,15 @@
+---
+Description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Distributed Validator launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.6.0/int/working-groups/README.md). To participate in this effort, read through the page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.6.0/dvk/03_dkg_cli_reference.md b/docs/versioned_docs/version-v0.6.0/dvk/03_dkg_cli_reference.md
new file mode 100644
index 0000000000..3d4f1dddeb
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dvk/03_dkg_cli_reference.md
@@ -0,0 +1,88 @@
+---
+Description: >-
+ A rust-based CLI client for hosting and participating in Distributed Validator key generation ceremonies.
+---
+
+# DKG CLI reference
+
+
+:::warning
+
+The `dkg-poc` client is a prototype implementation for generating Distributed Validator Keys. Keys generated with this tool will not work with Charon, and they are not suitable for use.
+
+:::
+
+The following is a reference for `dkg-poc` at commit [`6181fea`](https://github.com/ObolNetwork/dkg-poc/commit/6181feaab2f60bdaaec954f11c04ef49c0b3366a). Find the latest release on our [Github](https://github.com/ObolNetwork/dkg-poc).
+
+`dkg-poc` is implemented as a rust-based webserver for performing a distributed key generation ceremony. This deployment model ended up raising many user experience and security concerns; for example, it is both hard and likely insecure to set up a TLS-protected webserver at home if you are not a specialist in this area. Further, the PoC is based on an [Aggregatable DKG](https://github.com/kobigurk/aggregatable-dkg) library which is built on sharing a group element rather than a field element, which makes the threshold signing scheme more complex as a result. These factors resulted in the deprecation of this approach, with many valuable insights gained from this client. Currently, a DV launchpad and charon-based DKG flow serves as the intended [DKG architecture](https://github.com/ObolNetwork/charon/blob/main/docs/dkg.md) for creating Distributed Validator Clusters.
+
+```
+$ dkg-poc --help
+
+dkg-poc 0.1.0
+A Distributed Validator Key Generation client for the Obol Network.
+
+USAGE:
+ dkg-poc
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+SUBCOMMANDS:
+ help Prints this message or the help of the given subcommand(s)
+ lead Lead a new DKG ceremony
+ participate Participate in a DKG ceremony
+
+```
+
+```
+$ dkg-poc lead --help
+
+dkg-poc-lead 0.1.0
+Lead a new DKG ceremony
+
+USAGE:
+ dkg-poc lead [OPTIONS] --num-participants --threshold
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address
+ The address to bind this client to, to participate in the DKG ceremony (Default: 127.0.0.1:8081)
+
+ -e, --enr
+ Provide existing charon ENR for this participant instead of generating a new private key to import
+
+ -n, --num-participants The number of participants taking part in the DKG ceremony
+ -p, --password
+ Password to join the ceremony (Default is to randomly generate a password)
+
+ -t, --threshold
+ Sets the threshold at which point a group of shareholders can create valid signatures
+
+```
+
+```
+$ dkg-poc participate --help
+
+dkg-poc-participate 0.1.0
+Participate in a DKG ceremony
+
+USAGE:
+ dkg-poc participate [OPTIONS] --leader-address
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address The address to bind this client to, to participate in the DKG ceremony
+ (Default: 127.0.0.1:8081)
+ -e, --enr Provide existing charon ENR for this participant instead of generating a new
+ private key to import
+ -l, --leader-address The address of the webserver leading the DKG ceremony
+ -p, --password Password to join the ceremony (Default is to randomly generate a password)
+```
diff --git a/docs/versioned_docs/version-v0.6.0/dvk/README.md b/docs/versioned_docs/version-v0.6.0/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.6.0/fr/README.md b/docs/versioned_docs/version-v0.6.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.6.0/fr/eth.md b/docs/versioned_docs/version-v0.6.0/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page serves material necessary to catch up with the current state of Ethereum proof-of-stake development and provides readers with the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or are new to the blockchain world entirely, there are appropriate resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.6.0/fr/golang.md b/docs/versioned_docs/version-v0.6.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.6.0/glossary.md b/docs/versioned_docs/version-v0.6.0/glossary.md
new file mode 100644
index 0000000000..87fbace906
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/glossary.md
@@ -0,0 +1,9 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+
+### Consensus
+A collection of machines coming to agreement on what to sign together
+
+### Threshold signing
+Being able to sign a message with only a subset of key holders taking part - giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.6.0/int/README.md b/docs/versioned_docs/version-v0.6.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.6.0/int/faq.md b/docs/versioned_docs/version-v0.6.0/int/faq.md
new file mode 100644
index 0000000000..ca366842bc
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/int/faq.md
@@ -0,0 +1,24 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validators that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
diff --git a/docs/versioned_docs/version-v0.6.0/int/key-concepts.md b/docs/versioned_docs/version-v0.6.0/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the problem of single-point failure. Should <33% of the participating nodes in the DVT cluster go offline, the remaining active nodes are still able to come to consensus on what to sign and produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
+
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+A distributed validator key share is one piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.6.0/int/overview.md b/docs/versioned_docs/version-v0.6.0/int/overview.md
new file mode 100644
index 0000000000..e178579dbd
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol as a layer is focused on scaling main chain staking by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing configurations.
+
+Similar to how rollup technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling main chain staking while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long-term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimentation with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking-as-a-service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators, to a sufficient level of trust, to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.6.0/int/quickstart.md b/docs/versioned_docs/version-v0.6.0/int/quickstart.md
new file mode 100644
index 0000000000..8085dcc39b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/int/quickstart.md
@@ -0,0 +1,69 @@
+---
+sidebar_position: 4
+description: Take part in a distributed validator cluster
+---
+
+# Quickstart
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet.
+:::
+
+There are two ways to test out a distributed validator.
+
+* Running the full cluster alone.
+* Running one node in a cluster with a group of other node operators.
+
+## Run a cluster alone
+
+1. Clone the [starter repo](https://github.com/ObolNetwork/charon-docker-compose) and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-docker-compose.git
+
+ # Change directory
+ cd charon-docker-compose/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+ For simplicity's sake, this repo is configured to work with a remote beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Create the artifacts needed to run a testnet distributed validator cluster
+
+ ```sh
+ # Create a testnet distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:latest create cluster --cluster-dir=".charon/cluster" --withdrawal-address="0x000000000000000000000000000000000000dead"
+ ```
+4. Start the cluster
+
+ ```sh
+ # Start the distributed validator cluster
+ docker-compose up
+ ```
+5. Check out the monitoring dashboard and see if things look all right
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+6. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+ * If you use macOS, `.charon`, the default output folder, does not show up in the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop) to display hidden folders, allowing you to select the deposit file.
+
+Congratulations, if this all worked you are now running a distributed validator cluster on a testnet. Try turning off one of the four nodes and check whether the validator stays online or begins missing duties, to see for yourself the fault tolerance that Distributed Validator Technology adds to proof-of-stake validation.
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.\*
+
+\*_Once charon creates validator exit data in an upcoming release._
+:::
+
+## Run a cluster with others
+
+This section will be completed alongside version `v0.7.0`. Sit tight.
diff --git a/docs/versioned_docs/version-v0.6.0/int/working-groups.md b/docs/versioned_docs/version-v0.6.0/int/working-groups.md
new file mode 100644
index 0000000000..1ebf4332a9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 5
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate single-point-of-failure risks on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing community coordination, participation, and ownership will have a significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research into the demand for, and understanding of, multi-operator validation. This will be a separate research effort undertaken by Georgia Rakusen. The research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is user-interview based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete an asynchronous survey
+- Pass the survey on to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 60%
+- Phase 1: 25%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes of the Ethereum development community is the high level of support from all client teams for standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group intends to standardise a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV cluster, with outputs shaped by the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV LaunchPad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+Phase 1 and Phase 1.5 will launch with no initial members, though they will immediately be open to submissions from participants that have joined the Obol Proto Community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the Proto Community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.6.0/intro.md b/docs/versioned_docs/version-v0.6.0/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.6.0/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.6.0/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..c1a650d6da
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://www.dropbox.com/s/z8kpyl5r2lh1ixe/Screenshot%202021-12-26%20at%2013.53.48.png?dl=0) when smart contract initiated exits are enabled.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+    // Accept ether sent to this contract, e.g. validator rewards and withdrawals.
+    receive() external payable {}
+
+    // Only the owner may sweep the full balance, to any recipient they choose.
+    function withdraw(address payable recipient) public onlyOwner {
+        recipient.transfer(address(this).balance);
+    }
+}
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements OpenZeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which sends all ether held by the contract to the owner's address (or another specified address). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient, except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system; for example, a Yearn vault strategy might use an immutable recipient contract, as its vault address should never change.
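+
+Below is a minimal sketch of what such a contract might look like. It is an illustration only, not Obol's audited implementation; the contract name, the constructor parameter, and the choice to let only the hardcoded owner trigger `withdraw()` are assumptions.
+
+```solidity title="WithdrawalRecipientImmutable.sol (illustrative sketch)"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+contract WithdrawalRecipientImmutable {
+    // The recipient is fixed at deployment and can never be changed.
+    address payable public immutable owner;
+
+    constructor(address payable owner_) {
+        owner = owner_;
+    }
+
+    // Accept ether sent to this contract, e.g. validator rewards and withdrawals.
+    receive() external payable {}
+
+    // Sweep the full balance to the hardcoded owner; no other destination is possible.
+    function withdraw() public {
+        require(msg.sender == owner, "caller is not the owner");
+        owner.transfer(address(this).balance);
+    }
+}
+```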
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry is a way for the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However, when the person depositing the ether is not the same entity as the operators producing the deposit data, a coordination mechanism is needed to make sure only one 32 ETH deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
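+
+A minimal sketch of the idea, purely for illustration (this is not Obol's implementation; the contract name, the `allocate` function, and the choice to hold ether in the registry are assumptions):
+
+```solidity title="DepositRegistry.sol (illustrative sketch)"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+contract DepositRegistry {
+    // Tracks which validator public keys have already been allocated a deposit.
+    mapping(bytes => bool) public allocated;
+
+    event DepositAllocated(bytes pubkey, address depositor);
+
+    // Reserve exactly one 32 ether deposit per distributed validator public key.
+    function allocate(bytes calldata validatorPubkey) external payable {
+        require(msg.value == 32 ether, "deposit must be exactly 32 ether");
+        require(!allocated[validatorPubkey], "validator already funded");
+        allocated[validatorPubkey] = true;
+        emit DepositAllocated(validatorPubkey, msg.sender);
+        // A full implementation would forward the ether to the official deposit
+        // contract along with the operators' signed deposit data.
+    }
+}
+```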
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
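+
+As another hedged sketch (the contract name and access-control policy are assumptions rather than Obol's implementation), such a whitelist could be as simple as:
+
+```solidity title="OperatorRegistry.sol (illustrative sketch)"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract OperatorRegistry is Ownable {
+    // Addresses permitted to submit deposits to a deposit registry.
+    mapping(address => bool) public isOperator;
+
+    function addOperator(address operator) external onlyOwner {
+        isOperator[operator] = true;
+    }
+
+    function removeOperator(address operator) external onlyOwner {
+        isOperator[operator] = false;
+    }
+}
+```
+
+A deposit registry could then consult `isOperator(msg.sender)` before accepting a deposit submission.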
+
+### Validator Registry
+
+If validators need to be managed on-chain programmatically, rather than manually with humans triggering exits, a validator registry can be used. Deposits that get activated receive an entry in the validator registry, and validators exited via 0x03 get staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as of yet unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.6.0/sc/README.md b/docs/versioned_docs/version-v0.6.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.6.0/testnet.md b/docs/versioned_docs/version-v0.6.0/testnet.md
new file mode 100644
index 0000000000..9c8cce3f90
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.0/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# testnet
+
+## Testnets
+
+
+
+Over the coming quarters, Obol Labs will be coordinating and hosting a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features that are to be complete by each testnet, and their target start dates and durations.
+
+## Testnet roadmap
+
+* [ ] Dev Net 1
+* [ ] Dev Net 2
+* [ ] Athena Public Testnet 1
+* [ ] Bia Attack net
+* [ ] Circe Public Testnet 2
+* [ ] Demeter Red/Blue net
+
+### Devnet 1
+
+The first devnet's aim will be to have a number of trusted operators test out our earliest tutorial flows. The aim will be for a single user to complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with the option of adding a single consensus layer client from a weak subjectivity checkpoint (the default will be to connect to our Kiln RPC server; we shouldn't get too much load for this phase). The keys will be created locally in charon, and activated with the existing launchpad or ethdo.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works
+* Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to charon running across a network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The second devnet aim will be to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim will be for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or place of choosing, running at least a Kiln consensus client.
+
+As part of this testnet, operators will need to expose charon to the public internet on a static IP address.
+
+This devnet will also be the first time `charon dkg` is tested with users. The launchpad is not anticipated to be complete, and this dkg will be triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+
+A core focus of this devnet will be collecting network performance data. This will be the first time we will have charon run in variable, non-virtual networks (i.e. the real internet). Effective collection of performance data in this devnet will enable gathering even higher-signal performance data at scale during the public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+This testnet would be intended to include the Distributed Validator Launchpad.
+
+The core focus of this testnet is the onboarding experience. This is the first time we would need to provide comprehensive instructions for as many platforms as possible (Unix, Mac, Windows) and in as many languages as possible (we will need to engage language moderators on Discord).
+
+The core output from this testnet is a large number of typeform submissions to a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and will form the basis for figuring out a sybil-resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** June 2022
+
+**Duration:** 2 week setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet needs a consensus implementation as a hard requirement, where it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz-testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS-vulnerable endpoints and operations, and missing signature verifications as possible. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** August 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet 2
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lock file, and exit data (so we can verify that the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans who have created legitimate DV keys and are not sybil attacking the testnet, their validator will be activated with Obol GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** September 2022 ([Dappcon](https://www.dappcon.io/) runs 12th-14th of Sept. )
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the cyber security industry and makes use of red teams and blue teams.
+
+In cyber security, the red team is on offence and the blue team is on defence. In Obol's case, operators will be grouped into clusters based on their application and assigned to either the red team or the blue team in secret. Once the validators are active, it will be the red teamers' goal to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse than optimal the cluster performs.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of the time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that even with directly incentivised byzantine actors, a distributed validator client can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** October 2022 ([Devcon 6](https://devcon.org/en/#road-to-devcon) runs 7th-16th of October. )
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault-tolerant validation is real, safe, and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v0.6.1/README.md b/docs/versioned_docs/version-v0.6.1/README.md
new file mode 100644
index 0000000000..487c559cff
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/README.md
@@ -0,0 +1,2 @@
+# version-v0.6.1
+
diff --git a/docs/versioned_docs/version-v0.6.1/cg/README.md b/docs/versioned_docs/version-v0.6.1/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.6.1/cg/bug-report.md b/docs/versioned_docs/version-v0.6.1/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid double issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/website/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.6.1/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.6.1/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..85539998f3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dv/01_introducing-charon.md
@@ -0,0 +1,29 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.6.1/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is a Go-based HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+### Charon architecture
+
+The below graphic visually outlines the internal functionalities of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet. See [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```sh
+docker run ghcr.io/obolnetwork/charon:v0.6.0 --help
+```
+
+For more information on running charon, take a look at our [quickstart guide](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.6.1/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.6.1/dv/02_validator-creation.md
new file mode 100644
index 0000000000..c4c39582b6
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dv/02_validator-creation.md
@@ -0,0 +1,31 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps:
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+ * This involves setting all of the terms for the cluster, including: withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a _cluster configuration_.
+ * This operator also sets their charon client's Ethereum Node Record ([ENR](../int/faq.md#what-is-an-enr)).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a shareable URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+ * They submit the ENR of their charon client.
+ * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster definition file.
+4. Every operator loads this cluster definition file into `charon dkg`. The definition provides the charon process with the information it needs to complete the DKG ceremony with the other charon clients.
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A `cluster-lock.json` file, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated public key shares. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares, their ENR private key if they have not yet done so, and the `cluster-lock.json` file.
+7. All operators load the keys and cluster lockfiles generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to ensure connectivity between all operators is observed at a reasonable latency.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
diff --git a/docs/versioned_docs/version-v0.6.1/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.6.1/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..f8e8bad3b3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dv/04_middleware-daemon.md
@@ -0,0 +1,17 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware Architecture
+
+
+
+The Charon daemon sits as a middleware between the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/) and any downstream validator clients.
+
+### Operation
+
+The middleware strives to be stateless and statically configured through files. The lack of a control-plane API for online reconfiguration is deliberate to keep operations simple and secure by default.
+
+The daemon offers a config reload instruction through Unix signals, which is useful for joining or leaving Obol clusters on the fly without interruption.
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
diff --git a/docs/versioned_docs/version-v0.6.1/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.6.1/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..9ea67f7faf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dv/06_peer-discovery.md
@@ -0,0 +1,37 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security and sybil-resistance, charon clients need to be able to authenticate one another. We achieve this by giving each charon client a public/private key pair that they can sign with such that other clients in the cluster will be able to recognise them as legitimate no matter which IP address they communicate from.
+
+At the end of a [DKG ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+### Authenticating a distributed validator client
+
+Before a DKG process begins, all operators must run `charon create enr`, or just `charon enr`, to create or get the Ethereum Node Record for their client. These ENRs are included in the configuration of a Distributed Key Generation ceremony.
+
+The file that outlines a DKG ceremony is known as a [`cluster-definition`](./08_distributed-validator-cluster-manifest.md) file. This file is passed to `charon dkg` which uses it to create private keys, a cluster lock file and deposit data for the configured number of distributed validators. The cluster-lock file will be made available to `charon run`, and the validator key stores will be made available to the configured validator client.
+
+When `charon run` starts up and ingests its configuration from the `cluster-lock.json` file, it checks if its observed/configured public IP address differs from what is listed in the lock file. If it is different, it updates the IP address, increments the nonce of the ENR, and reissues it before beginning to establish connections with the other operators in the cluster.
+
+#### Node database
+
+Distributed Validator Clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely. This is acceptable as the number of operators in a cluster is expected to stay constant. Mutable cluster operators will be introduced in future.
+
+#### Node discovery
+
+At boot, a charon client will ingest its configured `cluster-lock.json` file. This file contains a list of ENRs of the client's peers. The client will attempt to establish a connection with these peers, and will perform a handshake if they connect to establish an end-to-end encrypted communication channel between the clients.
+
+However, the IP addresses within an ENR can become stale. This could result in a cluster not being able to establish a connection with all nodes. To be tolerant of operator IP addresses changing, charon also supports the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) discovery protocol. This allows a charon client to find another operator that might have moved IP address but still retains the same ENR private key.
+
+
diff --git a/docs/versioned_docs/version-v0.6.1/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.6.1/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..50de00d79a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dv/07_p2p-interface.md
@@ -0,0 +1,13 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Identity: [Ethereum Node Records](https://eips.ethereum.org/EIPS/eip-778).
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Each charon client must have their ENR public key authorized in a [cluster-lock.json](./08_distributed-validator-cluster-manifest.md) file in order for the client handshake to succeed.
+- Discovery: [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md).
diff --git a/docs/versioned_docs/version-v0.6.1/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.6.1/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..76a1a7b251
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,67 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Cluster Configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client (or cluster) locally or in production.
+
+## Cluster Configuration Files
+
+A charon cluster is configured in two steps:
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+The `charon create dkg` command is used to create the `cluster-definition.json` file, which is used as input to `charon dkg`.
+
+The `charon create cluster` command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
+The schema of the `cluster-definition.json` is defined as:
+```json
+{
+ "version": "v1.0.0", // Schema version
+ "num_validators": 100, // Number of validators to create in cluster.lock
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier
+ "name": "best cluster", // Optional name field, cosmetic.
+ "fee_recipient_address":"0x123..abfc",// ETH1 fee_recipient address
+ "withdrawal_address": "0x123..abfc", // ETH1 withdrawal address
+ "algorithm": "foo_dkg_v1" , // Optional DKG algorithm
+ "fork_version": "0x00112233", // Fork version lock, enum of known values
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 operator identify address
+ "enr": "enr://abcdef...12345", // charon client ENR
+ "signature": "123456...abcdef", // Signature of enr by ETH1 address priv key
+ "nonce": 1 // Nonce of signature
+ }
+ ],
+ "definition_hash": "abcdef...abcedef",// Hash of above field (except free text)
+ "operator_signatures": [ // Operator signatures (seals) of definition hash
+ "123456...abcdef",
+ "123456...abcdef"
+ ]
+}
+```
+
+The above `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+The `cluster-lock.json` has the following schema:
+```json
+{
+ "cluster_definition": {...}, // Cluster definiition json, identical schema to above,
+ "distributed_validators": [ // Length equaled to num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "threshold_verifiers": [ "oA8Z...2XyT", "g1q...icu"], // length of threshold
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
diff --git a/docs/versioned_docs/version-v0.6.1/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.6.1/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..81b0ae5685
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dv/09_charon_cli_reference.md
@@ -0,0 +1,203 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.6.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.6.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+### Available Commands
+
+The following are the top-level commands available to use.
+
+```markdown
+charon help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Start a discv5 bootnode server
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print this client's Ethereum Node Record
+ help Help about any command
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+### `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+
+```
+
+#### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ -h, --help Help for enr
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+```
+
+#### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster.lock` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, and a cluster manifest. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default ".charon/cluster")
+ -h, --help Help for cluster
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Ethereum address to receive the returned stake and accrued rewards. (default "0x0000000000000000000000000000000000000000")
+```
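+
+As a worked example of the threshold formula above: with the default of 4 nodes, the minimum threshold is 4 - (ceil(4/3) - 1) = 3, so any 3 of the 4 nodes can produce a valid group signature. A minimal local invocation might look like the following sketch (all flag values are illustrative, and the withdrawal address shown is a throwaway test value):
+
+```sh
+# Sketch only: create a 4-node, threshold-3 test cluster on the prater network
+charon create cluster \
+  --nodes=4 \
+  --threshold=3 \
+  --network=prater \
+  --cluster-dir=".charon/cluster" \
+  --withdrawal-address="0x000000000000000000000000000000000000dead"
+```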
+
+#### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file that is used as input to the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, keycast, frost (default "default")
+ --fee-recipient-address string Optional Ethereum address of the fee recipient
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings Comma-separated list of each operator's Charon ENR address
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".")
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Withdrawal Ethereum address (default "0x0000000000000000000000000000000000000000")
+```
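+
+As a sketch, a definition for a four-operator, single-validator cluster might be created as follows (the ENRs shown are truncated placeholders and every value is illustrative):
+
+```sh
+# Sketch only: produce a cluster-definition.json for four operators and one validator
+charon create dkg \
+  --name="example-cluster" \
+  --num-validators=1 \
+  --network=prater \
+  --withdrawal-address="0x000000000000000000000000000000000000dead" \
+  --operator-enrs="enr:-abc...,enr:-def...,enr:-ghi...,enr:-jkl..." \
+  --output-dir="."
+```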
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit and exit data for each new distributed validator. The command outputs the `cluster.lock` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ --definition-file string The path to the cluster definition file. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+```
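+
+Assuming the definition file is at its default path, a minimal invocation is sketched below; every operator runs it at roughly the same time:
+
+```sh
+# Sketch only: participate in the ceremony using the default file locations
+charon dkg \
+  --definition-file=".charon/cluster-definition.json" \
+  --data-dir=".charon/data"
+```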
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster.lock` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL (default "http://localhost/")
+ --data-dir string The directory where charon will store all its internal data (default ".charon/data")
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing
+ --jaeger-service string Service name used for jaeger tracing (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster (default ".charon/cluster-lock.json")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof) (default "127.0.0.1:16001")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:16003])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:16004")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API (default "127.0.0.1:16002")
+```
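+
+As a minimal sketch, assuming a local beacon node and the default lock file location (the beacon node URL is illustrative):
+
+```sh
+# Sketch only: run the charon middleware against a local beacon node
+charon run \
+  --beacon-node-endpoint="http://localhost:5051" \
+  --lock-file=".charon/cluster-lock.json" \
+  --validator-api-address="127.0.0.1:16002"
+```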
diff --git a/docs/versioned_docs/version-v0.6.1/dv/README.md b/docs/versioned_docs/version-v0.6.1/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.6.1/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.6.1/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..d90a96b4ed
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,121 @@
+---
+Description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+---
+
+# Distributed Validator Key Generation
+
+## Contents
+
+- [Overview](#overview)
+- [Actors involved](#actors-involved)
+- [Cluster Definition creation](#cluster-definition-creation)
+- [Carrying out the DKG ceremony](#carrying-out-the-dkg-ceremony)
+- [Backing up ceremony artifacts](#backing-up-the-ceremony-artifacts)
+- [Preparing for validator activation](#preparing-for-validator-activation)
+- [DKG verification](#dkg-verification)
+- [Appendix](#appendix)
+
+## Overview
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Thanks to the BLS signature scheme used by proof-of-stake Ethereum, a distributed validator with no fault tolerance (i.e. one where all nodes need to be online to sign every message) could be created from key shares chosen independently by each operator. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together. (Four randomly chosen points on a graph don't all necessarily sit on the same order-three curve.) Doing this securely, with no single party trusted to distribute the keys, requires what is known as a distributed key generation ceremony.
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://docs.obol.tech/docs/dv/distributed-validator-cluster-manifest).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign with this address's private key to authenticate their charon client ahead of the ceremony. The signature will be of a hash of the charon client's ENR public key, the `cluster_definition_hash`, and an incrementing `nonce`, allowing for a direct linkage between a user, their charon client, and the cluster this client is intended to service, while retaining the ability to update the charon client by incrementing the nonce value and re-signing, as in the standard ENR spec.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p-noise). These keys need to be created by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This definition file is created with the help of the [Distributed Validator Launchpad](https://docs.obol.tech/docs/dvk/distributed_validator_launchpad). The creation process involves a number of steps.
+
+- A `leader` Operator that wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader` uses the user interface to configure all of the important details about the cluster including:
+ - The `withdrawal address` for the created validators
+ - The `feeRecipient` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required (if not choosing the safe default)
+ - The network (fork_version/chainId) that this cluster will validate on
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like the DKG algorithm to use) are serialised and merklised to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the leader is satisfied with the configuration, they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralised backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralisation of the launchpad.)
+- The leader will then share the URL to this ceremony with their intended participants.
+- Anyone that clicks the ceremony URL, or inputs the `cluster_definition_hash` when prompted on the landing page, will be brought to the ceremony status page (after completing all disclaimers and advisories).
+- A "Connect Wallet" button will be visible beneath the ceremony status container; a participant can click on it to connect their wallet to the site
+ - If the participant connects a wallet that is not in the participant list, the button disables, as there is nothing to do
+ - If the participant connects a wallet that is in the participant list, they get prompted to input the ENR of their charon node.
+ - If the ENR field is populated and validated, the participant can now see a "Confirm Cluster Configuration" button. This button triggers one/two signatures.
+ - The participant signs the `cluster_definition_hash`, to prove they are consenting to this exact configuration.
+ - The participant signs their charon node's ENR, to authenticate and authorise that specific charon node to participate on their behalf in the distributed validator cluster.
+ - These/this signature is sent to the data availability layer, where it verifies the signatures are correct for the given participant's Ethereum address. If the signatures pass validation, the signature of the definition hash and the ENR + signature get saved to the definition object.
+- All participants in the list must sign the definition hash and submit a signed ENR before a DKG ceremony can begin. The outstanding signatures can be easily displayed on the status page.
+- Finally, once all participants have signed their approval, and submitted a charon node ENR to act on their behalf, the definition data can be downloaded as a file if the users click a newly displayed button, `Download Manifest`.
+- At this point each participant must load this definition into their charon client, and the client will attempt to complete the DKG.
+
+## Carrying out the DKG ceremony
+
+Once a participant has their definition file prepared, they will pass the file to charon's `dkg` command. Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to the deployed bootnodes to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which will update the IP address, increment the ENR's nonce, and re-sign with the client's private key. If an ENR with a higher nonce is seen by a charon client, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
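+As a sketch, the command each operator runs looks like the following, assuming the definition file is at charon's default path (see the [Charon CLI reference](../dv/09_charon_cli_reference.md) for all available flags):
+
+```sh
+# Sketch only: all operators run this at roughly the same time
+charon dkg --definition-file=".charon/cluster-definition.json"
+```
+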
+No user input is required, charon does the work and outputs the following files to each machine and then exits.
+
+```sh
+# Common data
+.cluster-definition.json # The original definition file from the DV Launchpad or `charon create dkg`
+.cluster-lock.json # New lockfile based on cluster-definition.json with validator group public keys and threshold BLS verifiers included with the initial cluster config
+.charon/deposit-data.json # JSON file of deposit data for the distributed validators
+
+# Sensitive operator-specific data
+.charon/charon-enr-private-key # Created before the ceremony took place [Back this up]
+.charon/validator_keys/ # Folder of key shares to be backed up and moved to validator client [Back this up]
+```
+
+## Backing up the ceremony artifacts
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participant's old keys out of a distributed validator in favour of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However, for now, without a backup, the safest thing to do would be to exit the validator.
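+
+One possible way to take such a backup is sketched below; this is only an illustration assuming the default `.charon` paths, and the resulting archive should be stored somewhere offline and secure:
+
+```sh
+# Sketch only: archive the ENR private key and the validator key shares listed above
+tar -czf charon-dkg-backup.tar.gz .charon/charon-enr-private-key .charon/validator_keys/
+```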
+
+## Preparing for validator activation
+
+Once the ceremony is complete and secure backups of key shares have been made by each operator, they must load these key shares into their validator clients and run the `charon run` command to put the node into operational mode.
+
+All operators should confirm that their charon client logs indicate all nodes are online and connected. They should also verify the readiness of their beacon clients and validator clients. Charon's grafana dashboard is a good way to see the readiness of the full cluster from its perspective.
+
+Once all operators are satisfied with network connectivity, one member can use the Obol Distributed Validator deposit flow to send the required ether and deposit data to the deposit contract, beginning the process of a distributed validator activation. Good luck.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+ - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Using DKG without the launchpad
+
+Charon clients can do a DKG with a definition file that does not contain operator signatures if you pass a `--no-verify` flag to `charon dkg`. This can be used for testing purposes when strict signature verification is not of the utmost importance.
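+
+For example, a testing-only sketch using the default definition path:
+
+```sh
+# Sketch only: skip operator-signature verification on an unsigned definition file (testing only)
+charon dkg --definition-file=".charon/cluster-definition.json" --no-verify
+```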
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../dv/08_distributed-validator-cluster-manifest.md#cluster-configuration-files).
+
diff --git a/docs/versioned_docs/version-v0.6.1/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.6.1/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..8c678d27a6
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,15 @@
+---
+Description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Distributed Validator launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.6.1/int/working-groups/README.md). To participate in this effort, read through the page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.6.1/dvk/03_dkg_cli_reference.md b/docs/versioned_docs/version-v0.6.1/dvk/03_dkg_cli_reference.md
new file mode 100644
index 0000000000..bdcff5bd77
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dvk/03_dkg_cli_reference.md
@@ -0,0 +1,88 @@
+---
+Description: >-
+ A rust-based CLI client for hosting and participating in Distributed Validator key generation ceremonies.
+---
+
+# DKG CLI reference
+
+
+:::warning
+
+The `dkg-poc` client is a prototype implementation for generating Distributed Validator Keys. Keys generated with this tool will not work with Charon and are not suitable for use. To create keys for a Distributed Validator, use the [`charon create dkg` command](../dv/09_charon_cli_reference.md#creating-the-configuration-for-a-dkg-ceremony) instead.
+
+:::
+
+The following is a reference for `dkg-poc` at commit [`6181fea`](https://github.com/ObolNetwork/dkg-poc/commit/6181feaab2f60bdaaec954f11c04ef49c0b3366a). Find the latest release on our [Github](https://github.com/ObolNetwork/dkg-poc).
+
+`dkg-poc` is implemented as a rust-based webserver for performing a distributed key generation ceremony. This deployment model ended up raising many user experience and security concerns; for example, it is both hard and likely insecure to set up a TLS-protected webserver at home if you are not a specialist in this area. Further, the PoC is based on an [Aggregatable DKG](https://github.com/kobigurk/aggregatable-dkg) library which is built on sharing a group element rather than a field element, which makes the threshold signing scheme more complex. These factors resulted in the deprecation of this approach, with many valuable insights gained from this client. Currently, a DV Launchpad and charon-based DKG flow serves as the intended [DKG architecture](https://github.com/ObolNetwork/charon/blob/main/docs/dkg.md) for creating Distributed Validator Clusters.
+
+```
+$ dkg-poc --help
+
+dkg-poc 0.1.0
+A Distributed Validator Key Generation client for the Obol Network.
+
+USAGE:
+ dkg-poc
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+SUBCOMMANDS:
+ help Prints this message or the help of the given subcommand(s)
+ lead Lead a new DKG ceremony
+ participate Participate in a DKG ceremony
+
+```
+
+```
+$ dkg-poc lead --help
+
+dkg-poc-lead 0.1.0
+Lead a new DKG ceremony
+
+USAGE:
+ dkg-poc lead [OPTIONS] --num-participants --threshold
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address
+ The address to bind this client to, to participate in the DKG ceremony (Default: 127.0.0.1:8081)
+
+ -e, --enr
+ Provide existing charon ENR for this participant instead of generating a new private key to import
+
+ -n, --num-participants The number of participants taking part in the DKG ceremony
+ -p, --password
+ Password to join the ceremony (Default is to randomly generate a password)
+
+ -t, --threshold
+ Sets the threshold at which point a group of shareholders can create valid signatures
+
+```
+
+```
+$ dkg-poc participate --help
+
+dkg-poc-participate 0.1.0
+Participate in a DKG ceremony
+
+USAGE:
+ dkg-poc participate [OPTIONS] --leader-address
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address The address to bind this client to, to participate in the DKG ceremony
+ (Default: 127.0.0.1:8081)
+ -e, --enr Provide existing charon ENR for this participant instead of generating a new
+ private key to import
+ -l, --leader-address The address of the webserver leading the DKG ceremony
+ -p, --password Password to join the ceremony (Default is to randomly generate a password)
+```
diff --git a/docs/versioned_docs/version-v0.6.1/dvk/README.md b/docs/versioned_docs/version-v0.6.1/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.6.1/fr/README.md b/docs/versioned_docs/version-v0.6.1/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.6.1/fr/eth.md b/docs/versioned_docs/version-v0.6.1/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page serves material necessary to catch up with the current state of Ethereum proof-of-stake development and provides readers with the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or are new to the blockchain world entirely, there are appropriate resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.6.1/fr/golang.md b/docs/versioned_docs/version-v0.6.1/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.6.1/glossary.md b/docs/versioned_docs/version-v0.6.1/glossary.md
new file mode 100644
index 0000000000..53bb274c27
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/glossary.md
@@ -0,0 +1,8 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+### Consensus
+A collection of machines coming to agreement on what to sign together
+
+### Threshold signing
+Being able to sign a message with only a subset of key holders taking part - giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.6.1/int/README.md b/docs/versioned_docs/version-v0.6.1/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.6.1/int/faq.md b/docs/versioned_docs/version-v0.6.1/int/faq.md
new file mode 100644
index 0000000000..3a93900fd3
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/faq.md
@@ -0,0 +1,33 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validator clients that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+### What is an ENR?
+
+An ENR is shorthand for an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778). It is a way to represent a node on a public network, with a reliable mechanism to update its information. At Obol we use ENRs to identify charon nodes to one another such that they can form clusters with the right charon nodes and not impostors.
+
+ENRs have private keys they use to sign updates to the [data contained](https://enr-viewer.com/) in their ENR. This private key is found by default at `.charon/charon-enr-private-key`; it should be kept secure and not checked into version control. An ENR looks something like this:
+```
+enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+```
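+
+To print the ENR corresponding to a locally stored private key, you can use the `charon enr` command from the CLI reference; a minimal sketch, assuming the key sits at the default path, is:
+
+```sh
+# Sketch only: print this client's Ethereum Node Record
+charon enr
+```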
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
diff --git a/docs/versioned_docs/version-v0.6.1/int/key-concepts.md b/docs/versioned_docs/version-v0.6.1/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the problem of single-point failure. Should <33% of the participating nodes in the DVT cluster go offline, the remaining active nodes are still able to come to consensus on what to sign and produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
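+
+As a rough sketch of how this wiring works in practice (the port values are charon's documented defaults and the beacon node URL is illustrative), the validator client is pointed at charon's validator API instead of directly at a beacon node:
+
+```sh
+# Sketch only: charon exposes a beacon-node-compatible API to the validator client
+# and itself talks to the real consensus client.
+charon run \
+  --beacon-node-endpoint="http://localhost:5051" \
+  --validator-api-address="127.0.0.1:16002"
+# The validator client is then configured to use http://127.0.0.1:16002 as its beacon node API URL.
+```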
+
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.6.1/int/overview.md b/docs/versioned_docs/version-v0.6.1/int/overview.md
new file mode 100644
index 0000000000..e178579dbd
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move on to the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol as a layer is focused on scaling main chain staking by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing configurations.
+
+Similar to how rollup technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling main chain staking while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer highly available uptime will continue to be a moat and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking-as-a-service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group that is small by count but large by stake-weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators they trust sufficiently to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of the consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.6.1/int/quickstart/README.md b/docs/versioned_docs/version-v0.6.1/int/quickstart/README.md
new file mode 100644
index 0000000000..bd2483c7cf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/quickstart/README.md
@@ -0,0 +1,2 @@
+# quickstart
+
diff --git a/docs/versioned_docs/version-v0.6.1/int/quickstart/index.md b/docs/versioned_docs/version-v0.6.1/int/quickstart/index.md
new file mode 100644
index 0000000000..bc22286f06
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/quickstart/index.md
@@ -0,0 +1,12 @@
+# Quickstart Guides
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+There are two ways to test out a distributed validator: on your own, by running all of the required software as containers within Docker, or with a group of other node operators, where each of you runs only one validator client and charon client, and the charon clients communicate with one another over the public internet to operate the distributed validator. The second approach requires each operator to open a port to the internet so that all charon nodes can communicate with one another optimally.
+
+The following are guides to getting started with our template repositories. The intention is to support every combination of beacon clients and validator clients with compose files.
+
+- [Running the full cluster alone.](./quickstart-alone.md)
+- [Running one node in a cluster with a group of other node operators.](./quickstart-group.md)
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.6.1/int/quickstart/quickstart-alone.md b/docs/versioned_docs/version-v0.6.1/int/quickstart/quickstart-alone.md
new file mode 100644
index 0000000000..20f814f96e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/quickstart/quickstart-alone.md
@@ -0,0 +1,56 @@
+---
+sidebar_position: 4
+description: Run all nodes in a distributed validator cluster
+---
+
+# Run a cluster alone
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) template repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+ For simplicity's sake, this repo is configured to work with a remote Beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Create the artifacts needed to run a testnet distributed validator cluster
+
+ ```sh
+ # Create a testnet distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:latest create cluster --cluster-dir=".charon" --withdrawal-address="0x000000000000000000000000000000000000dead"
+ ```
+4. Start the cluster
+
+ ```sh
+ # Start the distributed validator cluster
+ docker-compose up
+ ```
+5. Check out the monitoring dashboard and see if things look all right
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+6. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+ * If you use macOS, `.charon`, the default output folder, does not show up in the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This should display hidden folders, allowing you to select the deposit file.
+
+Congratulations! If this all worked, you are now running a distributed validator cluster on a testnet. Try turning off one of the four nodes with `docker stop` and see whether the validator stays online or begins missing duties, to see for yourself the fault tolerance that Distributed Validator Technology adds to proof-of-stake validation.
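+
+For example (a sketch; the exact container names depend on the service names in the repo's `docker-compose.yml`, so check `docker ps` first):
+
+```sh
+# List the running containers to find the name of one charon node
+docker ps
+
+# Stop one of the four nodes and watch the dashboard for missed duties
+docker stop <one-charon-container-name>
+
+# Bring it back once you have seen the cluster tolerate the outage
+docker start <one-charon-container-name>
+```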
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.6.1/int/quickstart/quickstart-group.md b/docs/versioned_docs/version-v0.6.1/int/quickstart/quickstart-group.md
new file mode 100644
index 0000000000..78f022ca7c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/quickstart/quickstart-group.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 5
+description: Run one node in a multi-operator distributed validator cluster
+---
+
+# Run a cluster with others
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+Creating a distributed validator cluster with a group of other node operators requires five key steps:
+
+* Every operator prepares their software and gets their charon client's [ENR](../faq.md#what-is-an-enr)
+* One operator prepares the terms of the distributed validator key generation ceremony
+ * They select the network, the withdrawal address, the number of 32 ether distributed validators to create, and the ENRs of each operator taking part in the ceremony.
+ * In future, the DV launchpad will facilitate this process more seamlessly, with consent on the terms provided by all operators that participate.
+* Every operator participates in the DKG ceremony, and once successful, a number of cluster artifacts are created, including:
+ * The private key shares for each distributed validator
+ * The deposit data file containing deposit details for each distributed validator
+ * A `cluster-lock.json` file which contains the finalised terms of this cluster required by charon to operate.
+* Every operator starts their node with `charon run`, and uses their monitoring to determine the cluster health and connectivity
+* Once the cluster is confirmed to be healthy, deposit data files created during this process are activated on the [staking launchpad](https://launchpad.ethereum.org/).
+
+## Getting started with Charon
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) template repository from Github, and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+ # Change directory
+ cd charon-distributed-validator-node/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+   For simplicity's sake, this repo is configured to work with a remote beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Now create a private key for charon to use for its ENR
+
+ ```sh
+ # Create an ENR private key
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:latest create enr
+ ```
+
+   :::warning
+   The ability to replace a deleted or compromised private key is limited at this point. Please make a secure backup of this private key if this distributed validator is important to you.
+   :::
+
+   This command will print your charon client's ENR to the command line. It should look something like:
+
+ ```
+ enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+ ```
+
+ This record identifies your charon client no matter where it communicates from across the internet. It is required for the following step of creating a set of distributed validator private key shares amongst the cluster operators.
+
+## Performing a Distributed Validator Key Generation Ceremony
+
+To create the private keys for a distributed validator securely, a Distributed Key Generation (DKG) process must take place.
+
+1. After gathering each operator's ENR and setting them in the `.env` file, one operator should prepare the ceremony with `charon create dkg`
+
+ ```sh
+
+ # First set the ENRs of all the operators participating in DKG ceremony in .env file as CHARON_OPERATOR_ENRS
+
+ # Create .charon/cluster-definition.json to participate in DKG ceremony
+ docker run --rm -v "$(pwd):/opt/charon" --env-file .env ghcr.io/obolnetwork/charon:latest create dkg
+ ```
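+
+   As a sketch, the relevant line in `.env` is a comma-separated list of every operator's ENR (the ENRs below are placeholders):
+
+   ```sh
+   # .env – ENRs of all operators taking part in the DKG ceremony (placeholder values)
+   CHARON_OPERATOR_ENRS=enr:-JG4QAg...,enr:-JG4QBh...,enr:-JG4QCi...,enr:-JG4QDj...
+   ```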
+2. The operator that ran this command should distribute the resulting `cluster-definition.json` file to each operator.
+3. At a pre-agreed time, all operators run the ceremony program with the `charon dkg` command
+
+ ```sh
+ # Copy the cluster-definition.json file to .charon
+ cp cluster-definition.json .charon/
+
+ # Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys/
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:latest dkg
+ ```
+
+## Verifying cluster health
+
+Once the key generation ceremony has been completed, the charon nodes have the data they need to come together to form a cluster.
+
+1. Start your distributed validator node with docker-compose
+
+ ```sh
+ # Run a charon client, a vc client, and prom+grafana clients as containers
+ docker-compose up
+ ```
+2. Use the pre-prepared [grafana](http://localhost:3000/) dashboard to verify the cluster health looks okay. You should see connections with all other operators in the cluster as healthy, and observed ping times under 1 second for all connections.
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+
+## Activating the distributed validator
+
+Once the cluster is healthy and fully connected, it is time to deposit the required 32 (test) ether to activate the newly created Distributed Validator.
+
+1. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+   * If you use macOS, `.charon`, the default output folder, does not show up in the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This displays hidden folders, allowing you to select the deposit file.
+ * A more distributed validator friendly deposit interface is in the works for an upcoming release.
+2. It takes approximately 16 hours for the deposit to be registered on the beacon chain. Future upgrades to the protocol aim to reduce this time.
+3. Once the validator deposit is recognised on the beacon chain, the validator is assigned an index, and the wait for activation begins.
+4. Finally, once the validator is activated, it should be monitored to ensure it is achieving an inclusion distance near 0 for optimal rewards. You should also tweet the link to your newly activated validator with the hashtag [#RunDVT](https://twitter.com/search?q=%2523RunDVT) 🙃
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.6.1/int/working-groups.md b/docs/versioned_docs/version-v0.6.1/int/working-groups.md
new file mode 100644
index 0000000000..1ebf4332a9
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 5
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate single points of technical failure risks on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing the community coordination, participation, and ownership will drive significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research into the demand for and understanding of multi-operator validation. This will be a separate research effort undertaken by Georgia Rakusen. The research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is interview-based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete an asynchronous survey
+- Pass the survey on to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 60%
+- Phase 1: 25%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes in the Ethereum development community is the high levels of support from all client teams around standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group intends to submit EIPs for a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV cluster, shaped by the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV Launchpad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+Phase 1 & Phase 1.5 will launch with no initial members, though they will immediately be open for applications from participants that have joined the Obol Proto Community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the Proto Community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.6.1/intro.md b/docs/versioned_docs/version-v0.6.1/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.6.1/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.6.1/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..c1a650d6da
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://www.dropbox.com/s/z8kpyl5r2lh1ixe/Screenshot%202021-12-26%20at%2013.53.48.png?dl=0) when smart contract initiated exits are enabled.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+ receive() external payable {}
+
+ function withdraw(address payable recipient) public onlyOwner {
+ recipient.transfer(address(this).balance);
+ }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements Open Zeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which pulls all ether from the address into the owner's address (or another specified address). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system, for example a yearn vault strategy might use an immutable recipient contract as its vault address should never change.
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry is a way for the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 eth deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on-chain programmatically, rather than manually with humans triggering exits, a validator registry can be used. Deposits that get activated receive an entry in the validator registry, and validators using 0x03 exits are staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as of yet unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.6.1/sc/README.md b/docs/versioned_docs/version-v0.6.1/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.6.1/testnet.md b/docs/versioned_docs/version-v0.6.1/testnet.md
new file mode 100644
index 0000000000..5cac937f49
--- /dev/null
+++ b/docs/versioned_docs/version-v0.6.1/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# Testnets
+
+
+
+Over the coming quarters, Obol Labs will be coordinating and hosting a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a break down of the intended testnet roadmap, the features that are to be complete by each testnet, and their target start date and durations.
+
+## Testnets
+
+* [x] Dev Net 1
+* [ ] Dev Net 2
+* [ ] Athena Public Testnet 1
+* [ ] Bia Attack net
+* [ ] Circe Public Testnet 2
+* [ ] Demeter Red/Blue net
+
+### Devnet 1
+
+The aim of the first devnet will be to have a number of trusted operators test out our earliest tutorial flows. A single user should be able to complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with the option of adding a single consensus layer client synced from a weak subjectivity checkpoint (the default will be to connect to our Kiln RPC server; we shouldn't get too much load for this phase). The keys will be created locally in charon and activated with the existing launchpad or ethdo.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works
+* Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to charon running across a network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The second devnet aim will be to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim will be for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or a place of their choosing, running at least a Kiln consensus client.
+
+As part of this testnet, operators will need to expose charon to the public internet on a static IP address.
+
+This devnet will also be the first time `charon dkg` is tested with users. The launchpad is not anticipated to be complete, and this dkg will be triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+
+A core focus of this devnet will be to collect network performance data. This will be the first time we will have charon run in variable, non-virtual networks (i.e. the real internet). Focusing on effective collection of performance data in this devnet will be a core focus, to enable gathering even higher signal performance data at scale during public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal of public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+This testnet would be intended to include the Distributed Validator Launchpad.
+
+The core focus of this testnet is the onboarding experience. This is the first time we will need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) and in as many languages as possible (we will need to engage language moderators on Discord).
+
+The core output from this testnet is a large number of typeform submissions, for a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and will form the basis for figuring out a sybil resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** June 2022
+
+**Duration:** 2 week setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet needs a consensus implementation as a hard requirement, whereas it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS vulnerable endpoints and operations, missing signature verifications and more. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** August 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet 2
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lockfile, and exit data (so we can verify the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans who are not sybil attacking the testnet and have created legitimate DV keys, their validator will be activated with Obol GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** September 2022 ([Dappcon](https://www.dappcon.io/) runs 12th-14th of Sept. )
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the cyber security industry and makes use of Red Teams and Blue Teams.
+
+In cyber security, the red team is on offence and the blue team is on defence. In Obol's case, operators will be grouped into clusters based on application and assigned to either the red team or the blue team in secret. Once the validators are active, it will be the red team's goal to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse the cluster performs than optimal.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of the time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that, even with directly incentivised byzantine actors, a distributed validator client can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** October 2022 ([Devcon 6](https://devcon.org/en/#road-to-devcon) runs 7th-16th of October. )
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault tolerant validation is real, safe and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v0.7.0/README.md b/docs/versioned_docs/version-v0.7.0/README.md
new file mode 100644
index 0000000000..865146ca9d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.7.0
+
diff --git a/docs/versioned_docs/version-v0.7.0/cg/README.md b/docs/versioned_docs/version-v0.7.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.7.0/cg/bug-report.md b/docs/versioned_docs/version-v0.7.0/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid double issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+ ## What is Charon?
+
+
+
+ ## Charon explained
+ ```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.7.0/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.7.0/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..9edd7a68fb
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dv/01_introducing-charon.md
@@ -0,0 +1,29 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.7.0/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is a GoLang-based HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validator client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress as long as a supermajority of working/honest nodes is present.
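+
+As a rough illustration (a sketch assuming charon's beacon API proxy is exposed locally on port 3600, as in the quickstart repositories), a standard beacon API request served through charon looks the same to the validator client as one served by a beacon node directly:
+
+```sh
+# Query the beacon node API via the charon proxy (adjust host/port to your setup)
+curl http://localhost:3600/eth/v1/node/version
+```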
+
+### Charon architecture
+
+The graphic below outlines the internal functionality of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet; see [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```
+docker run ghcr.io/obolnetwork/charon:v0.7.0 --help
+```
+
+For more information on running charon, take a look at our [quickstart guide](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.7.0/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.7.0/dv/02_validator-creation.md
new file mode 100644
index 0000000000..f13437b26d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dv/02_validator-creation.md
@@ -0,0 +1,31 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps:
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+   * This involves setting all of the terms for the cluster, including the withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a _cluster configuration_.
+ * This operator also sets their charon client's Ethereum Node Record ([ENR](../int/faq.md#what-is-an-enr)).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a shareable URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+ * They submit the ENR of their charon client.
+ * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster definition file.
+4. Every operator passes this cluster definition file to the `charon dkg` command. The definition provides the charon process with the information it needs to find and complete the DKG ceremony with the other charon clients involved.
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A `cluster-lock.json` file, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated public key shares. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares, their ENR private key if they have not yet done so, and the `cluster-lock.json` file.
+7. All operators load the keys and cluster lockfiles generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to ensure connectivity between all operators at a reasonable latency is observed.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
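+
+As a minimal sketch of steps 4 and 5 above, using the Docker invocation from the quickstart guides, each operator passes the shared definition file to `charon dkg`:
+
+```sh
+# Place the shared cluster definition where charon expects it
+cp cluster-definition.json .charon/
+
+# Participate in the DKG ceremony; on success this writes .charon/cluster-lock.json,
+# .charon/deposit-data.json and .charon/validator_keys/
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:latest dkg
+```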
diff --git a/docs/versioned_docs/version-v0.7.0/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.7.0/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..eddc58cf9e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dv/04_middleware-daemon.md
@@ -0,0 +1,15 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware Architecture
+
+
+
+The Charon daemon sits as a middleware between the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/) and any downstream validator clients.
+
+### Operation
+
+The middleware strives to be stateless and statically configured through files. The lack of a control-plane API for online reconfiguration is deliberate to keep operations simple and secure by default.
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
diff --git a/docs/versioned_docs/version-v0.7.0/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.7.0/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..9ea67f7faf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dv/06_peer-discovery.md
@@ -0,0 +1,37 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security and sybil-resistance, charon clients need to be able to authenticate one another. We achieve this by giving each charon client a public/private key pair that they can sign with such that other clients in the cluster will be able to recognise them as legitimate no matter which IP address they communicate from.
+
+At the end of a [DKG ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+### Authenticating a distributed validator client
+
+Before a DKG process begins, all operators must run `charon create enr`, or just `charon enr`, to create or get the Ethereum Node Record for their client. These ENRs are included in the configuration of a Distributed Key Generation ceremony.
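+
+For example, using the Docker form from the quickstart guides (a sketch):
+
+```sh
+# Create a new ENR private key (run once per operator)
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:latest create enr
+
+# Print the ENR for an existing key
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:latest enr
+```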
+
+The file that outlines a DKG ceremony is known as a [`cluster-definition`](./08_distributed-validator-cluster-manifest.md) file. This file is passed to `charon dkg` which uses it to create private keys, a cluster lock file and deposit data for the configured number of distributed validators. The cluster-lock file will be made available to `charon run`, and the validator key stores will be made available to the configured validator client.
+
+When `charon run` starts up and ingests its configuration from the `cluster-lock.json` file, it checks if its observed/configured public IP address differs from what is listed in the lock file. If it is different, it updates the IP address, increments the nonce of the ENR, and reissues it before beginning to establish connections with the other operators in the cluster.
+
+#### Node database
+
+Distributed Validator Clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely. This is acceptable as the number of operators in a cluster is expected to stay constant. Mutable cluster operators will be introduced in future.
+
+#### Node discovery
+
+At boot, a charon client will ingest its configured `cluster-lock.json` file. This file contains a list of ENRs of the client's peers. The client will attempt to establish a connection with these peers and, once connected, will perform a handshake to establish an end-to-end encrypted communication channel between the clients.
+
+However, the IP addresses within an ENR can become stale. This could result in a cluster not being able to establish a connection with all nodes. To be tolerant to operator IP addresses changing, charon also supports the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) discovery protocol. This allows a charon client to find another operator that might have moved IP address, but still retains the same ENR private key.
+
+
diff --git a/docs/versioned_docs/version-v0.7.0/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.7.0/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..50de00d79a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dv/07_p2p-interface.md
@@ -0,0 +1,13 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Identity: [Ethereum Node Records](https://eips.ethereum.org/EIPS/eip-778).
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Each charon client must have their ENR public key authorized in a [cluster-lock.json](./08_distributed-validator-cluster-manifest.md) file in order for the client handshake to succeed.
+- Discovery: [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md).
diff --git a/docs/versioned_docs/version-v0.7.0/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.7.0/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..c81f9f6fb2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,66 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Cluster Configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client (or cluster) locally or in production.
+
+## Cluster Configuration Files
+
+A charon cluster is configured in two steps:
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+The `charon create dkg` command is used to create the `cluster-definition.json` file, which is used as input to `charon dkg`.
+
+The `charon create cluster` command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
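+
+A sketch of the two workflows, with flags and default output locations as listed in the CLI reference (the ENRs shown are placeholders):
+
+```sh
+# Multi-operator path: one operator creates the definition, then every operator runs the DKG
+charon create dkg --operator-enrs="enr:-JG4QAg...,enr:-JG4QBh...,enr:-JG4QCi...,enr:-JG4QDj..."  # writes .charon/cluster-definition.json
+charon dkg                                                                                       # each operator; writes cluster-lock.json, key shares and deposit data
+
+# Single-operator path: create everything locally in one step, without a DKG
+charon create cluster --cluster-dir=".charon/cluster"
+```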
+
+The schema of the `cluster-definition.json` is defined as:
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "nonce": 1, // Nonce (incremented each time the ENR is added/signed)
+ "config_signature": "123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.0.0", // Schema version
+ "num_validators": 100, // Number of distributed validators to be created in cluster.lock
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "fee_recipient_address":"0x123..abfc", // ETH1 fee_recipient address
+ "withdrawal_address": "0x123..abfc", // ETH1 withdrawal address
+ "dkg_algorithm": "foo_dkg_v1" , // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "abcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "abcdef...abcedef" // Final Hash of all fields
+}
+```
+
+
+The above `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+The `cluster-lock.json` has the following schema:
+```json
+{
+  "cluster_definition": {...},           // Cluster definition json, identical schema to above
+  "distributed_validators": [            // Length equal to num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "oA8Z...2XyT", "g1q...icu"], // Public Key Shares
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
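+
+As an illustration (assuming `jq` is installed), a quick sanity check of a generated lock file:
+
+```sh
+# Number of distributed validators in the lock file; it should match num_validators in the definition
+jq '.distributed_validators | length' .charon/cluster-lock.json
+
+# Group public key of the first distributed validator
+jq -r '.distributed_validators[0].distributed_public_key' .charon/cluster-lock.json
+```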
diff --git a/docs/versioned_docs/version-v0.7.0/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.7.0/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..cd09e21841
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dv/09_charon_cli_reference.md
@@ -0,0 +1,203 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.7.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.7.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+### Available Commands
+
+The following are the top-level commands available to use.
+
+```markdown
+charon help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Start a discv5 bootnode server
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print this client's Ethereum Node Record
+ help Help about any command
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+### `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+
+```
+
+#### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+```
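+
+For example, using the container image from the quickstart guides later in these docs (the tag is assumed to match this release), a new ENR private key can be generated in the default `.charon` data directory:
+
+```sh
+# Generate a charon ENR private key under ./.charon and print the resulting ENR
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.7.0 create enr
+```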
+
+#### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster-lock.json` file, and deposit and exit data. However, this command should only be used for solo operation of distributed validators. To run a distributed validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command, so that no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, and a cluster manifest. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default ".charon/cluster")
+ -h, --help Help for cluster
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Ethereum address to receive the returned stake and accrued rewards. (default "0x0000000000000000000000000000000000000000")
+```
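+
+As a minimal sketch (the withdrawal address is the burn address used elsewhere in these docs, not a recommendation), a local four-node test cluster could be created with:
+
+```sh
+# Create a 4-node, threshold-3 test cluster on the default prater network,
+# writing each node's artifacts under .charon/cluster
+charon create cluster \
+  --nodes=4 \
+  --threshold=3 \
+  --withdrawal-address="0x000000000000000000000000000000000000dead" \
+  --cluster-dir=".charon/cluster"
+```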
+
+#### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file that is used by the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, keycast, frost (default "default")
+ --fee-recipient-address string Optional Ethereum address of the fee recipient
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings Comma-separated list of each operator's Charon ENR address
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Withdrawal Ethereum address (default "0x0000000000000000000000000000000000000000")
+```
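+
+For illustration only (the operator ENRs below are truncated placeholders; supply each operator's full ENR), the co-ordinating operator of a four-node cluster might prepare a definition as follows:
+
+```sh
+# Write .charon/cluster-definition.json for one distributed validator
+# operated by four charon nodes
+charon create dkg \
+  --num-validators=1 \
+  --withdrawal-address="0x000000000000000000000000000000000000dead" \
+  --output-dir=".charon" \
+  --operator-enrs="enr:-JG4...aaa,enr:-JG4...bbb,enr:-JG4...ccc,enr:-JG4...ddd"
+```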
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit and exit data for each new distributed validator. The command outputs the `cluster-lock.json` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+      --p2p-bootnode-relay                 Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+```
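+
+Assuming the cluster definition file is at its default location, each operator would then run something like:
+
+```sh
+# Participate in the DKG ceremony described by .charon/cluster-definition.json
+charon dkg \
+  --definition-file=".charon/cluster-definition.json" \
+  --data-dir=".charon"
+```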
+
+### Run the Charon middleware
+
+The `run` command accepts a `cluster-lock.json` file that was created either by the `charon create cluster` command or by `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL (default "http://localhost/")
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing
+ --jaeger-service string Service name used for jaeger tracing (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster (default ".charon/cluster-lock.json")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof) (default "127.0.0.1:3620")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+      --p2p-bootnode-relay                 Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:16000/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API (default "127.0.0.1:3600")
+```
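+
+As a sketch (the beacon node URL is only an example; point it at your own node or provider), a charon node could then be started with:
+
+```sh
+# Run the charon middleware using the lock file produced by the DKG
+charon run \
+  --beacon-node-endpoint="http://localhost:5052" \
+  --lock-file=".charon/cluster-lock.json"
+```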
diff --git a/docs/versioned_docs/version-v0.7.0/dv/README.md b/docs/versioned_docs/version-v0.7.0/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.7.0/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.7.0/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..d90a96b4ed
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,121 @@
+---
+Description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+---
+
+# Distributed Validator Key Generation
+
+## Contents
+
+- [Overview](#overview)
+- [Actors involved](#actors-involved)
+- [Cluster Definition creation](#cluster-definition-creation)
+- [Carrying out the DKG ceremony](#carrying-out-the-dkg-ceremony)
+- [Backing up ceremony artifacts](#backing-up-the-ceremony-artifacts)
+- [Preparing for validator activation](#preparing-for-validator-activation)
+- [DKG verification](#dkg-verification)
+- [Appendix](#appendix)
+
+## Overview
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+To make a distributed validator with no fault tolerance (i.e. all nodes need to be online to sign every message), each key share could be chosen by its operator independently, thanks to the BLS signature scheme used by proof-of-stake Ethereum. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together (four randomly chosen points on a graph don't necessarily sit on the same order-three curve). Doing this in a secure manner, with no one party trusted to distribute the keys, requires what is known as a distributed key generation ceremony.
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://docs.obol.tech/docs/dv/distributed-validator-cluster-manifest).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They sign with this address's private key to authenticate their charon client ahead of the ceremony. The signature covers a hash of the charon client's ENR public key, the `cluster_definition_hash`, and an incrementing `nonce`. This creates a direct linkage between a user, their charon client, and the cluster this client is intended to service, while retaining the ability to update the charon client by incrementing the nonce value and re-signing, as in the standard ENR spec.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p-noise). These keys need to be created by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This definition file is created with the help of the [Distributed Validator Launchpad](https://docs.obol.tech/docs/dvk/distributed_validator_launchpad). The creation process involves a number of steps.
+
+- A `leader` Operator who wishes to co-ordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader` uses the user interface to configure all of the important details about the cluster including:
+ - The `withdrawal address` for the created validators
+ - The `feeRecipient` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required (if not choosing the safe default)
+ - The network (fork_version/chainId) that this cluster will validate on
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields such as the DKG algorithm to use) are serialised and merklised to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the leader is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralised backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralisation of the launchpad.)
+- The leader will then share the URL to this ceremony with their intended participants.
+- Anyone who clicks the ceremony URL, or inputs the `cluster_definition_hash` when prompted on the landing page, will be brought to the ceremony status page (after completing all disclaimers and advisories).
+- A "Connect Wallet" button will be visible beneath the ceremony status container, a participant can click on it to connect their wallet to the site
+ - If the participant connects a wallet that is not in the participant list, the button disables, as there is nothing to do
+ - If the participant connects a wallet that is in the participant list, they get prompted to input the ENR of their charon node.
+ - If the ENR field is populated and validated the participant can now see a "Confirm Cluster Configuration" button. This button triggers one/two signatures.
+    - The participant signs the `cluster_definition_hash`, to prove they are consenting to this exact configuration.
+ - The participant signs their charon node's ENR, to authenticate and authorise that specific charon node to participate on their behalf in the distributed validator cluster.
+  - These signatures are sent to the data availability layer, which verifies that they are correct for the given participant's Ethereum address. If the signatures pass validation, the signature of the definition hash and the signed ENR are saved to the definition object.
+- All participants in the list must sign the definition hash and submit a signed ENR before a DKG ceremony can begin. The outstanding signatures can be easily displayed on the status page.
+- Finally, once all participants have signed their approval, and submitted a charon node ENR to act on their behalf, the definition data can be downloaded as a file if the users click a newly displayed button, `Download Manifest`.
+- At this point each participant must load this definition into their charon client, and the client will attempt to complete the DKG.
+
+## Carrying out the DKG ceremony
+
+Once a participant has their definition file prepared, they pass the file to charon's `dkg` command. Charon reads the ENRs in the definition, confirms that its own ENR is present, and then reaches out to the deployed bootnodes to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which updates the IP address, increments the ENR's nonce, and re-signs it with the client's private key. If an ENR with a higher nonce is seen by a charon client, it updates the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required, charon does the work and outputs the following files to each machine and then exits.
+
+```sh
+# Common data
+.charon/cluster-definition.json # The original definition file from the DV Launchpad or `charon create dkg`
+.charon/cluster-lock.json # New lockfile based on cluster-definition.json with validator group public keys and threshold BLS verifiers included with the initial cluster config
+.charon/deposit-data.json # JSON file of deposit data for the distributed validators
+
+# Sensitive operator-specific data
+.charon/charon-enr-private-key # Created before the ceremony took place [Back this up]
+.charon/validator_keys/ # Folder of key shares to be backed up and moved to validator client [Back this up]
+```
+
+## Backing up the ceremony artifacts
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participants old keys out of a distributed validator in favour of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However for now, without a backup, the safest thing to do would be to exit the validator.
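+
+A minimal sketch of such a backup, assuming the default `.charon` layout shown above, might be:
+
+```sh
+# Archive the sensitive DKG artifacts for offline, secure storage
+tar czf charon-dkg-backup.tar.gz \
+  .charon/charon-enr-private-key \
+  .charon/validator_keys/ \
+  .charon/cluster-lock.json \
+  .charon/deposit-data.json
+```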
+
+## Preparing for validator activation
+
+Once the ceremony is complete and each operator has made a secure backup of their key shares, they must load these key shares into their validator clients and run the `charon run` command to put the node into operation.
+
+All operators should confirm that their charon client logs indicate all nodes are online and connected. They should also verify the readiness of their beacon clients and validator clients. Charon's grafana dashboard is a good way to see the readiness of the full cluster from its perspective.
+
+Once all operators are satisfied with network connectivity, one member can use the Obol Distributed Validator deposit flow to send the required ether and deposit data to the deposit contract, beginning the process of a distributed validator activation. Good luck.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+  - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Using DKG without the launchpad
+
+Charon clients can do a DKG with a definition file that does not contain operator signatures if you pass a `--no-verify` flag to `charon dkg`. This can be used for testing purposes when strict signature verification is not of the utmost importance.
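+
+For example, assuming the definition file is at its default path:
+
+```sh
+# Run the DKG without verifying operator signatures (testing only)
+charon dkg --definition-file=".charon/cluster-definition.json" --no-verify
+```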
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../dv/08_distributed-validator-cluster-manifest.md#cluster-configuration-files).
+
diff --git a/docs/versioned_docs/version-v0.7.0/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.7.0/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..2bc7b3251e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,15 @@
+---
+Description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Distributed Validator launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users who have created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public-good, open-source website built by the Ethereum Foundation alongside participants who later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.7.0/int/working-groups/README.md). To participate in this effort, read through the page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.7.0/dvk/03_dkg_cli_reference.md b/docs/versioned_docs/version-v0.7.0/dvk/03_dkg_cli_reference.md
new file mode 100644
index 0000000000..bdcff5bd77
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dvk/03_dkg_cli_reference.md
@@ -0,0 +1,88 @@
+---
+Description: >-
+  A Rust-based CLI client for hosting and participating in Distributed Validator key generation ceremonies.
+---
+
+# DKG CLI reference
+
+
+:::warning
+
+The `dkg-poc` client is a prototype implementation for generating Distributed Validator Keys. Keys generated with this tool will not work with Charon, and they are not suitable for use. To create keys for a Distributed Validator, use the [`charon create dkg` command](../dv/09_charon_cli_reference.md#creating-the-configuration-for-a-dkg-ceremony) instead.
+
+:::
+
+The following is a reference for `dkg-poc` at commit [`6181fea`](https://github.com/ObolNetwork/dkg-poc/commit/6181feaab2f60bdaaec954f11c04ef49c0b3366a). Find the latest release on our [Github](https://github.com/ObolNetwork/dkg-poc).
+
+`dkg-poc` is implemented as a Rust-based webserver for performing a distributed key generation ceremony. This deployment model raised many user experience and security concerns: for example, it is both hard and likely insecure to set up a TLS-protected webserver at home if you are not a specialist in this area. Further, the PoC is based on an [Aggregatable DKG](https://github.com/kobigurk/aggregatable-dkg) library built on sharing a group element rather than a field element, which makes the resulting threshold signing scheme more complex. These factors led to the deprecation of this approach, though many valuable insights were gained from this client. Currently, a DV Launchpad and charon-based DKG flow serves as the intended [DKG architecture](https://github.com/ObolNetwork/charon/blob/main/docs/dkg.md) for creating Distributed Validator Clusters.
+
+```
+$ dkg-poc --help
+
+dkg-poc 0.1.0
+A Distributed Validator Key Generation client for the Obol Network.
+
+USAGE:
+ dkg-poc
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+SUBCOMMANDS:
+ help Prints this message or the help of the given subcommand(s)
+ lead Lead a new DKG ceremony
+ participate Participate in a DKG ceremony
+
+```
+
+```
+$ dkg-poc lead --help
+
+dkg-poc-lead 0.1.0
+Lead a new DKG ceremony
+
+USAGE:
+ dkg-poc lead [OPTIONS] --num-participants --threshold
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address
+ The address to bind this client to, to participate in the DKG ceremony (Default: 127.0.0.1:8081)
+
+ -e, --enr
+ Provide existing charon ENR for this participant instead of generating a new private key to import
+
+ -n, --num-participants The number of participants taking part in the DKG ceremony
+ -p, --password
+ Password to join the ceremony (Default is to randomly generate a password)
+
+ -t, --threshold
+ Sets the threshold at which point a group of shareholders can create valid signatures
+
+```
+
+```
+$ dkg-poc participate --help
+
+dkg-poc-participate 0.1.0
+Participate in a DKG ceremony
+
+USAGE:
+ dkg-poc participate [OPTIONS] --leader-address
+
+FLAGS:
+ -h, --help Prints help information
+ -V, --version Prints version information
+
+OPTIONS:
+ -a, --address The address to bind this client to, to participate in the DKG ceremony
+ (Default: 127.0.0.1:8081)
+ -e, --enr Provide existing charon ENR for this participant instead of generating a new
+ private key to import
+ -l, --leader-address The address of the webserver leading the DKG ceremony
+ -p, --password Password to join the ceremony (Default is to randomly generate a password)
+```
diff --git a/docs/versioned_docs/version-v0.7.0/dvk/README.md b/docs/versioned_docs/version-v0.7.0/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.7.0/fr/README.md b/docs/versioned_docs/version-v0.7.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.7.0/fr/eth.md b/docs/versioned_docs/version-v0.7.0/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page collects material necessary to catch up with the current state of Ethereum proof-of-stake development and provides readers with the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or are new to the blockchain world entirely, there are appropriate resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.7.0/fr/golang.md b/docs/versioned_docs/version-v0.7.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.7.0/glossary.md b/docs/versioned_docs/version-v0.7.0/glossary.md
new file mode 100644
index 0000000000..53bb274c27
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/glossary.md
@@ -0,0 +1,8 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+### Consensus
+A collection of machines coming to agreement on what to sign together.
+
+### Threshold signing
+The ability to sign a message with only a subset of key holders taking part, giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.7.0/int/README.md b/docs/versioned_docs/version-v0.7.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.7.0/int/faq.md b/docs/versioned_docs/version-v0.7.0/int/faq.md
new file mode 100644
index 0000000000..d5e60f818f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/faq.md
@@ -0,0 +1,37 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validator clients that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+### What is an ENR?
+
+An ENR is shorthand for an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778). It is a way to represent a node on a public network, with a reliable mechanism to update its information. At Obol we use ENRs to identify charon nodes to one another such that they can form clusters with the right charon nodes and not impostors.
+
+ENRs have private keys they use to sign updates to the [data contained](https://enr-viewer.com/) in their ENR. This private key is by default found at `.charon/charon-enr-private-key`, and should be kept secure, and not checked into version control. An ENR looks something like this:
+```
+enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+```
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
+
+### What's with the name Charon?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.7.0/int/key-concepts.md b/docs/versioned_docs/version-v0.7.0/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the problem of a single point of failure. Should less than a third of the participating nodes in the DVT cluster go offline, the remaining active nodes are still able to come to consensus on what to sign and produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission-critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
+
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.7.0/int/overview.md b/docs/versioned_docs/version-v0.7.0/int/overview.md
new file mode 100644
index 0000000000..e178579dbd
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can best be visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move on to the next great scaling challenge: stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol as a layer is focused on scaling main chain staking by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that preserve validators' current client and remote signing configurations.
+
+Similar to how roll up technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling main chain staking while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer highly available uptime will continue to be a moat and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking as a service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group of individuals that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators they trust sufficiently to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.7.0/int/quickstart/README.md b/docs/versioned_docs/version-v0.7.0/int/quickstart/README.md
new file mode 100644
index 0000000000..bd2483c7cf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/quickstart/README.md
@@ -0,0 +1,2 @@
+# quickstart
+
diff --git a/docs/versioned_docs/version-v0.7.0/int/quickstart/index.md b/docs/versioned_docs/version-v0.7.0/int/quickstart/index.md
new file mode 100644
index 0000000000..bc22286f06
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/quickstart/index.md
@@ -0,0 +1,12 @@
+# Quickstart Guides
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+There are two ways to test out a distributed validator: on your own, by running all of the required software as containers within docker, or with a group of other node operators, where each of you runs only one validator client and charon client, and the charon clients communicate with one another over the public internet to operate the distributed validator. The second approach requires each operator to open a port on the internet so that all charon nodes can communicate with one another optimally.
+
+The following are guides to getting started with our template repositories. The intention is to support every combination of beacon clients and validator clients with compose files.
+
+- [Running the full cluster alone.](./quickstart-alone.md)
+- [Running one node in a cluster with a group of other node operators.](./quickstart-group.md)
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.7.0/int/quickstart/quickstart-alone.md b/docs/versioned_docs/version-v0.7.0/int/quickstart/quickstart-alone.md
new file mode 100644
index 0000000000..4914a8745c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/quickstart/quickstart-alone.md
@@ -0,0 +1,56 @@
+---
+sidebar_position: 4
+description: Run all nodes in a distributed validator cluster
+---
+
+# Run a cluster alone
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) template repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+   For simplicity's sake, this repo is configured to work with a remote Beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Create the artifacts needed to run a testnet distributed validator cluster
+
+ ```sh
+ # Create a testnet distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.7.0 create cluster --cluster-dir=".charon" --withdrawal-address="0x000000000000000000000000000000000000dead"
+ ```
+4. Start the cluster
+
+ ```sh
+ # Start the distributed validator cluster
+ docker-compose up
+ ```
+5. Check out the monitoring dashboard and see if things look all right
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+6. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+   * If you use macOS, `.charon`, the default output folder, does not show up in the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This displays hidden folders, allowing you to select the deposit file.
+
+Congratulations, if this all worked, you are now running a distributed validator cluster on a testnet. Try turning off one of the four nodes with `docker stop` and watch whether the validator stays online or begins missing duties, to see for yourself the fault tolerance that Distributed Validator Technology adds to proof-of-stake validation.
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.7.0/int/quickstart/quickstart-group.md b/docs/versioned_docs/version-v0.7.0/int/quickstart/quickstart-group.md
new file mode 100644
index 0000000000..f63de0bb01
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/quickstart/quickstart-group.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 5
+description: Run one node in a multi-operator distributed validator cluster
+---
+
+# Run a cluster with others
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+Creating a distributed validator cluster with a group of other node operators requires five key steps:
+
+* Every operator prepares their software and gets their charon client's [ENR](../faq.md#what-is-an-enr)
+* One operator prepares the terms of the distributed validator key generation ceremony
+ * They select the network, the withdrawal address, the number of 32 ether distributed validators to create, and the ENRs of each operator taking part in the ceremony.
+  * In future, the DV Launchpad will facilitate this process more seamlessly, with all participating operators consenting to the terms.
+* Every operator participates in the DKG ceremony, and once successful, a number of cluster artifacts are created, including:
+ * The private key shares for each distributed validator
+ * The deposit data file containing deposit details for each distributed validator
+ * A `cluster-lock.json` file which contains the finalised terms of this cluster required by charon to operate.
+* Every operator starts their node with `charon run`, and uses their monitoring to determine the cluster health and connectivity
+* Once the cluster is confirmed to be healthy, deposit data files created during this process are activated on the [staking launchpad](https://launchpad.ethereum.org/).
+
+## Getting started with Charon
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) template repository from Github, and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+ # Change directory
+ cd charon-distributed-validator-node/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+   For simplicity's sake, this repo is configured to work with a remote Beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Now create a private key for charon to use for its ENR
+
+ ```sh
+ # Create an ENR private key
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.7.0 create enr
+ ```
+
+   :::warning The ability to replace a deleted or compromised private key is limited at this point. Please make a secure backup of this private key if this distributed validator is important to you. :::
+
+   This command will print your charon client's ENR to the command line. It should look something like:
+
+ ```
+ enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+ ```
+
+ This record identifies your charon client no matter where it communicates from across the internet. It is required for the following step of creating a set of distributed validator private key shares amongst the cluster operators.
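+
+   As a minimal backup sketch (the ENR private key is written inside the `.charon` data directory created by the command above; the exact filename may vary between charon versions), you could copy the whole directory somewhere safe:
+
+   ```sh
+   # Back up the data directory containing the ENR private key
+   cp -r .charon ~/charon-enr-backup-$(date +%Y-%m-%d)
+   ```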
+
+## Performing a Distributed Validator Key Generation Ceremony
+
+To create the private keys for a distributed validator securely, a Distributed Key Generation (DKG) process must take place.
+
+1. After gathering each operator's ENR and setting them in the `.env` file, one operator should prepare the ceremony with `charon create dkg`
+
+ ```sh
+
+ # First set the ENRs of all the operators participating in DKG ceremony in .env file as CHARON_OPERATOR_ENRS
+
+ # Create .charon/cluster-definition.json to participate in DKG ceremony
+ docker run --rm -v "$(pwd):/opt/charon" --env-file .env ghcr.io/obolnetwork/charon:v0.7.0 create dkg
+ ```
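+
+   Purely as an illustration, the `CHARON_OPERATOR_ENRS` line in `.env` is a comma-separated list of every participating client's ENR (the values below are placeholders):
+
+   ```sh
+   # .env – replace the placeholders with the real ENRs gathered from each operator
+   CHARON_OPERATOR_ENRS=enr:<operator-1>,enr:<operator-2>,enr:<operator-3>,enr:<operator-4>
+   ```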
+2. The operator that ran this command should distribute the resulting `cluster-definition.json` file to each operator.
+3. At a pre-agreed time, all operators run the ceremony program with the `charon dkg` command
+
+ ```sh
+ # Copy the cluster-definition.json file to .charon
+ cp cluster-definition.json .charon/
+
+ # Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys/
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.7.0 dkg
+ ```
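+
+   As noted in the comment above, the ceremony writes its artifacts into `.charon`, so a quick check might look like this:
+
+   ```sh
+   # Confirm the ceremony outputs exist
+   ls .charon/
+   # Expect to see cluster-lock.json, deposit-data.json and a validator_keys/ folder
+   ```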
+
+## Verifying cluster health
+
+Once the key generation ceremony has been completed, the charon nodes have the data they need to come together to form a cluster.
+
+1. Start your distributed validator node with docker-compose
+
+ ```sh
+ # Run a charon client, a vc client, and prom+grafana clients as containers
+ docker-compose up
+ ```
+2. Use the pre-prepared [grafana](http://localhost:3000/) dashboard to verify the cluster health looks okay. You should see connections with all other operators in the cluster as healthy, and observed ping times under 1 second for all connections.
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+
+## Activating the distributed validator
+
+Once the cluster is healthy and fully connected, it is time to deposit the required 32 (test) ether to activate the newly created Distributed Validator.
+
+1. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+   * If you use macOS, the default output folder `.charon` does not show up in the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop) to display hidden folders, allowing you to select the deposit file.
+ * A more distributed validator friendly deposit interface is in the works for an upcoming release.
+2. It takes approximately 16 hours for the deposit to be registered on the beacon chain. Future upgrades to the protocol aim to reduce this time.
+3. Once the validator deposit is recognised on the beacon chain, the validator is assigned an index, and the wait for activation begins.
+4. Finally, once the validator is activated, it should be monitored to ensure it is achieving an inclusion distance of near 0 for optimal rewards. You should also tweet the link to your newly activated validator with the hashtag [#RunDVT](https://twitter.com/search?q=%2523RunDVT) 🙃
+
+:::tip Don't forget to be a good testnet steward and exit your validator when you are finished testing with it. :::
diff --git a/docs/versioned_docs/version-v0.7.0/int/working-groups.md b/docs/versioned_docs/version-v0.7.0/int/working-groups.md
new file mode 100644
index 0000000000..0302cd633a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 5
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate single points of technical failure risks on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing the community coordination, participation, and ownership will drive significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of ongoing public incentivised testnets that enable operators of any size to test their deployment before serving on the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research into the demand for and understanding of multi-operator validation. This will be a separate research effort undertaken by Georgia Rakusen. The research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is user-interview based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete an asynchronous survey
+- Pass the survey on to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 70%
+- Phase 1: 65%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes in the Ethereum development community is the high levels of support from all client teams around standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group is in the process of submitting EIPs for a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV cluster, with outputs shaped by the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV LaunchPad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+Phase 1 & Phase 1.5 will launch with no initial members, though they will immediately be open for applications from participants that have joined the Obol Proto Community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the proto community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.7.0/intro.md b/docs/versioned_docs/version-v0.7.0/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.7.0/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.7.0/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..92e96695e1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://ethresear.ch/t/0x03-withdrawal-credentials-simple-eth1-triggerable-withdrawals/10021) when smart contract initiated exits are enabled.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+ receive() external payable {}
+
+ function withdraw(address payable recipient) public onlyOwner {
+ recipient.transfer(address(this).balance);
+ }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements OpenZeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which pulls all ether from the contract into the owner's address (or another specified address). Calling withdraw could also fund a fee split to the Obol Network and/or the protocol that has deployed and instantiated this DV.
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system, for example a yearn vault strategy might use an immutable recipient contract as its vault address should never change.
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry allows the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However, when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 eth deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on-chain programmatically rather than manually with humans triggering exits, a validator registry can be used. Deposits that get activated receive an entry in the validator registry, and validators using 0x03 exits are staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as of yet unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.7.0/sc/README.md b/docs/versioned_docs/version-v0.7.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.7.0/testnet.md b/docs/versioned_docs/version-v0.7.0/testnet.md
new file mode 100644
index 0000000000..5e26907547
--- /dev/null
+++ b/docs/versioned_docs/version-v0.7.0/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# testnet
+
+## Testnets
+
+
+
+Over the coming quarters, Obol Labs will be coordinating and hosting a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features to be completed by each testnet, and their target start dates and durations.
+
+## Testnets
+
+* [x] Dev Net 1
+* [x] Dev Net 2
+* [ ] Athena Public Testnet 1
+* [ ] Bia Attack net
+* [ ] Circe Public Testnet 2
+* [ ] Demeter Red/Blue net
+
+### Devnet 1
+
+The aim of the first devnet will be to have a number of trusted operators test out our earliest tutorial flows. A single user will complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with the option of adding a single consensus layer client from a weak subjectivity checkpoint (the default will be to connect to our Kiln RPC server; we shouldn't get too much load for this phase). The keys will be created locally in charon, and activated with the existing launchpad or ethdo.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** May 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works
+* Get the basics of monitoring in place for the following testnet, where accurate monitoring will be important because charon will be running across a real network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The aim of the second devnet will be to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim will be for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or place of choosing, running at least a Kiln consensus client.
+
+As part of this testnet, operators will need to expose charon to the public internet on a static IP address.
+
+This devnet will also be the first time `charon dkg` is tested with users. The launchpad is not anticipated to be complete, and this dkg will be triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+
+A core focus of this devnet will be collecting network performance data. This will be the first time charon runs on variable, non-virtual networks (i.e. the real internet). Effective collection of performance data here will enable gathering even higher-signal performance data at scale during the public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Target start date:** June 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows developed and refined, the goal for public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+This testnet would be intended to include the Distributed Validator Launchpad.
+
+The core focus of this testnet is the onboarding experience. This is the first time we would need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) and in as many languages as possible (we will need to engage language moderators on Discord).
+
+The core output from this testnet is a large number of typeform submissions, for a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and will form the basis for figuring out a sybil-resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** July 2022
+
+**Duration:** 2 week setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet needs a consensus implementation as a hard requirement, where it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS-vulnerable endpoints and operations, and missing signature verifications as possible. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** August 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet 2
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lock file, and exit data (so we can verify that the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans who have created legitimate DV keys and are not sybil attacking the testnet, their validator will be activated with Obol GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** September 2022 ([Dappcon](https://www.dappcon.io/) runs 12th-14th of Sept. )
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the cyber security industry and makes use of red teams and blue teams.
+
+In cyber security, the red team is on offence and the blue team is on defence. In Obol's case, operators will be grouped into clusters based on their applications and assigned to either the red team or the blue team in secret. Once the validators are active, the red teamers' goal will be to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse than optimal the cluster performs.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of the time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that even with directly incentivised byzantine actors, a distributed validator cluster can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** October 2022 ([Devcon 6](https://devcon.org/en/#road-to-devcon) runs 7th-16th of October. )
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault tolerant validation is real, safe and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v0.8.0/README.md b/docs/versioned_docs/version-v0.8.0/README.md
new file mode 100644
index 0000000000..d0729dda20
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.8.0
+
diff --git a/docs/versioned_docs/version-v0.8.0/cg/README.md b/docs/versioned_docs/version-v0.8.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.8.0/cg/bug-report.md b/docs/versioned_docs/version-v0.8.0/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use clear language, while providing all the relevant information you have on hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+   2. Nicholas Nickleby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.8.0/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.8.0/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..aaba2119c8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dv/01_introducing-charon.md
@@ -0,0 +1,29 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.8.0/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is a Go-based HTTP middleware built by Obol that enables any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+### Charon architecture
+
+The graphic below outlines the internal functionality of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet. See [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```
+docker run ghcr.io/obolnetwork/charon:v0.8.0 --help
+```
+
+For more information on running charon, take a look at our [quickstart guide](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.8.0/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.8.0/dv/02_validator-creation.md
new file mode 100644
index 0000000000..f13437b26d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dv/02_validator-creation.md
@@ -0,0 +1,31 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps:
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+    * This involves setting all of the terms for the cluster, including: withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a _cluster configuration_.
+ * This operator also sets their charon client's Ethereum Node Record ([ENR](../int/faq.md#what-is-an-enr)).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a shareable URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+ * They submit the ENR of their charon client.
+ * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster definition file.
+4. Every operator passes this cluster definition file to the `charon dkg` command (a minimal command sketch follows this list). The definition provides the charon process with the information it needs to find and complete the DKG ceremony with the other charon clients involved.
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A `cluster-lock.json` file, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated public key shares. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares, their ENR private key if they have not yet done so, and the `cluster-lock.json` file.
+7. All operators load the keys and cluster lockfiles generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to ensure connectivity between all operators at a reasonable latency is observed.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
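+
+As a minimal sketch of steps 4 and 5 above, each operator runs the DKG against the shared definition file (the paths shown are charon's documented defaults and may differ in your deployment):
+
+```sh
+# Run the distributed key generation ceremony against the agreed cluster definition
+charon dkg --definition-file=.charon/cluster-definition.json --data-dir=.charon
+```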
diff --git a/docs/versioned_docs/version-v0.8.0/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.8.0/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..eddc58cf9e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dv/04_middleware-daemon.md
@@ -0,0 +1,15 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware Architecture
+
+
+
+The Charon daemon sits as a middleware between the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/) and any downstream validator clients.
+
+### Operation
+
+The middleware strives to be stateless and statically configured through 777 file systems. The lack of a control-plane API for online reconfiguration is deliberate to keep operations simple and secure by default.
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
diff --git a/docs/versioned_docs/version-v0.8.0/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.8.0/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..9ea67f7faf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dv/06_peer-discovery.md
@@ -0,0 +1,37 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security and sybil-resistance, charon clients need to be able to authenticate one another. We achieve this by giving each charon client a public/private key pair to sign with, so that other clients in the cluster can recognise it as legitimate no matter which IP address it communicates from.
+
+At the end of a [DKG ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+### Authenticating a distributed validator client
+
+Before a DKG process begins, all operators must run `charon create enr`, or just `charon enr`, to create or get the Ethereum Node Record for their client. These ENRs are included in the configuration of a Distributed Key Generation ceremony.
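+
+For example, assuming the ENR private key already exists in charon's default `.charon` data directory, printing the record is a single command:
+
+```sh
+# Print this client's Ethereum Node Record so it can be shared with the other operators
+charon enr
+```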
+
+The file that outlines a DKG ceremony is known as a [`cluster-definition`](./08_distributed-validator-cluster-manifest.md) file. This file is passed to `charon dkg` which uses it to create private keys, a cluster lock file and deposit data for the configured number of distributed validators. The cluster-lock file will be made available to `charon run`, and the validator key stores will be made available to the configured validator client.
+
+When `charon run` starts up and ingests its configuration from the `cluster-lock.json` file, it checks whether its observed/configured public IP address differs from what is listed in the lock file. If it is different, it updates the IP address, increments the nonce of the ENR, and reissues it before beginning to establish connections with the other operators in the cluster.
+
+#### Node database
+
+Distributed Validator Clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely. This is acceptable as the number of operators in a cluster is expected to stay constant. Mutable cluster operators will be introduced in future.
+
+#### Node discovery
+
+At boot, a charon client will ingest its configured `cluster-lock.json` file. This file contains a list of ENRs of the client's peers. The client will attempt to establish a connection with these peers, performing a handshake on connection to establish an end-to-end encrypted communication channel between the clients.
+
+However, the IP addresses within an ENR can become stale, which could result in a cluster not being able to establish a connection with all nodes. To be tolerant of operator IP addresses changing, charon also supports the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) discovery protocol. This allows a charon client to find another operator that might have moved IP address but still retains the same ENR private key.
+
+
diff --git a/docs/versioned_docs/version-v0.8.0/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.8.0/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..50de00d79a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dv/07_p2p-interface.md
@@ -0,0 +1,13 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Identity: [Ethereum Node Records](https://eips.ethereum.org/EIPS/eip-778).
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Each charon client must have their ENR public key authorized in a [cluster-lock.json](./08_distributed-validator-cluster-manifest.md) file in order for the client handshake to succeed.
+- Discovery: [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md).
diff --git a/docs/versioned_docs/version-v0.8.0/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.8.0/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..9c2b959a44
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,65 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Cluster Configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client (or cluster) locally or in production.
+
+## Cluster Configuration Files
+
+A charon cluster is configured in two steps:
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+The `charon create dkg` command is used to create `cluster-definition.json` file which is used as input to `charon dkg`.
+
+The `charon create cluster` command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
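+
+As a rough sketch of the two paths, using only flags documented in the [CLI reference](./09_charon_cli_reference.md) (the ENR values are placeholders):
+
+```sh
+# Multi-operator path: one operator defines the cluster, then everyone runs the DKG
+charon create dkg --operator-enrs=enr:<operator-1>,enr:<operator-2>,enr:<operator-3>,enr:<operator-4> --num-validators=1 --output-dir=.charon
+charon dkg --definition-file=.charon/cluster-definition.json
+
+# Single-operator path: create keys, cluster-lock.json and deposit data locally, no DKG
+charon create cluster --nodes=4 --threshold=3 --cluster-dir=.charon/cluster
+```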
+
+The schema of the `cluster-definition.json` is defined as:
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "nonce": 1, // Nonce (incremented each time the ENR is added/signed)
+ "config_signature": "123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.0.0", // Schema version
+ "num_validators": 100, // Number of distributed validators to be created in cluster.lock
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "fee_recipient_address":"0x123..abfc", // ETH1 fee_recipient address
+ "withdrawal_address": "0x123..abfc", // ETH1 withdrawal address
+ "dkg_algorithm": "foo_dkg_v1" , // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "abcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "abcdef...abcedef" // Final Hash of all fields
+}
+```
+
+The above `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+The `cluster-lock.json` has the following schema:
+```json
+{
+  "cluster_definition": {...},            // Cluster definition json, identical schema to above
+  "distributed_validators": [             // Length equal to num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "oA8Z...2XyT", "g1q...icu"], // Public Key Shares
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
diff --git a/docs/versioned_docs/version-v0.8.0/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.8.0/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..cb9177ef92
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dv/09_charon_cli_reference.md
@@ -0,0 +1,203 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.8.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.8.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+### Available Commands
+
+The following are the top-level commands available to use.
+
+```markdown
+charon help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Start a discv5 bootnode server
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print this client's Ethereum Node Record
+ help Help about any command
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+### `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+
+```
+
+#### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+```
+
+#### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster.lock` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default ".charon/cluster")
+ -h, --help Help for cluster
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Ethereum address to receive the returned stake and accrued rewards. (default "0x0000000000000000000000000000000000000000")
+```
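+
+For illustration, the following sketch (using only flags from the help text above, with placeholder values) creates a four-node test cluster on the Prater network:
+
+```sh
+# Create a 4-node, threshold-3 test cluster; the withdrawal address below is a
+# well-known burn address used purely as a placeholder for testing.
+charon create cluster \
+  --network=prater \
+  --nodes=4 \
+  --threshold=3 \
+  --withdrawal-address="0x000000000000000000000000000000000000dead" \
+  --cluster-dir=.charon/cluster
+```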
+
+#### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file that is used by all operators when running the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, keycast, frost (default "default")
+ --fee-recipient-address string Optional Ethereum address of the fee recipient
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings Comma-separated list of each operator's Charon ENR address
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Withdrawal Ethereum address (default "0x0000000000000000000000000000000000000000")
+```
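+
+A hedged example invocation, with placeholder ENRs standing in for each operator's real charon ENR:
+
+```sh
+# Create .charon/cluster-definition.json for a 4-operator, 1-validator test cluster.
+# Replace the <operator-N-enr> placeholders with each operator's real charon ENR.
+charon create dkg \
+  --num-validators=1 \
+  --network=prater \
+  --withdrawal-address="0x000000000000000000000000000000000000dead" \
+  --operator-enrs="<operator-1-enr>,<operator-2-enr>,<operator-3-enr>,<operator-4-enr>" \
+  --output-dir=.charon
+```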
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit and exit data for each new distributed validator. The command outputs the `cluster-lock.json` file and the key shares for each distributed validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+```
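+
+A minimal example run, assuming the definition file has already been placed in the default `.charon` directory:
+
+```sh
+# All operators run this at the agreed time; on success charon writes
+# cluster-lock.json, deposit-data.json and validator_keys/ into the data directory.
+charon dkg \
+  --definition-file=.charon/cluster-definition.json \
+  --data-dir=.charon
+```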
+
+### Run the Charon middleware
+
+The `charon run` command accepts a `cluster-lock.json` file that was created either by `charon create cluster` or by `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL (default "http://localhost/")
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing
+ --jaeger-service string Service name used for jaeger tracing (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster (default ".charon/cluster-lock.json")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof) (default "127.0.0.1:3620")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API (default "127.0.0.1:3600")
+```
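+
+An illustrative invocation, assuming a locally running beacon node (the endpoint and addresses below are placeholders, not values taken from your setup):
+
+```sh
+# Start charon using the lock file produced by the DKG, pointing it at a local
+# beacon node and exposing the validator-facing API on port 3600.
+charon run \
+  --beacon-node-endpoint="http://localhost:5052" \
+  --lock-file=.charon/cluster-lock.json \
+  --validator-api-address="0.0.0.0:3600" \
+  --log-level=info
+```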
diff --git a/docs/versioned_docs/version-v0.8.0/dv/README.md b/docs/versioned_docs/version-v0.8.0/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.8.0/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.8.0/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..49e6557706
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,121 @@
+---
+Description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+---
+
+# Distributed Validator Key Generation
+
+## Contents
+
+- [Overview](#overview)
+- [Actors involved](#actors-involved)
+- [Cluster Definition creation](#cluster-definition-creation)
+- [Carrying out the DKG ceremony](#carrying-out-the-dkg-ceremony)
+- [Backing up ceremony artifacts](#backing-up-the-ceremony-artifacts)
+- [Preparing for validator activation](#preparing-for-validator-activation)
+- [DKG verification](#dkg-verification)
+- [Appendix](#appendix)
+
+## Overview
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+To make a distributed validator with no fault tolerance (i.e. one where all nodes need to be online to sign every message), each key share could be chosen by the operators independently, thanks to the BLS signature scheme used by proof-of-stake Ethereum. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together. (Four randomly chosen points on a graph don't all necessarily sit on the same order-three curve.) Doing this securely, with no single party trusted to distribute the keys, requires what is known as a distributed key generation ceremony.
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://docs.obol.tech/docs/dv/distributed-validator-cluster-manifest).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign with this address's private key to authenticate their charon client ahead of the ceremony. The signature covers a hash of the charon client's ENR public key, the `cluster_definition_hash`, and an incrementing `nonce`. This creates a direct link between a user, their charon client, and the cluster this client is intended to service, while retaining the ability to update the charon client by incrementing the nonce value and re-signing, as in the standard ENR spec.
+
+- A `Charon client` is also identified by a public/private key pair; in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end-to-end encrypted manner](https://github.com/libp2p/go-libp2p-noise). These keys need to be created by each operator before they can participate in cluster creation.
+
+## Cluster Definition Creation
+
+This definition file is created with the help of the [Distributed Validator Launchpad](https://docs.obol.tech/docs/dvk/distributed_validator_launchpad). The creation process involves a number of steps.
+
+- A `leader` Operator who wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader` uses the user interface to configure all of the important details about the cluster including:
+ - The `withdrawal address` for the created validators
+ - The `feeRecipient` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required (if not choosing the safe default)
+ - The network (fork_version/chainId) that this cluster will validate on
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields, such as the DKG algorithm to use) are serialised and merklised to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the leader is satisfied with the configuration, they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralised backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralisation of the launchpad.)
+- The leader will then share the URL to this ceremony with their intended participants.
+- Anyone who clicks the ceremony URL, or inputs the `cluster_definition_hash` when prompted on the landing page, will be brought to the ceremony status page (after completing all disclaimers and advisories).
+- A "Connect Wallet" button will be visible beneath the ceremony status container; a participant can click on it to connect their wallet to the site.
+ - If the participant connects a wallet that is not in the participant list, the button disables, as there is nothing to do
+ - If the participant connects a wallet that is in the participant list, they get prompted to input the ENR of their charon node.
+  - Once the ENR field is populated and validated, the participant can see a "Confirm Cluster Configuration" button. This button triggers one or two signatures:
+    - The participant signs the `cluster_definition_hash`, to prove they are consenting to this exact configuration.
+    - The participant signs their charon node's ENR, to authenticate and authorise that specific charon node to participate on their behalf in the distributed validator cluster.
+  - The signature(s) are sent to the data availability layer, which verifies that they are correct for the given participant's Ethereum address. If they pass validation, the signature of the definition hash and the signed ENR get saved to the definition object.
+- All participants in the list must sign the definition hash and submit a signed ENR before a DKG ceremony can begin. The outstanding signatures can be easily displayed on the status page.
+- Finally, once all participants have signed their approval and submitted a charon node ENR to act on their behalf, the definition data can be downloaded as a file by clicking the newly displayed `Download Manifest` button.
+- At this point each participant must load this definition into their charon client, and the client will attempt to complete the DKG.
+
+## Carrying out the DKG ceremony
+
+Once a participant has their definition file prepared, they pass the file to charon's `dkg` command. Charon reads the ENRs in the definition, confirms that its own ENR is present, and then reaches out to the deployed bootnodes to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which updates the IP address, increments the ENR's nonce, and re-signs it with the client's private key. If an ENR with a higher nonce is seen by a charon client, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required; charon does the work, outputs the following files to each machine, and then exits.
+
+```sh
+# Common data
+.charon/cluster-definition.json # The original definition file from the DV Launchpad or `charon create dkg`
+.charon/cluster-lock.json # New lockfile based on cluster-definition.json with validator group public keys and threshold BLS verifiers included with the initial cluster config
+.charon/deposit-data.json # JSON file of deposit data for the distributed validators
+
+# Sensitive operator-specific data
+.charon/charon-enr-private-key # Created before the ceremony took place [Back this up]
+.charon/validator_keys/ # Folder of key shares to be backed up and moved to validator client [Back this up]
+```
+
+## Backing up the ceremony artifacts
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participant's old keys out of a distributed validator in favour of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However, for now, without a backup the safest thing to do would be to exit the validator.
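+
+As a rough sketch of one way to do this (the command and paths are illustrative, not an official backup procedure), the artifacts listed above can be archived and copied somewhere offline:
+
+```sh
+# Archive the ENR private key, validator key shares and cluster lock file for offline storage
+tar czf charon-backup-$(date +%Y%m%d).tar.gz \
+  .charon/charon-enr-private-key \
+  .charon/validator_keys/ \
+  .charon/cluster-lock.json
+```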
+
+## Preparing for validator activation
+
+Once the ceremony is complete and each operator has made secure backups of their key shares, they must load these key shares into their validator clients and run the `charon run` command to put their node into operation.
+
+All operators should confirm that their charon client logs indicate all nodes are online and connected. They should also verify the readiness of their beacon clients and validator clients. Charon's grafana dashboard is a good way to see the readiness of the full cluster from its perspective.
+
+Once all operators are satisfied with network connectivity, one member can use the Obol Distributed Validator deposit flow to send the required ether and deposit data to the deposit contract, beginning the process of a distributed validator activation. Good luck.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair DKG with the participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+  - This can be checked on chain as it does not require a pairing operation.
+  - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. was the ceremony sybil attacked, did the operators collude to reconstitute the group private key, etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Using DKG without the launchpad
+
+Charon clients can do a DKG with a definition file that does not contain operator signatures if you pass a `--no-verify` flag to `charon dkg`. This can be used for testing purposes when strict signature verification is not of the utmost importance.
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../dv/08_distributed-validator-cluster-manifest.md#cluster-configuration-files).
+
diff --git a/docs/versioned_docs/version-v0.8.0/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.8.0/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..5a613df7df
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,15 @@
+---
+Description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Distributed Validator launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.8.0/int/working-groups/README.md). To participate in this effort, read through the page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.8.0/dvk/README.md b/docs/versioned_docs/version-v0.8.0/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.8.0/fr/README.md b/docs/versioned_docs/version-v0.8.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.8.0/fr/eth.md b/docs/versioned_docs/version-v0.8.0/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page serves material necessary to catch up with the current state of Ethereum proof-of-stake development and provides readers with the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or are new to the blockchain world entirely, there are appropriate resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.8.0/fr/golang.md b/docs/versioned_docs/version-v0.8.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.8.0/glossary.md b/docs/versioned_docs/version-v0.8.0/glossary.md
new file mode 100644
index 0000000000..53bb274c27
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/glossary.md
@@ -0,0 +1,8 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+### Consensus
+A collection of machines coming to agreement on what to sign together
+
+### Threshold signing
+Being able to sign a message with only a subset of key holders taking part - giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.8.0/int/README.md b/docs/versioned_docs/version-v0.8.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.8.0/int/faq.md b/docs/versioned_docs/version-v0.8.0/int/faq.md
new file mode 100644
index 0000000000..85304fa80c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/faq.md
@@ -0,0 +1,39 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validator clients that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+You can split an existing EIP-2335 keystore for a validator to migrate it to a distributed validator architecture with the `charon create cluster --split-existing-keys` command documented [here](../dv/09_charon_cli_reference.md#create-a-full-cluster-locally).
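+
+As a hedged illustration of that command (the flags are taken from the CLI reference; paths and values are placeholders, and the original validator must be fully shut down first):
+
+```sh
+# Split an existing EIP-2335 keystore (keystore-*.json plus keystore-*.txt password
+# files) into key shares for a 4-node, threshold-3 distributed validator.
+charon create cluster \
+  --split-existing-keys \
+  --split-keys-dir=./existing-validator-keys \
+  --nodes=4 \
+  --threshold=3
+```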
+
+### What is an ENR?
+
+An ENR is shorthand for an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778). It is a way to represent a node on a public network, with a reliable mechanism to update its information. At Obol we use ENRs to identify charon nodes to one another such that they can form clusters with the right charon nodes and not impostors.
+
+ENRs have private keys they use to sign updates to the [data contained](https://enr-viewer.com/) in their ENR. This private key is by default found at `.charon/charon-enr-private-key`, and should be kept secure, and not checked into version control. An ENR looks something like this:
+```
+enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+```
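+
+One simple way to reduce the risk of accidentally committing or exposing the private key (an illustrative precaution, not an official requirement):
+
+```sh
+# Keep the ENR private key out of git and restrict filesystem permissions
+echo ".charon/charon-enr-private-key" >> .gitignore
+chmod 600 .charon/charon-enr-private-key
+```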
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
+
+### What's with the name Charon?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.8.0/int/key-concepts.md b/docs/versioned_docs/version-v0.8.0/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the problem of single-point failure. Should <33% of the participating nodes in the DVT cluster go offline, the remaining active nodes are still able to come to consensus on what to sign and produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
+
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.8.0/int/overview.md b/docs/versioned_docs/version-v0.8.0/int/overview.md
new file mode 100644
index 0000000000..8e3fefcbcf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move on to the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be treated as primary design principles.
+
+Obol as a layer is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing configurations.
+
+Similar to how rollup technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking as a service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group of individuals that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators, to a sufficient level of trust, to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.8.0/int/quickstart/README.md b/docs/versioned_docs/version-v0.8.0/int/quickstart/README.md
new file mode 100644
index 0000000000..bd2483c7cf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/quickstart/README.md
@@ -0,0 +1,2 @@
+# quickstart
+
diff --git a/docs/versioned_docs/version-v0.8.0/int/quickstart/index.md b/docs/versioned_docs/version-v0.8.0/int/quickstart/index.md
new file mode 100644
index 0000000000..bc22286f06
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/quickstart/index.md
@@ -0,0 +1,12 @@
+# Quickstart Guides
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+There are two ways to test out a distributed validator: on your own, by running all of the required software as containers within docker; or with a group of other node operators, where each of you runs only one validator client and charon client, and the charon clients communicate with one another over the public internet to operate the distributed validator. The second approach requires each operator to open a port on the internet so that all charon nodes can communicate with one another optimally.
+
+The following are guides to getting started with our template repositories. The intention is to support every combination of beacon clients and validator clients with compose files.
+
+- [Running the full cluster alone.](./quickstart-alone.md)
+- [Running one node in a cluster with a group of other node operators.](./quickstart-group.md)
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.8.0/int/quickstart/quickstart-alone.md b/docs/versioned_docs/version-v0.8.0/int/quickstart/quickstart-alone.md
new file mode 100644
index 0000000000..5906bedc06
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/quickstart/quickstart-alone.md
@@ -0,0 +1,56 @@
+---
+sidebar_position: 4
+description: Run all nodes in a distributed validator cluster
+---
+
+# Run a cluster alone
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) template repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+   For simplicity's sake, this repo is configured to work with a remote Beacon node such as one from [Infura](https://infura.io/).
+
+   Create an Eth2 project and copy the `https` URL, making sure Prater is selected in the ENDPOINTS dropdown:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Create the artifacts needed to run a testnet distributed validator cluster
+
+ ```sh
+ # Create a testnet distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.0 create cluster --withdrawal-address="0x000000000000000000000000000000000000dead"
+ ```
+4. Start the cluster
+
+ ```sh
+ # Start the distributed validator cluster
+ docker-compose up
+ ```
+5. Checkout the monitoring dashboard and see if things look all right
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+6. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/cluster/deposit-data.json`.
+   * If you use Mac OS, `.charon`, the default output folder, does not show up on the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This should display hidden folders, allowing you to select the deposit file.
+
+Congratulations, if this all worked you are now running a distributed validator cluster on a testnet. Try turning off a single node of the four with `docker stop` and see if the validator stays online or begins missing duties, to see for yourself the fault-tolerance that can be added to proof of stake validation with this new Distributed Validator Technology.
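+
+For example (the container name below is illustrative; list your running containers first, since the exact names depend on the compose project):
+
+```sh
+# Find the charon node containers, then stop one of them
+docker ps --format '{{.Names}}'
+docker stop charon-distributed-validator-cluster-node1-1
+```
+
+Bring the node back with `docker start` (or `docker-compose up`) when you are done observing.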
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.8.0/int/quickstart/quickstart-group.md b/docs/versioned_docs/version-v0.8.0/int/quickstart/quickstart-group.md
new file mode 100644
index 0000000000..e41a77882e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/quickstart/quickstart-group.md
@@ -0,0 +1,116 @@
+---
+sidebar_position: 5
+description: Run one node in a multi-operator distributed validator cluster
+---
+
+# Run a cluster with others
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+Creating a distributed validator cluster with a group of other node operators requires five key steps:
+
+* Every operator prepares their software and gets their charon client's [ENR](../faq.md#what-is-an-enr)
+* One operator prepares the terms of the distributed validator key generation ceremony
+ * They select the network, the withdrawal address, the number of 32 ether distributed validators to create, and the ENRs of each operator taking part in the ceremony.
+ * In future, the DV launchpad will facilitate this process more seamlessly, with consent on the terms provided by all operators that participate.
+* Every operator participates in the DKG ceremony, and once successful, a number of cluster artifacts are created, including:
+ * The private key shares for each distributed validator
+ * The deposit data file containing deposit details for each distributed validator
+ * A `cluster-lock.json` file which contains the finalised terms of this cluster required by charon to operate.
+* Every operator starts their node with `charon run`, and uses their monitoring to determine the cluster health and connectivity
+* Once the cluster is confirmed to be healthy, deposit data files created during this process are activated on the [staking launchpad](https://launchpad.ethereum.org/).
+
+## Getting started with Charon
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) template repository from Github, and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+ # Change directory
+ cd charon-distributed-validator-node/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+   For simplicity's sake, this repo is configured to work with a remote Beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Now create a private key for charon to use for its ENR
+
+ ```sh
+ # Create an ENR private key
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.0 create enr
+ ```
+
+   :::warning
+   The ability to replace a deleted or compromised private key is limited at this point. Please make a secure backup of this private key if this distributed validator is important to you.
+   :::
+
+   This command will print your charon client's ENR to the command line. It should look something like:
+
+ ```
+ enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+ ```
+
+ This record identifies your charon client no matter where it communicates from across the internet. It is required for the following step of creating a set of distributed validator private key shares amongst the cluster operators.
+
+## Performing a Distributed Validator Key Generation Ceremony
+
+To create the private keys for a distributed validator securely, a Distributed Key Generation (DKG) process must take place.
+
+1. After gathering each operator's ENR and setting them in the `.env` file, one operator should prepare the ceremony with `charon create dkg`
+
+ ```sh
+
+ # First set the ENRs of all the operators participating in DKG ceremony in .env file as CHARON_OPERATOR_ENRS
+
+ # Create .charon/cluster-definition.json to participate in DKG ceremony
+ docker run --rm -v "$(pwd):/opt/charon" --env-file .env ghcr.io/obolnetwork/charon:v0.8.0 create dkg
+ ```
+2. The operator that ran this command should distribute the resulting `cluster-definition.json` file to each operator.
+3. At a pre-agreed time, all operators run the ceremony program with the `charon dkg` command
+
+ ```sh
+ # Copy the cluster-definition.json file to .charon
+ cp cluster-definition.json .charon/
+
+ # Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys/
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.0 dkg
+ ```
+
+## Verifying cluster health
+
+Once the key generation ceremony has been completed, the charon nodes have the data they need to come together to form a cluster.
+
+1. Start your distributed validator node with docker-compose
+
+ ```sh
+ # Run a charon client, a vc client, and prom+grafana clients as containers
+ docker-compose up
+ ```
+2. Use the pre-prepared [grafana](http://localhost:3000/) dashboard to verify the cluster health looks okay. You should see connections with all other operators in the cluster as healthy, and observed ping times under 1 second for all connections.
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/singlenode
+ ```
+
+## Activating the distributed validator
+
+Once the cluster is healthy and fully connected, it is time to deposit the required 32 (test) ether to activate the newly created Distributed Validator.
+
+1. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+   * If you use Mac OS, `.charon`, the default output folder, does not show up on the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This should display hidden folders, allowing you to select the deposit file.
+   * A more distributed-validator-friendly deposit interface is in the works for an upcoming release.
+2. It takes approximately 16 hours for the deposit to be registered on the beacon chain. Future upgrades to the protocol aim to reduce this time.
+3. Once the validator deposit is recognised on the beacon chain, the validator is assigned an index, and the wait for activation begins.
+4. Finally, once the validator is activated, it should be monitored to ensure it achieves an inclusion distance of near 0 for optimal rewards. You should also tweet the link to your newly activated validator with the hashtag [#RunDVT](https://twitter.com/search?q=%2523RunDVT) 🙃
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.8.0/int/working-groups.md b/docs/versioned_docs/version-v0.8.0/int/working-groups.md
new file mode 100644
index 0000000000..0302cd633a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 5
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate single points of technical failure risks on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing the community coordination, participation, and ownership will drive significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of ongoing public incentivised testnets that enable operators of any size to test their deployment before serving on the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research on the demand and understanding of multi-operator validation. This will be a separate research effort undertaken by Georgia Rakusen. The research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is user-interview based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete an asynchronous survey
+- Pass the survey on to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 70%
+- Phase 1: 65%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes in the Ethereum development community is the high level of support from all client teams around standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group intends to submit a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV cluster, with outputs based on the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV LaunchPad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+The Phase 1 & Phase 1.5 working groups will launch with no initial members, though they will immediately be open to submissions from participants that have joined the Obol Proto Community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the proto community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.8.0/intro.md b/docs/versioned_docs/version-v0.8.0/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.8.0/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.8.0/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..92e96695e1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://ethresear.ch/t/0x03-withdrawal-credentials-simple-eth1-triggerable-withdrawals/10021) when smart contract initiated exits are enabled.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+ receive() external payable {}
+
+ function withdraw(address payable recipient) public onlyOwner {
+ recipient.transfer(address(this).balance);
+ }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements Open Zeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which pulls all ether from the contract into the owner's address (or another specified address). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system; for example, a yearn vault strategy might use an immutable recipient contract, as its vault address should never change.
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry allows the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However, when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 ETH deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on chain programmatically rather than manually with humans triggering exits, a validator registry can be used. Deposits that get activated receive an entry in the validator registry, and validators using 0x03 exits get staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as of yet unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.8.0/sc/README.md b/docs/versioned_docs/version-v0.8.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.8.0/testnet.md b/docs/versioned_docs/version-v0.8.0/testnet.md
new file mode 100644
index 0000000000..1f08fd224f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.0/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# testnet
+
+## Testnets
+
+
+
+Over the coming quarters, Obol Labs has coordinated and will continue to coordinate and host a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features to be completed by each testnet, and their target start dates and durations.
+
+* [x] [Dev Net 1](testnet.md#devnet-1)
+* [x] [Dev Net 2](testnet.md#devnet-2)
+* [ ] [Athena Public Testnet 1](testnet.md#athena-public-testnet-1)
+* [ ] [Bia Attack net](testnet.md#bia-attack-net)
+* [ ] [Circe Public Testnet 2](testnet.md#circe-public-testnet-ii)
+* [ ] [Demeter Red/Blue net](testnet.md#demeter-redblue-net)
+
+### Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with a remote consensus client. The keys were created locally in charon, and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove that the distributed validator paradigm works, with 4 separate VC implementations operating together as one logical validator
+* Get the basics of monitoring in place for the following testnet, where accurate monitoring will be important due to charon running across a network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or place of choosing, running at least a Kiln consensus client.
+
+As part of this testnet, operators avoided exposing charon to the public internet on a static IP address through the use of Obol hosted relay nodes.
+
+This devnet was also the first time `charon dkg` was tested with users. The launchpad was not used, and this dkg was triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+
+A core focus of this devnet was to collect network performance data. This was the first time charon was run in variable, non-virtual networks (i.e. the real internet). Effective collection of performance data in this devnet was intended to enable gathering even higher-signal performance data at scale during public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+The core focus of this testnet is the onboarding experience. This is the first time we will need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) and in as many languages as possible (engaging language moderators on Discord).
+
+The core output from this testnet is a large number of typeform submissions, for a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and will form the basis for figuring out a sybil resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Target start date:** August 2022
+
+**Duration:** 2 week cluster setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+**Registration Form:** [Here](https://obol.typeform.com/AthenaTestnet)
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet needs a consensus implementation as a hard requirement, where it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS-vulnerable endpoints and operations, missing signature verifications, and similar issues as possible. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** September 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet II
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lockfile, and exit data (so we can verify the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans who are not sybil attacking the testnet and have created legitimate DV keys, their validator will be activated with Obol GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** Q4 2022
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the Cyber Security industry and makes use of Red Teams and Blue Teams.
+
+In Cyber Security, the Red team is on offense and the Blue team is on defence. In Obol's case, operators will be grouped into clusters based on application and assigned to either the red team or the blue team in secret. Once the validators are active, it will be the red teamers' goal to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse the cluster performs than optimal.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of the time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that, even with directly incentivised byzantine actors, a distributed validator client can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** Q4 2022
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault tolerant validation is real, safe and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v0.8.1/README.md b/docs/versioned_docs/version-v0.8.1/README.md
new file mode 100644
index 0000000000..8deea1a6d0
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/README.md
@@ -0,0 +1,2 @@
+# version-v0.8.1
+
diff --git a/docs/versioned_docs/version-v0.8.1/cg/README.md b/docs/versioned_docs/version-v0.8.1/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.8.1/cg/bug-report.md b/docs/versioned_docs/version-v0.8.1/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid duplicate issues or miscommunication.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickelby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```markdown
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this page were forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)._
+
diff --git a/docs/versioned_docs/version-v0.8.1/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.8.1/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..d82d3835b7
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dv/01_introducing-charon.md
@@ -0,0 +1,29 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.8.1/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is a GoLang-based HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+### Charon architecture
+
+The below graphic visually outlines the internal functionalities of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet. See [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```
+docker run ghcr.io/obolnetwork/charon:v0.8.1 --help
+```
+
+For more information on running charon, take a look at our [quickstart guide](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.8.1/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.8.1/dv/02_validator-creation.md
new file mode 100644
index 0000000000..f13437b26d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dv/02_validator-creation.md
@@ -0,0 +1,31 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps:
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+ * This involves setting all of the terms for the cluster, including; withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a _cluster configuration_.
+ * This operator also sets their charon client's Ethereum Node Record ([ENR](../int/faq.md#what-is-an-enr)).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a shareable URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+ * They submit the ENR of their charon client.
+ * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster definition file.
+4. Every operator passes this cluster definition file to the `charon dkg` command. The definition provides the charon process with the information it needs to find and complete the DKG ceremony with the other charon clients involved.
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A `cluster-lock.json` file, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated public key shares. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares, their ENR private key if they have not yet done so, and the `cluster-lock.json` file.
+7. All operators load the keys and cluster lockfiles generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to ensure connectivity between all operators at a reasonable latency is observed.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
diff --git a/docs/versioned_docs/version-v0.8.1/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.8.1/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..eddc58cf9e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dv/04_middleware-daemon.md
@@ -0,0 +1,15 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware Architecture
+
+
+
+The Charon daemon sits as a middleware between the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/) and any downstream validator clients.
+
+### Operation
+
+The middleware strives to be stateless and statically configured through files on disk. The lack of a control-plane API for online reconfiguration is deliberate, to keep operations simple and secure by default.
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
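+
+As a minimal sketch of the Docker distribution channel (assuming the `v0.8.1` image tag used elsewhere in this docs version), pulling the image and printing its version looks like:
+
+```sh
+# Pull the published charon image
+docker pull ghcr.io/obolnetwork/charon:v0.8.1
+
+# Print the client version and exit
+docker run --rm ghcr.io/obolnetwork/charon:v0.8.1 version
+```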
diff --git a/docs/versioned_docs/version-v0.8.1/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.8.1/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..9ea67f7faf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dv/06_peer-discovery.md
@@ -0,0 +1,37 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security and sybil-resistance, charon clients need to be able to authenticate one another. We achieve this by giving each charon client a public/private key pair that they can sign with such that other clients in the cluster will be able to recognise them as legitimate no matter which IP address they communicate from.
+
+At the end of a [DKG ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+### Authenticating a distributed validator client
+
+Before a DKG process begins, all operators must run `charon create enr`, or just `charon enr`, to create or get the Ethereum Node Record for their client. These ENRs are included in the configuration of a Distributed Key Generation ceremony.
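+
+As a hedged example using the dockerised client (image tag assumed to match this docs version), creating and then printing an ENR might look like:
+
+```sh
+# Create a new ENR private key (stored under .charon by default)
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.1 create enr
+
+# Print the ENR for an existing private key
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.1 enr
+```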
+
+The file that outlines a DKG ceremony is known as a [`cluster-definition`](./08_distributed-validator-cluster-manifest.md) file. This file is passed to `charon dkg` which uses it to create private keys, a cluster lock file and deposit data for the configured number of distributed validators. The cluster-lock file will be made available to `charon run`, and the validator key stores will be made available to the configured validator client.
+
+When `charon run` starts up and ingests its configuration from the `cluster-lock.json` file, it checks if its observed/configured public IP address differs from what is listed in the lock file. If it is different, it updates the IP address, increments the nonce of the ENR, and reissues it before beginning to establish connections with the other operators in the cluster.
+
+#### Node database
+
+Distributed Validator Clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely. This is acceptable as the number of operators in a cluster is expected to stay constant. Mutable cluster operators will be introduced in future.
+
+#### Node discovery
+
+At boot, a charon client will ingest its configured `cluster-lock.json` file. This file contains a list of ENRs of the client's peers. The client will attempt to establish a connection with these peers, and will perform a handshake on connection to establish an end-to-end encrypted communication channel between the clients.
+
+However, the IP addresses within an ENR can become stale. This could result in a cluster not being able to establish a connection with all nodes. To be tolerant to operator IP addresses changing, charon also supports the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) discovery protocol. This allows a charon client to find another operator that might have moved IP address, but still retains the same ENR private key.
+
+
diff --git a/docs/versioned_docs/version-v0.8.1/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.8.1/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..50de00d79a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dv/07_p2p-interface.md
@@ -0,0 +1,13 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Identity: [Ethereum Node Records](https://eips.ethereum.org/EIPS/eip-778).
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Each charon client must have their ENR public key authorized in a [cluster-lock.json](./08_distributed-validator-cluster-manifest.md) file in order for the client handshake to succeed.
+- Discovery: [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md).
diff --git a/docs/versioned_docs/version-v0.8.1/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.8.1/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..9c2b959a44
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,65 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Cluster Configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client (or cluster) locally or in production.
+
+## Cluster Configuration Files
+
+A charon cluster is configured in two steps:
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+The `charon create dkg` command is used to create the `cluster-definition.json` file, which is used as input to `charon dkg`.
+
+The `charon create cluster` command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
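+
+To make the two paths concrete, here is a hedged sketch of both flows using the commands referenced above (the ENR values are placeholders and the flags shown are illustrative, not exhaustive):
+
+```sh
+# Multi-operator flow: one operator prepares the definition,
+# then every operator participates in the DKG.
+charon create dkg --operator-enrs "<enr-1>,<enr-2>,<enr-3>,<enr-4>"
+charon dkg --definition-file .charon/cluster-definition.json
+
+# Solo flow: create key shares and cluster-lock.json locally, without a DKG.
+charon create cluster --nodes 4 --threshold 3
+```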
+
+The schema of the `cluster-definition.json` is defined as:
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "nonce": 1, // Nonce (incremented each time the ENR is added/signed)
+ "config_signature": "123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.0.0", // Schema version
+ "num_validators": 100, // Number of distributed validators to be created in cluster.lock
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "fee_recipient_address":"0x123..abfc", // ETH1 fee_recipient address
+ "withdrawal_address": "0x123..abfc", // ETH1 withdrawal address
+ "dkg_algorithm": "foo_dkg_v1" , // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "abcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "abcdef...abcedef" // Final Hash of all fields
+}
+```
+
+The above `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+The `cluster-lock.json` has the following schema:
+```json
+{
+ "cluster_definition": {...}, // Cluster definiition json, identical schema to above,
+ "distributed_validators": [ // Length equaled to num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "oA8Z...2XyT", "g1q...icu"], // Public Key Shares
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
diff --git a/docs/versioned_docs/version-v0.8.1/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.8.1/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..b9bcaee44b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dv/09_charon_cli_reference.md
@@ -0,0 +1,203 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.8.1`](https://github.com/ObolNetwork/charon/releases/tag/v0.8.1). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+### Available Commands
+
+The following are the top-level commands available to use.
+
+```markdown
+charon help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Start a discv5 bootnode server
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print this client's Ethereum Node Record
+ help Help about any command
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+### `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+
+```
+
+#### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+```
+
+#### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster.lock` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default ".charon/cluster")
+ -h, --help Help for cluster
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Ethereum address to receive the returned stake and accrued rewards. (default "0x0000000000000000000000000000000000000000")
+```
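+
+For illustration, a hedged local invocation using the flags above might look like the following (the withdrawal address shown is the documented default and should be replaced for real use):
+
+```sh
+charon create cluster \
+  --nodes 4 \
+  --threshold 3 \
+  --network prater \
+  --cluster-dir ".charon/cluster" \
+  --withdrawal-address "0x0000000000000000000000000000000000000000"
+```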
+
+#### Creating the configuration for a DKG Ceremony
+
+This `charon create dkg` command creates a `cluster-definition.json` file used by the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, keycast, frost (default "default")
+ --fee-recipient-address string Optional Ethereum address of the fee recipient
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings Comma-separated list of each operator's Charon ENR address
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Withdrawal Ethereum address (default "0x0000000000000000000000000000000000000000")
+```
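+
+As a sketch, the configuring operator might run something like the following, where the ENR values are placeholders for the other operators' actual records:
+
+```sh
+charon create dkg \
+  --name "example cluster" \
+  --num-validators 1 \
+  --network prater \
+  --operator-enrs "enr:-<operator-1>,enr:-<operator-2>,enr:-<operator-3>,enr:-<operator-4>" \
+  --output-dir ".charon"
+```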
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit and exit data for each new distributed validator. The command outputs the `cluster.lock` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+```
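+
+Assuming every operator has received the same `cluster-definition.json`, a minimal example of joining the ceremony is:
+
+```sh
+# All operators should run this at roughly the same time
+charon dkg --definition-file .charon/cluster-definition.json --data-dir .charon
+```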
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster.lock` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL (default "http://localhost/")
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing
+ --jaeger-service string Service name used for jaeger tracing (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster (default ".charon/cluster-lock.json")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof) (default "127.0.0.1:3620")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not have publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API (default "127.0.0.1:3600")
+```
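+
+As a minimal illustration (the beacon node URL is a placeholder; the other values are the documented defaults):
+
+```sh
+# Start the charon middleware against a local beacon node
+charon run \
+  --beacon-node-endpoint="http://localhost:5052" \
+  --lock-file=".charon/cluster-lock.json" \
+  --validator-api-address="127.0.0.1:3600" \
+  --monitoring-address="127.0.0.1:3620"
+```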
diff --git a/docs/versioned_docs/version-v0.8.1/dv/README.md b/docs/versioned_docs/version-v0.8.1/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.8.1/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.8.1/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..49e6557706
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,121 @@
+---
+Description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+---
+
+# Distributed Validator Key Generation
+
+## Contents
+
+- [Overview](#overview)
+- [Actors involved](#actors-involved)
+- [Cluster Definition creation](#cluster-definition-creation)
+- [Carrying out the DKG ceremony](#carrying-out-the-dkg-ceremony)
+- [Backing up ceremony artifacts](#backing-up-the-ceremony-artifacts)
+- [Preparing for validator activation](#preparing-for-validator-activation)
+- [DKG verification](#dkg-verification)
+- [Appendix](#appendix)
+
+## Overview
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Because of the BLS signature scheme used by proof-of-stake Ethereum, a distributed validator with no fault tolerance (i.e. one where all nodes need to be online to sign every message) could be made from key shares chosen by each operator independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together. (Four randomly chosen points on a graph don't all necessarily sit on the same order three curve.) To do this in a secure manner, with no one party trusted to distribute the keys, requires what is known as a distributed key generation ceremony.
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://docs.obol.tech/docs/dv/distributed-validator-cluster-manifest).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign with this address's private key to authenticate their charon client ahead of the ceremony. The signature will be over a hash of the charon client's ENR public key, the `cluster_definition_hash`, and an incrementing `nonce`. This creates a direct linkage between a user, their charon client, and the cluster this client is intended to service, while retaining the ability to update the charon client by incrementing the nonce value and re-signing, as in the standard ENR spec.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p-noise). These keys need to be created by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This definition file is created with the help of the [Distributed Validator Launchpad](https://docs.obol.tech/docs/dvk/distributed_validator_launchpad). The creation process involves a number of steps.
+
+- A `leader` Operator that wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader` uses the user interface to configure all of the important details about the cluster including:
+ - The `withdrawal address` for the created validators
+ - The `feeRecipient` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required (if not choosing the safe default)
+ - The network (fork_version/chainId) that this cluster will validate on
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields, like the DKG algorithm to use) are serialised and merklised to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the leader is satisfied with the configuration, they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralised backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralisation of the launchpad.)
+- The leader will then share the URL to this ceremony with their intended participants.
+- Anyone that clicks the ceremony URL, or inputs the `cluster_definition_hash` when prompted on the landing page, will be brought to the ceremony status page (after completing all disclaimers and advisories).
+- A "Connect Wallet" button will be visible beneath the ceremony status container, a participant can click on it to connect their wallet to the site
+ - If the participant connects a wallet that is not in the participant list, the button disables, as there is nothing to do
+ - If the participant connects a wallet that is in the participant list, they get prompted to input the ENR of their charon node.
+  - If the ENR field is populated and validated, the participant can now see a "Confirm Cluster Configuration" button. This button triggers one or two signatures.
+    - The participant signs the `cluster_definition_hash`, to prove they are consenting to this exact configuration.
+ - The participant signs their charon node's ENR, to authenticate and authorise that specific charon node to participate on their behalf in the distributed validator cluster.
+  - This signature (or these signatures) is sent to the data availability layer, which verifies that the signatures are correct for the given participant's Ethereum address. If the signatures pass validation, the signature of the definition hash and the ENR + signature get saved to the definition object.
+- All participants in the list must sign the definition hash and submit a signed ENR before a DKG ceremony can begin. The outstanding signatures can be easily displayed on the status page.
+- Finally, once all participants have signed their approval and submitted a charon node ENR to act on their behalf, the definition data can be downloaded as a file by clicking the newly displayed `Download Manifest` button.
+- At this point each participant must load this definition into their charon client, and the client will attempt to complete the DKG.
+
+## Carrying out the DKG ceremony
+
+Once a participant has their definition file prepared, they will pass the file to charon's `dkg` command. Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to the deployed bootnodes to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which updates the IP address, increments the ENR's nonce, and re-signs it with the client's private key. If an ENR with a higher nonce is seen by a charon client, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required; charon does the work, outputs the following files to each machine, and then exits.
+
+```sh
+# Common data
+.charon/cluster-definition.json # The original definition file from the DV Launchpad or `charon create dkg`
+.charon/cluster-lock.json # New lockfile based on cluster-definition.json with validator group public keys and threshold BLS verifiers included with the initial cluster config
+.charon/deposit-data.json # JSON file of deposit data for the distributed validators
+
+# Sensitive operator-specific data
+.charon/charon-enr-private-key # Created before the ceremony took place [Back this up]
+.charon/validator_keys/ # Folder of key shares to be backed up and moved to validator client [Back this up]
+```
+
+## Backing up the ceremony artifacts
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participant's old keys out of a distributed validator in favour of new keys, allowing the rest of a cluster to recover from a set of lost key shares. For now, however, without a backup the safest thing to do is to exit the validator.
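+
+For example (the archive name and destination are illustrative; store the result somewhere offline and, ideally, encrypted):
+
+```sh
+# Bundle the ceremony artifacts into a single archive for backup
+tar -czf charon-dkg-backup-$(date +%F).tar.gz \
+  .charon/charon-enr-private-key \
+  .charon/cluster-lock.json \
+  .charon/deposit-data.json \
+  .charon/validator_keys/
+```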
+
+## Preparing for validator activation
+
+Once the ceremony is complete and secure backups of key shares have been made by each operator, they must load these key shares into their validator clients and run the `charon run` command to move into operational mode.
+
+All operators should confirm that their charon client logs indicate all nodes are online and connected. They should also verify the readiness of their beacon clients and validator clients. Charon's grafana dashboard is a good way to see the readiness of the full cluster from its perspective.
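+
+For a quick command-line check (a sketch only; the exact metric names exposed may differ between charon versions), you can scrape charon's monitoring endpoint directly:
+
+```sh
+# Query charon's prometheus endpoint (default --monitoring-address is 127.0.0.1:3620)
+# and look for peer/connection related metrics
+curl -s http://127.0.0.1:3620/metrics | grep -i peer
+```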
+
+Once all operators are satisfied with network connectivity, one member can use the Obol Distributed Validator deposit flow to send the required ether and deposit data to the deposit contract, beginning the process of a distributed validator activation. Good luck.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+  - This can be checked on chain as it does not require a pairing operation (a sketch of this check is given after this list)
+  - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys (e.g. was the ceremony sybil attacked, did the operators collude to reconstitute the group private key, etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+  - This is to create a backwards link between newly created BLS public keys and the operators' eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
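+
+A minimal sketch of the first check above, assuming the shares come from Shamir sharing a secret `s` with a degree `t-1` polynomial `f`, so share `i` holds `s_i = f(i)` and publishes the public key share `pk_i = s_i * G`:
+
+```latex
+% Recombining public key shares into the group public key (no pairings required):
+% for any set S of t distinct share indices,
+pk \;=\; s \cdot G
+    \;=\; \Big(\sum_{i \in S} \lambda_i \, s_i\Big) G
+    \;=\; \sum_{i \in S} \lambda_i \, pk_i,
+\qquad
+\lambda_i \;=\; \prod_{\substack{j \in S \\ j \neq i}} \frac{j}{\,j - i\,}
+```
+
+If the Lagrange-weighted combination of the published key shares does not reproduce the advertised group public key, the shares cannot be a valid sharing of that validator key.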
+
+## Appendix
+
+### Using DKG without the launchpad
+
+Charon clients can do a DKG with a definition file that does not contain operator signatures if you pass a `--no-verify` flag to `charon dkg`. This can be used for testing purposes when strict signature verification is not of the utmost importance.
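+
+For example (an illustrative invocation; the definition file path shown is the default):
+
+```sh
+# Run the DKG against an unsigned definition file, skipping operator signature verification
+charon dkg --definition-file=".charon/cluster-definition.json" --no-verify
+```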
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../dv/08_distributed-validator-cluster-manifest.md#cluster-configuration-files).
+
diff --git a/docs/versioned_docs/version-v0.8.1/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.8.1/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..65fc1b48bc
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,15 @@
+---
+Description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Distributed Validator launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.8.1/int/working-groups/README.md). To participate in this effort, read through the page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.8.1/dvk/README.md b/docs/versioned_docs/version-v0.8.1/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.8.1/fr/README.md b/docs/versioned_docs/version-v0.8.1/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.8.1/fr/eth.md b/docs/versioned_docs/version-v0.8.1/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page serves material necessary to catch up with the current state of Ethereum proof-of-stake development and provides readers with the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or are new to the blockchain world entirely, there are appropriate resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.8.1/fr/golang.md b/docs/versioned_docs/version-v0.8.1/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.8.1/glossary.md b/docs/versioned_docs/version-v0.8.1/glossary.md
new file mode 100644
index 0000000000..53bb274c27
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/glossary.md
@@ -0,0 +1,8 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+### Consensus
+A collection of machines coming to agreement on what to sign together
+
+### Threshold signing
+Being able to sign a message with only a subset of key holders taking part - giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.8.1/int/README.md b/docs/versioned_docs/version-v0.8.1/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.8.1/int/faq.md b/docs/versioned_docs/version-v0.8.1/int/faq.md
new file mode 100644
index 0000000000..85304fa80c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/faq.md
@@ -0,0 +1,39 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validator clients that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+You can split an existing EIP-2335 keystore for a validator to migrate it to a distributed validator architecture with the `charon create cluster --split-existing-keys` command documented [here](../dv/09_charon_cli_reference.md#create-a-full-cluster-locally).
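+
+As a rough sketch (only `--split-existing-keys` and `--withdrawal-address` are shown; the flag for pointing at your existing keystore directory varies by version, so check `charon create cluster --help`):
+
+```sh
+# Split an existing EIP-2335 keystore into distributed validator key shares.
+# WARNING: fully shut down and retire the original validator setup first to avoid double signing.
+# Supply the directory containing your existing keystores via the flag listed in `--help`.
+docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.1 create cluster \
+  --split-existing-keys \
+  --withdrawal-address="0x000000000000000000000000000000000000dead"
+```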
+
+### What is an ENR?
+
+An ENR is shorthand for an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778). It is a way to represent a node on a public network, with a reliable mechanism to update its information. At Obol we use ENRs to identify charon nodes to one another such that they can form clusters with the right charon nodes and not impostors.
+
+ENRs have private keys they use to sign updates to the [data contained](https://enr-viewer.com/) in their ENR. This private key is found by default at `.charon/charon-enr-private-key`; it should be kept secure and not checked into version control. An ENR looks something like this:
+```
+enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+```
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
+
+### What's with the name Charon?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.8.1/int/key-concepts.md b/docs/versioned_docs/version-v0.8.1/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the problem of single-point failure. Should less than a third of the participating nodes in the DVT cluster go offline, the remaining active nodes are still able to come to consensus on what to sign and produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
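+
+As a rough sketch of what this looks like in practice (validator client flag names vary by client and are illustrative here), the validator client is simply pointed at charon's validator API address instead of directly at a beacon node, while charon itself points at the real beacon node:
+
+```sh
+# Without a DV client:  validator client -> beacon node
+#   <validator-client> --beacon-node-api=http://beacon-node:5052
+#
+# With charon in the middle:  validator client -> charon -> beacon node
+#   <validator-client> --beacon-node-api=http://localhost:3600   # charon's --validator-api-address
+#   charon run --beacon-node-endpoint=http://beacon-node:5052
+```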
+
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.8.1/int/overview.md b/docs/versioned_docs/version-v0.8.1/int/overview.md
new file mode 100644
index 0000000000..8e3fefcbcf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move on to the next great scaling challenge: stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol as a layer is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing configurations.
+
+Similar to how rollup technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking as a service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group of individuals that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators, to a sufficient level of trust, to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+To add an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of the consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.8.1/int/quickstart/README.md b/docs/versioned_docs/version-v0.8.1/int/quickstart/README.md
new file mode 100644
index 0000000000..bd2483c7cf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/quickstart/README.md
@@ -0,0 +1,2 @@
+# quickstart
+
diff --git a/docs/versioned_docs/version-v0.8.1/int/quickstart/index.md b/docs/versioned_docs/version-v0.8.1/int/quickstart/index.md
new file mode 100644
index 0000000000..bc22286f06
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/quickstart/index.md
@@ -0,0 +1,12 @@
+# Quickstart Guides
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+There are two ways to test out a distributed validator: on your own, by running all of the required software as containers within docker; or with a group of other node operators, where each of you runs only one validator client and charon client, and the charon clients communicate with one another over the public internet to operate the distributed validator. The second approach requires each operator to open a port to the internet so that all charon nodes can communicate with one another optimally.
+
+The following are guides to getting started with our template repositories. The intention is to support every combination of beacon clients and validator clients with compose files.
+
+- [Running the full cluster alone.](./quickstart-alone.md)
+- [Running one node in a cluster with a group of other node operators.](./quickstart-group.md)
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.8.1/int/quickstart/quickstart-alone.md b/docs/versioned_docs/version-v0.8.1/int/quickstart/quickstart-alone.md
new file mode 100644
index 0000000000..dccf6f34b2
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/quickstart/quickstart-alone.md
@@ -0,0 +1,56 @@
+---
+sidebar_position: 4
+description: Run all nodes in a distributed validator cluster
+---
+
+# Run a cluster alone
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) template repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+   For simplicity's sake, this repo is configured to work with a remote beacon node such as one from [Infura](https://infura.io/).
+
+   Create an Eth2 project and copy the `https` URL, making sure Prater is selected in the ENDPOINTS dropdown:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Create the artifacts needed to run a testnet distributed validator cluster
+
+ ```sh
+ # Create a testnet distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.1 create cluster --withdrawal-address="0x000000000000000000000000000000000000dead"
+ ```
+4. Start the cluster
+
+ ```sh
+ # Start the distributed validator cluster
+ docker-compose up
+ ```
+5. Check out the monitoring dashboard and see if things look all right
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+6. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/cluster/deposit-data.json`.
+ * If you use macOS, `.charon`, the default output folder, does not show up in the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This should display hidden folders, allowing you to select the deposit file.
+
+Congratulations, if this all worked, you are now running a distributed validator cluster on a testnet. Try turning off a single node of the four with `docker stop` (as shown below) and watch whether the validator stays online or begins missing duties, to see for yourself the fault tolerance that Distributed Validator Technology adds to proof-of-stake validation.
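+
+For example (container names depend on your compose project, so list them first):
+
+```sh
+# Find the charon node containers, then stop one of them to simulate a node failure
+docker ps --format '{{.Names}}'
+docker stop <name-of-one-charon-node-container>
+```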
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.8.1/int/quickstart/quickstart-group.md b/docs/versioned_docs/version-v0.8.1/int/quickstart/quickstart-group.md
new file mode 100644
index 0000000000..6a793dc385
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/quickstart/quickstart-group.md
@@ -0,0 +1,125 @@
+---
+sidebar_position: 5
+description: Run one node in a multi-operator distributed validator cluster
+---
+
+# Run a cluster with others
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+To create a distributed validator cluster with a group of other node operators requires five key steps:
+
+* Every operator prepares their software and gets their charon client's [ENR](../faq.md#what-is-an-enr)
+* One operator prepares the terms of the distributed validator key generation ceremony
+ * They select the network, the withdrawal address, the number of 32 ether distributed validators to create, and the ENRs of each operator taking part in the ceremony.
+ * In future, the DV launchpad will facilitate this process more seamlessly, with consent on the terms provided by all operators that participate.
+* Every operator participates in the DKG ceremony, and once successful, a number of cluster artifacts are created, including:
+ * The private key shares for each distributed validator
+ * The deposit data file containing deposit details for each distributed validator
+ * A `cluster-lock.json` file which contains the finalised terms of this cluster required by charon to operate.
+* Every operator starts their node with `charon run`, and uses their monitoring to determine the cluster health and connectivity
+* Once the cluster is confirmed to be healthy, deposit data files created during this process are activated on the [staking launchpad](https://launchpad.ethereum.org/).
+
+## Getting started with Charon
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) template repository from Github, and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+ # Change directory
+ cd charon-distributed-validator-node/
+ ```
+2. Next create a private key for charon to use for its ENR
+
+ ```sh
+ # Create an ENR private key
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.1 create enr
+ ```
+
+ This command will print your charon client's ENR to the console. It should look something like:
+
+ ```
+ Created ENR private key: .charon/charon-enr-private-key
+ enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+ ```
+
+   :::warning
+   The ability to replace a deleted or compromised private key is limited at this point. Please make a secure backup of this private key if this distributed validator is important to you.
+   :::
+
+ This record identifies your charon client no matter where it communicates from across the internet. It is required for the following step of creating a set of distributed validator private key shares amongst the cluster operators.
+
+   Please make sure to make a backup of the private key at `.charon/charon-enr-private-key` (see the example below). Be careful not to commit it to git! If you lose this file you won't be able to take part in the DKG ceremony.
+
+ If you are taking part in an organised Obol testnet, submit the created ENR public address (the console output starting with and including `enr:-`, not the contents of the private key file) to the appropriate typeform.
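+
+   As mentioned above, back up the ENR private key before the ceremony. For example (the destination path is illustrative; use whichever secure, offline location you normally use for secrets):
+
+   ```sh
+   # Copy the ENR private key to a secure backup location outside this repository
+   cp .charon/charon-enr-private-key /path/to/secure/backup/charon-enr-private-key
+   ```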
+
+## Performing a Distributed Validator Key Generation Ceremony
+
+To create the private keys for a distributed validator securely, a Distributed Key Generation (DKG) process must take place.
+
+1. After gathering each operator's ENR and setting them in the `.env` file, one operator should prepare the ceremony with `charon create dkg`
+
+ ```sh
+
+ # First set the ENRs of all the operators participating in DKG ceremony in .env file as CHARON_OPERATOR_ENRS
+
+ # Create .charon/cluster-definition.json to participate in DKG ceremony
+ docker run --rm -v "$(pwd):/opt/charon" --env-file .env ghcr.io/obolnetwork/charon:v0.8.1 create dkg
+ ```
+2. The operator that ran this command should distribute the resulting `cluster-definition.json` file to each operator.
+3. At a pre-agreed time, all operators run the ceremony program with the `charon dkg` command
+
+ ```sh
+ # Copy the cluster-definition.json file to .charon
+ cp cluster-definition.json .charon/
+
+ # Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys/
+ docker run --rm -v "$(pwd):/opt/charon" ghcr.io/obolnetwork/charon:v0.8.1 dkg
+ ```
+
+## Verifying cluster health
+
+Once the key generation ceremony has been completed, the charon nodes have the data they need to come together to form a cluster.
+
+1. First you must prepare the required environment variables; in particular, you need to set the `CHARON_BEACON_NODE_ENDPOINT` variable to point at either a local or remote beacon node API endpoint.
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+   For simplicity's sake, this repo is configured to work with a remote beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL for the network you want to use (this repo expects `prater`):
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+2. Start your distributed validator node with docker-compose
+
+ ```sh
+ # Run a charon client, a vc client, and prom+grafana clients as containers
+ docker-compose up
+ ```
+3. Use the pre-prepared [grafana](http://localhost:3000/) dashboard to verify the cluster health looks okay. You should see connections with all other operators in the cluster as healthy, and observed ping times under 1 second for all connections.
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/singlenode
+ ```
+
+ If Grafana doesn't load any data the first time you open it, check [this method](https://github.com/ObolNetwork/charon-distributed-validator-node#grafana-doesnt-load-any-data) for fixing the issue.
+
+## Activating the distributed validator
+
+Once the cluster is healthy and fully connected, it is time to deposit the required 32 (test) ether to activate the newly created Distributed Validator.
+
+1. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+ * If you use macOS, `.charon`, the default output folder, does not show up in the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This should display hidden folders, allowing you to select the deposit file.
+ * A more distributed validator friendly deposit interface is in the works for an upcoming release.
+2. It takes approximately 16 hours for the deposit to be registered on the beacon chain. Future upgrades to the protocol aim to reduce this time.
+3. Once the validator deposit is recognised on the beacon chain, the validator is assigned an index, and the wait for activation begins.
+4. Finally, once the validator is activated, it should be monitored to ensure it is achieving an inclusion distance near 0, for optimal rewards. You should also tweet the link to your newly activated validator with the hashtag [#RunDVT](https://twitter.com/search?q=%2523RunDVT) 🙃
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.8.1/int/working-groups.md b/docs/versioned_docs/version-v0.8.1/int/working-groups.md
new file mode 100644
index 0000000000..0302cd633a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 5
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate single points of technical failure risks on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing the community coordination, participation, and ownership will drive significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research into the demand for, and understanding of, multi-operator validation. This will be a separate research effort undertaken by Georgia Rakusen. The research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is user-interview based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete an asynchronous survey
+- Pass the survey to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 70%
+- Phase 1: 65%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes in the Ethereum development community is the high levels of support from all client teams around standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group intends to submit EIPs for a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV cluster, with outputs based on the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV LaunchPad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+The Phase 1 & Phase 1.5 working groups will launch with no initial members, though they will immediately be open to applications from participants that have joined the Obol Proto Community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the proto community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.8.1/intro.md b/docs/versioned_docs/version-v0.8.1/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.8.1/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.8.1/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..92e96695e1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://ethresear.ch/t/0x03-withdrawal-credentials-simple-eth1-triggerable-withdrawals/10021) when smart contract initiated exits are enabled.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+    // Accept ether sent to this contract (e.g. validator withdrawals swept to this address).
+    receive() external payable {}
+
+    // Only the owner can sweep the full contract balance to a recipient of their choosing.
+    function withdraw(address payable recipient) public onlyOwner {
+        recipient.transfer(address(this).balance);
+    }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements OpenZeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which sends all ether held by the contract to the owner's address (or another address the owner specifies). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient, except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system; for example, a Yearn vault strategy might use an immutable recipient contract, as its vault address should never change.
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry is a way for the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 eth deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on chain programmatically, rather than manually with humans triggering exits, a validator registry can be used. Validators whose deposits are activated get an entry in the validator registry, and validators exiting via 0x03 get staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as of yet unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.8.1/sc/README.md b/docs/versioned_docs/version-v0.8.1/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.8.1/testnet.md b/docs/versioned_docs/version-v0.8.1/testnet.md
new file mode 100644
index 0000000000..1f08fd224f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.8.1/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# testnet
+
+## Testnets
+
+
+
+Over the coming quarters, Obol Labs have and will continue to coordinate and host a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features that are to be complete by each testnet, and their target start date and durations.
+
+## Testnet roadmap
+
+* [x] [Dev Net 1](testnet.md#devnet-1)
+* [x] [Dev Net 2](testnet.md#devnet-2)
+* [ ] [Athena Public Testnet 1](testnet.md#athena-public-testnet-1)
+* [ ] [Bia Attack net](testnet.md#bia-attack-net)
+* [ ] [Circe Public Testnet 2](testnet.md#circe-public-testnet-ii)
+* [ ] [Demeter Red/Blue net](testnet.md#demeter-redblue-net)
+
+### Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with a remote consensus client. The keys were created locally in charon, and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove that the distributed validator paradigm works, with 4 separate VC implementations operating together as one logical validator
+* Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to charon running across a network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or a place of their choosing, running at least a kiln consensus client.
+
+As part of this testnet, operators avoided exposing charon to the public internet on a static IP address through the use of Obol hosted relay nodes.
+
+This devnet was also the first time `charon dkg` was tested with users. The launchpad was not used, and this dkg was triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+
+A core focus of this devnet was collecting network performance data, as this was the first time charon was run in variable, non-virtual networks (i.e. the real internet). Effective collection of performance data here enables gathering even higher-signal performance data at scale during the public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+The core focus of this testnet is the onboarding experience. This is the first time we would need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) in as many languages as possible (need to engage language moderators on discord).
+
+The core output from this testnet is a large number of typeform submissions, for a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and will form the basis for figuring out a sybil resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Target start date:** August 2022
+
+**Duration:** 2 week cluster setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+**Registration Form:** [Here](https://obol.typeform.com/AthenaTestnet)
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet needs a consensus implementation as a hard requirement, where it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS vulnerable endpoints and operations, missing signature verifications and more. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** September 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet II
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lockfile, and exit data (so we can verify the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans (not sybil attacking the testnet) and have created legitimate DV keys, their validator will be activated with Obol's GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** Q4 2022
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the cyber security industry and makes use of Red Teams and Blue Teams.
+
+In cyber security, the red team is on offense and the blue team is on defence. In Obol's case, operators will be grouped into clusters based on application and assigned to either the red team or the blue team in secret. Once the validators are active, it will be the red team's goal to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse the cluster performs than optimal.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of the time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that, even with directly incentivised byzantine actors, a distributed validator client can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** Q4 2022
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault tolerant validation is real, safe and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v0.9.0/README.md b/docs/versioned_docs/version-v0.9.0/README.md
new file mode 100644
index 0000000000..a11a901065
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/README.md
@@ -0,0 +1,2 @@
+# version-v0.9.0
+
diff --git a/docs/versioned_docs/version-v0.9.0/cg/README.md b/docs/versioned_docs/version-v0.9.0/cg/README.md
new file mode 100644
index 0000000000..4f4e9eba0e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/cg/README.md
@@ -0,0 +1,2 @@
+# cg
+
diff --git a/docs/versioned_docs/version-v0.9.0/cg/bug-report.md b/docs/versioned_docs/version-v0.9.0/cg/bug-report.md
new file mode 100644
index 0000000000..eda3693761
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/cg/bug-report.md
@@ -0,0 +1,57 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing to avoid double issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualise the issue in the clearest way possible. It's important to be concise and use comprehensive language, while also providing all relevant information on-hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```sh
+
+
+## Expected Behaviour
+
+
+## Current Behaviour
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+ 2. Nicholas Nickleby.
+ 3. David Copperfield.
+2. J.R.R Tolkien non-fiction books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```bash
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
+
diff --git a/docs/versioned_docs/version-v0.9.0/dv/01_introducing-charon.md b/docs/versioned_docs/version-v0.9.0/dv/01_introducing-charon.md
new file mode 100644
index 0000000000..a30de97d62
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dv/01_introducing-charon.md
@@ -0,0 +1,29 @@
+---
+description: Charon - The Distributed Validator Client
+---
+
+# Introducing Charon
+
+This section introduces and outlines the Charon middleware. For additional context regarding distributed validator technology, see [this section](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.9.0/int/key-concepts/README.md#distributed-validator) of the key concept page.
+
+### What is Charon?
+
+Charon is a Golang-based HTTP middleware built by Obol to enable existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+### Charon architecture
+
+The below graphic visually outlines the internal functionalities of Charon.
+
+
+
+### Get started
+
+The `charon` client is in an early alpha state and is not ready for mainnet; see [here](https://github.com/ObolNetwork/charon#supported-consensus-layer-clients) for the latest on charon's readiness.
+
+```sh
+docker run obolnetwork/charon:v0.9.0 --help
+```
+
+For more information on running charon, take a look at our [quickstart guide](../int/quickstart/index.md).
diff --git a/docs/versioned_docs/version-v0.9.0/dv/02_validator-creation.md b/docs/versioned_docs/version-v0.9.0/dv/02_validator-creation.md
new file mode 100644
index 0000000000..f13437b26d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dv/02_validator-creation.md
@@ -0,0 +1,31 @@
+---
+description: Creating a Distributed Validator cluster from scratch
+---
+
+# Distributed validator creation
+
+
+
+### Stages of creating a distributed validator
+
+To create a distributed validator cluster, you and your group of operators need to complete the following steps (a command-line sketch of the charon steps follows the list):
+
+1. One operator begins the cluster setup on the [Distributed Validator Launchpad](../dvk/02_distributed_validator_launchpad.md).
+ * This involves setting all of the terms for the cluster, including: withdrawal address, fee recipient, validator count, operator addresses, etc. This information is known as a _cluster configuration_.
+ * This operator also sets their charon client's Ethereum Node Record ([ENR](../int/faq.md#what-is-an-enr)).
+ * This operator signs both the hash of the cluster config and the ENR to prove custody of their address.
+ * This data is stored in the DV Launchpad data layer and a shareable URL is generated. This is a link for the other operators to join and complete the ceremony.
+2. The other operators in the cluster follow this URL to the launchpad.
+ * They review the terms of the cluster configuration.
+ * They submit the ENR of their charon client.
+ * They sign both the hash of the cluster config and their charon ENR to indicate acceptance of the terms.
+3. Once all operators have submitted signatures for the cluster configuration and ENRs, they can all download the cluster definition file.
+4. Every operator passes this cluster definition file to the `charon dkg` command. The definition provides the charon process with the information it needs to find and complete the DKG ceremony with the other charon clients involved.
+5. Once all charon clients can communicate with one another, the DKG process completes. All operators end up with:
+ * A `cluster-lock.json` file, which contains the original cluster configuration data, combined with the newly generated group public keys and their associated public key shares. This file is needed by the `charon run` command.
+ * Validator deposit data
+ * Validator private key shares
+6. Operators can now take backups of the generated private key shares, their ENR private key if they have not yet done so, and the `cluster-lock.json` file.
+7. All operators load the keys and cluster lockfiles generated in the ceremony, into their staking deployments.
+8. Operators can run a performance test of the configured cluster to ensure connectivity between all operators at a reasonable latency is observed.
+9. Once all readiness tests have passed, one operator activates the distributed validator(s) with an on-chain deposit.
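+
+As a hedged sketch of the charon-specific steps above, assuming the `charon` binary is on your path and the default `.charon` data directory is used (see the [Charon CLI reference](./09_charon_cli_reference.md) for the full flag list; the beacon node URL is a placeholder):
+
+```sh
+# Steps 1 and 2: each operator creates an ENR key pair and shares the printed ENR via the launchpad.
+charon create enr
+
+# Steps 4 and 5: each operator runs the DKG against the downloaded cluster definition file.
+charon dkg --definition-file=".charon/cluster-definition.json"
+
+# Step 7: each operator starts their node with the generated lock file and key shares.
+charon run --lock-file=".charon/cluster-lock.json" --beacon-node-endpoints="<your-beacon-node-url>"
+```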
diff --git a/docs/versioned_docs/version-v0.9.0/dv/04_middleware-daemon.md b/docs/versioned_docs/version-v0.9.0/dv/04_middleware-daemon.md
new file mode 100644
index 0000000000..eddc58cf9e
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dv/04_middleware-daemon.md
@@ -0,0 +1,15 @@
+---
+description: Deployment Architecture for a Distributed Validator Client
+---
+
+# Middleware Architecture
+
+
+
+The Charon daemon sits as a middleware between the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/) and any downstream validator clients.
+
+### Operation
+
+The middleware strives to be stateless and statically configured at startup. The lack of a control-plane API for online reconfiguration is deliberate to keep operations simple and secure by default.
+
+The `charon` package will initially be available as a Docker image and through binary builds. An APT package with a systemd integration is planned.
diff --git a/docs/versioned_docs/version-v0.9.0/dv/06_peer-discovery.md b/docs/versioned_docs/version-v0.9.0/dv/06_peer-discovery.md
new file mode 100644
index 0000000000..9ea67f7faf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dv/06_peer-discovery.md
@@ -0,0 +1,37 @@
+---
+description: How do distributed validator clients communicate with one another securely?
+---
+
+# Peer discovery
+
+In order to maintain security and sybil-resistance, charon clients need to be able to authenticate one another. We achieve this by giving each charon client a public/private key pair that they can sign with such that other clients in the cluster will be able to recognise them as legitimate no matter which IP address they communicate from.
+
+At the end of a [DKG ceremony](./02_validator-creation.md#stages-of-creating-a-distributed-validator), each operator will have a number of files outputted by their charon client based on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+### Authenticating a distributed validator client
+
+Before a DKG process begins, all operators must run `charon create enr`, or just `charon enr`, to create or get the Ethereum Node Record for their client. These ENRs are included in the configuration of a Distributed Key Generation ceremony.
+
+The file that outlines a DKG ceremony is known as a [`cluster-definition`](./08_distributed-validator-cluster-manifest.md) file. This file is passed to `charon dkg` which uses it to create private keys, a cluster lock file and deposit data for the configured number of distributed validators. The cluster-lock file will be made available to `charon run`, and the validator key stores will be made available to the configured validator client.
+
+When `charon run` starts up and ingests its configuration from the `cluster-lock.json` file, it checks whether its observed/configured public IP address differs from what is listed in the lock file. If it is different, it updates the IP address, increments the nonce of the ENR and reissues it before beginning to establish connections with the other operators in the cluster.
+
+#### Node database
+
+Distributed Validator Clusters are permissioned networks with a fully meshed topology. Each node will permanently store the ENRs of all other known Obol nodes in their node database.
+
+Unlike with node databases of public permissionless networks (such as [Go-Ethereum](https://pkg.go.dev/github.com/ethereum/go-ethereum@v1.10.13/p2p/enode#DB)), there is no inbuilt eviction logic – the database will keep growing indefinitely. This is acceptable as the number of operators in a cluster is expected to stay constant. Mutable cluster operators will be introduced in future.
+
+#### Node discovery
+
+At boot, a charon client will ingest its configured `cluster-lock.json` file. This file contains a list of ENRs of the client's peers. The client will attempt to establish a connection with these peers, and will perform a handshake on connection to establish an end-to-end encrypted communication channel between the clients.
+
+However, the IP addresses within an ENR can become stale. This could result in a cluster not being able to establish a connection with all nodes. To be tolerant to operator IP addresses changing, charon also supports the [discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md) discovery protocol. This allows a charon client to find another operator that might have moved IP address, but still retains the same ENR private key.
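+
+As an illustrative configuration sketch, a charon node can be pointed at one or more discv5 bootnodes so that peers can still be found after an IP change (the relay URL below is the default documented in the [CLI reference](./09_charon_cli_reference.md); other required `run` flags are omitted):
+
+```sh
+charon run \
+  --lock-file=".charon/cluster-lock.json" \
+  --p2p-bootnodes="http://bootnode.gcp.obol.tech:3640/enr"
+```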
+
+
diff --git a/docs/versioned_docs/version-v0.9.0/dv/07_p2p-interface.md b/docs/versioned_docs/version-v0.9.0/dv/07_p2p-interface.md
new file mode 100644
index 0000000000..50de00d79a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dv/07_p2p-interface.md
@@ -0,0 +1,13 @@
+---
+description: Connectivity between Charon instances
+---
+
+# P2P interface
+
+The Charon P2P interface loosely follows the [Eth2 beacon P2P interface](https://github.com/ethereum/consensus-specs/blob/dev/specs/phase0/p2p-interface.md).
+
+- Transport: TCP over IPv4/IPv6.
+- Identity: [Ethereum Node Records](https://eips.ethereum.org/EIPS/eip-778).
+- Handshake: [noise-libp2p](https://github.com/libp2p/specs/tree/master/noise) with `secp256k1` keys.
+ - Each charon client must have their ENR public key authorized in a [cluster-lock.json](./08_distributed-validator-cluster-manifest.md) file in order for the client handshake to succeed.
+- Discovery: [Discv5](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md).
diff --git a/docs/versioned_docs/version-v0.9.0/dv/08_distributed-validator-cluster-manifest.md b/docs/versioned_docs/version-v0.9.0/dv/08_distributed-validator-cluster-manifest.md
new file mode 100644
index 0000000000..9c2b959a44
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dv/08_distributed-validator-cluster-manifest.md
@@ -0,0 +1,65 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+---
+
+# Cluster Configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a charon client (or cluster) locally or in production.
+
+## Cluster Configuration Files
+
+A charon cluster is configured in two steps:
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+The `charon create dkg` command is used to create `cluster-definition.json` file which is used as input to `charon dkg`.
+
+The `charon create cluster` command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
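+For illustration, a hedged sketch of both flows using flags from the [CLI reference](./09_charon_cli_reference.md) (the operator ENRs below are placeholders):
+
+```sh
+# Multi-operator flow: one operator creates the definition file, then every operator runs the DKG.
+charon create dkg \
+  --operator-enrs="enr:<operator-1>,enr:<operator-2>,enr:<operator-3>,enr:<operator-4>" \
+  --num-validators=1 \
+  --network=prater
+charon dkg --definition-file=".charon/cluster-definition.json"
+
+# Single-operator flow: creates keys and cluster-lock.json locally without a DKG.
+charon create cluster --nodes=4 --threshold=3 --network=prater
+```
+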
+The schema of the `cluster-definition.json` is defined as:
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "nonce": 1, // Nonce (incremented each time the ENR is added/signed)
+ "config_signature": "123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.0.0", // Schema version
+ "num_validators": 100, // Number of distributed validators to be created in cluster.lock
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "fee_recipient_address":"0x123..abfc", // ETH1 fee_recipient address
+ "withdrawal_address": "0x123..abfc", // ETH1 withdrawal address
+ "dkg_algorithm": "foo_dkg_v1" , // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "abcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "abcdef...abcedef" // Final Hash of all fields
+}
+```
+
+The above `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+The `cluster-lock.json` has the following schema:
+```json
+{
+ "cluster_definition": {...}, // Cluster definition JSON, identical schema to the above
+ "distributed_validators": [ // Length equal to num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "oA8Z...2XyT", "g1q...icu"], // Public Key Shares
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
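+
+For example, once a DKG completes, the resulting lock file can be inspected with `jq` (assuming it is installed):
+
+```sh
+# Print the lock hash and each distributed validator's group public key.
+jq '{lock_hash, pubkeys: [.distributed_validators[].distributed_public_key]}' .charon/cluster-lock.json
+```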
diff --git a/docs/versioned_docs/version-v0.9.0/dv/09_charon_cli_reference.md b/docs/versioned_docs/version-v0.9.0/dv/09_charon_cli_reference.md
new file mode 100644
index 0000000000..b235983f69
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dv/09_charon_cli_reference.md
@@ -0,0 +1,205 @@
+---
+description: A go-based middleware client for taking part in Distributed Validator clusters.
+---
+
+# Charon CLI reference
+
+:::warning
+
+The `charon` client is under heavy development, interfaces are subject to change until a first major version is published.
+
+:::
+
+The following is a reference for charon version [`v0.9.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.9.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+### Available Commands
+
+The following are the top-level commands available to use.
+
+```markdown
+charon help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ bootnode Start a discv5 bootnode server
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print this client's Ethereum Node Record
+ help Help about any command
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+### `create` subcommand
+
+The `create` subcommand handles the creation of artifacts needed by charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+
+```
+
+#### Creating an ENR for charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this charon client to its other counterparty charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ -h, --help Help for enr
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+```
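+
+A minimal illustrative invocation, using the default data directory, might look like:
+
+```sh
+# Create this node's ENR key pair and print the public ENR to share with the other operators.
+charon create enr --data-dir=".charon"
+```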
+
+#### Create a full cluster locally
+
+`charon create cluster` creates a set of distributed validators locally, including the private keys, a `cluster.lock` file, and deposit and exit data. However, this command should only be used for solo use of distributed validators. To run a Distributed Validator with a group of operators, it is preferable to create these artifacts using the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and a deposit-data.json. See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --clean Delete the cluster directory before generating it.
+ --cluster-dir string The target folder to create the cluster in. (default ".charon/cluster")
+ -h, --help Help for cluster
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ -n, --nodes int The number of charon nodes in the cluster. (default 4)
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Ethereum address to receive the returned stake and accrued rewards. (default "0x0000000000000000000000000000000000000000")
+```
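+
+A minimal illustrative invocation (the withdrawal address below is a placeholder) might look like:
+
+```sh
+# Create a 4-node distributed validator cluster locally, for single-operator testing only.
+charon create cluster \
+  --nodes=4 \
+  --threshold=3 \
+  --network=prater \
+  --withdrawal-address="<your-withdrawal-address>"
+```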
+
+#### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file that is used as input to the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --dkg-algorithm string DKG algorithm to use; default, keycast, frost (default "default")
+ --fee-recipient-address string Optional Ethereum address of the fee recipient
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, prater, kintsugi, kiln, gnosis. (default "prater")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings [REQUIRED] Comma-separated list of each operator's Charon ENR address.
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int The threshold required for signature reconstruction. Minimum is n-(ceil(n/3)-1). (default 3)
+ --withdrawal-address string Withdrawal Ethereum address (default "0x0000000000000000000000000000000000000000")
+```
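+
+A minimal illustrative invocation (the operator ENRs below are placeholders) might look like:
+
+```sh
+# Collect every operator's ENR first (charon create enr), then generate the shared definition file.
+charon create dkg \
+  --operator-enrs="enr:<operator-1>,enr:<operator-2>,enr:<operator-3>,enr:<operator-4>" \
+  --num-validators=1 \
+  --network=prater \
+  --output-dir=".charon"
+```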
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit and exit data for each new distributed validator. The command outputs the `cluster.lock` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --definition-file string The path to the cluster definition file. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+```
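+
+For example, assuming the definition file sits in the default location:
+
+```sh
+# All operators listed in the definition file should run this at roughly the same time.
+charon dkg --definition-file=".charon/cluster-definition.json"
+```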
+
+### Run the Charon middleware
+
+This `run` command accepts a `cluster.lock` file that was created either via a `charon create cluster` command or `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoint string Beacon node endpoint URL. Deprecated, please use beacon-node-endpoints.
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs.
+ --builder-api Enables the builder api. Will only produce builder blocks. Builder API must also be enabled on the validator client. Beacon node must be connected to a builder-relay to access the builder network.
+ --data-dir string The directory where charon will store all its internal data (default ".charon")
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing.
+ --jaeger-service string Service name used for jaeger tracing. (default "charon")
+ --lock-file string The path to the cluster lock file defining distributed validator cluster. (default ".charon/cluster-lock.json")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus, pprof). (default "127.0.0.1:3620")
+ --p2p-allowlist string Comma-separated list of CIDR subnets for allowing only certain peer connections. Example: 192.168.0.0/16 would permit connections to peers on your local network only. The default is to accept all connections.
+ --p2p-bootnode-relay Enables using bootnodes as libp2p circuit relays. Useful if some charon nodes are not publicly accessible.
+ --p2p-bootnodes strings Comma-separated list of discv5 bootnode URLs or ENRs. (default [http://bootnode.gcp.obol.tech:3640/enr])
+ --p2p-bootnodes-from-lockfile Enables using cluster lock ENRs as discv5 bootnodes. Allows skipping explicit bootnodes if key generation ceremony included correct IPs.
+ --p2p-denylist string Comma-separated list of CIDR subnets for disallowing certain peer connections. Example: 192.168.0.0/16 would disallow connections to peers on your local network. The default is to accept all connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. (default [127.0.0.1:3610])
+ --p2p-udp-address string Listening UDP address (ip and port) for discv5 discovery. (default "127.0.0.1:3630")
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API. (default "127.0.0.1:3600")
+```
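+
+A minimal illustrative invocation (the beacon node URL below is a placeholder) might look like:
+
+```sh
+# Start the middleware with the lock file produced by the DKG, proxying a local beacon node.
+charon run \
+  --beacon-node-endpoints="http://localhost:5052" \
+  --lock-file=".charon/cluster-lock.json" \
+  --validator-api-address="127.0.0.1:3600"
+```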
diff --git a/docs/versioned_docs/version-v0.9.0/dv/README.md b/docs/versioned_docs/version-v0.9.0/dv/README.md
new file mode 100644
index 0000000000..f4a6dbc17c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dv/README.md
@@ -0,0 +1,2 @@
+# dv
+
diff --git a/docs/versioned_docs/version-v0.9.0/dvk/01_distributed-validator-keys.md b/docs/versioned_docs/version-v0.9.0/dvk/01_distributed-validator-keys.md
new file mode 100644
index 0000000000..49e6557706
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dvk/01_distributed-validator-keys.md
@@ -0,0 +1,121 @@
+---
+description: >-
+ Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+---
+
+# Distributed Validator Key Generation
+
+## Contents
+
+- [Overview](#overview)
+- [Actors involved](#actors-involved)
+- [Cluster Definition creation](#cluster-definition-creation)
+- [Carrying out the DKG ceremony](#carrying-out-the-dkg-ceremony)
+- [Backing up ceremony artifacts](#backing-up-the-ceremony-artifacts)
+- [Preparing for validator activation](#preparing-for-validator-activation)
+- [DKG verification](#dkg-verification)
+- [Appendix](#appendix)
+
+## Overview
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Thanks to the BLS signature scheme used by proof-of-stake Ethereum, a distributed validator with no fault tolerance (i.e. all nodes need to be online to sign every message) could be made from key shares chosen by each operator independently. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together. (Four randomly chosen points on a graph don't all necessarily sit on the same order-three curve.) To do this securely, with no one party trusted to distribute the keys, requires what is known as a distributed key generation ceremony.
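+
+For reference, the standard t-of-n threshold BLS reconstruction this enables looks as follows (a generic sketch, not charon-specific notation):
+
+```latex
+% Node i holds key share sk_i = f(i) for a secret polynomial f of degree t-1 and produces a
+% partial signature sigma_i = sk_i * H(m). Any t partial signatures recombine into the group
+% signature via Lagrange coefficients evaluated at zero:
+\sigma \;=\; \sum_{i \in S} \lambda_i \, \sigma_i ,
+\qquad |S| = t ,
+\qquad \lambda_i \;=\; \prod_{\substack{j \in S \\ j \neq i}} \frac{j}{j - i}
+```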
+
+The charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](https://docs.obol.tech/docs/dv/distributed-validator-cluster-manifest).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign with this address's private key to authenticate their charon client ahead of the ceremony. The signature will be over a hash of the charon client's ENR public key, the `cluster_definition_hash`, and an incrementing `nonce`, allowing for a direct linkage between a user, their charon client, and the cluster this client is intended to service, while retaining the ability to update the charon client by incrementing the nonce value and re-signing, as in the standard ENR spec.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p-noise). These keys need to be created by each operator before they can participate in a cluster creation.
+
+## Cluster Definition Creation
+
+This definition file is created with the help of the [Distributed Validator Launchpad](https://docs.obol.tech/docs/dvk/distributed_validator_launchpad). The creation process involves a number of steps.
+
+- A `leader` Operator that wishes to co-ordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader` uses the user interface to configure all of the important details about the cluster including:
+ - The `withdrawal address` for the created validators
+ - The `feeRecipient` for block proposals if it differs from the withdrawal address
+ - The number of distributed validators to create
+ - The list of participants in the cluster specified by Ethereum address(/ENS)
+ - The threshold of fault tolerance required (if not choosing the safe default)
+ - The network (fork_version/chainId) that this cluster will validate on
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields, like the DKG algorithm to use) are serialised and merklised to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to charon nodes.
+- Once the leader is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralised backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralisation of the launchpad.)
+- The leader will then share the URL to this ceremony with their intended participants.
+- Anyone that clicks the ceremony URL, or inputs the `cluster_definition_hash` when prompted on the landing page, will be brought to the ceremony status page (after completing all disclaimers and advisories).
+- A "Connect Wallet" button will be visible beneath the ceremony status container, a participant can click on it to connect their wallet to the site
+ - If the participant connects a wallet that is not in the participant list, the button disables, as there is nothing to do
+ - If the participant connects a wallet that is in the participant list, they get prompted to input the ENR of their charon node.
+ - If the ENR field is populated and validated the participant can now see a "Confirm Cluster Configuration" button. This button triggers one/two signatures.
+ - The participant signs the `cluster_definition_hash`, to prove they are consenting to this exact configuration.
+ - The participant signs their charon node's ENR, to authenticate and authorise that specific charon node to participate on their behalf in the distributed validator cluster.
+ - These/this signature is sent to the data availability layer, which verifies the signatures are correct for the given participant's Ethereum address. If the signatures pass validation, the signature of the definition hash and the ENR + signature get saved to the definition object.
+- All participants in the list must sign the definition hash and submit a signed ENR before a DKG ceremony can begin. The outstanding signatures can be easily displayed on the status page.
+- Finally, once all participants have signed their approval, and submitted a charon node ENR to act on their behalf, the definition data can be downloaded as a file if the users click a newly displayed button, `Download Manifest`.
+- At this point each participant must load this definition into their charon client, and the client will attempt to complete the DKG.
+
+## Carrying out the DKG ceremony
+
+Once a participant has their definition file prepared, they will pass the file to charon's `dkg` command. Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to the deployed bootnodes to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live charon client, which will update the IP address, increment the ENR's nonce, and re-sign with the client's private key. If an ENR with a higher nonce is seen by a charon client, it will update the IP address of that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
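+
+In the dockerised setups used by the quickstart guides, running the ceremony looks something like the following (a sketch, assuming the definition file has already been copied into the `.charon` directory):
+
+```sh
+# Participate in the DKG ceremony (mirrors the quickstart repositories)
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.9.0 dkg
+```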
+
+No user input is required, charon does the work and outputs the following files to each machine and then exits.
+
+```sh
+# Common data
+.charon/cluster-definition.json # The original definition file from the DV Launchpad or `charon create dkg`
+.charon/cluster-lock.json # New lockfile based on cluster-definition.json with validator group public keys and threshold BLS verifiers included with the initial cluster config
+.charon/deposit-data.json # JSON file of deposit data for the distributed validators
+
+# Sensitive operator-specific data
+.charon/charon-enr-private-key # Created before the ceremony took place [Back this up]
+.charon/validator_keys/ # Folder of key shares to be backed up and moved to validator client [Back this up]
+```
+
+## Backing up the ceremony artifacts
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participants old keys out of a distributed validator in favour of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However for now, without a backup, the safest thing to do would be to exit the validator.
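+
+One simple way to take such a backup, assuming the default `.charon` output directory, is to archive the whole folder and move the archive to secure, ideally offline, storage:
+
+```sh
+# Archive the ceremony artifacts, including the key shares and the ENR private key
+tar -czvf charon-cluster-backup.tar.gz .charon/
+```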
+
+## Preparing for validator activation
+
+Once the ceremony is complete and secure backups of key shares have been made by each operator, they must load these key shares into their validator clients and run the `charon run` command to put charon into operational mode.
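+
+The quickstart repositories wrap this step in `docker-compose up`; invoking the charon container directly would look roughly like the following (a sketch, assuming a populated `.env` file as used in the quickstart guides):
+
+```sh
+# Start charon in operational mode (sketch; normally run via docker-compose)
+docker run --rm -v "$(pwd):/opt/charon" --env-file .env obolnetwork/charon:v0.9.0 run
+```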
+
+All operators should confirm that their charon client logs indicate all nodes are online and connected. They should also verify the readiness of their beacon clients and validator clients. Charon's grafana dashboard is a good way to see the readiness of the full cluster from its perspective.
+
+Once all operators are satisfied with network connectivity, one member can use the Obol Distributed Validator deposit flow to send the required ether and deposit data to the deposit contract, beginning the process of a distributed validator activation. Good luck.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+ - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Using DKG without the launchpad
+
+Charon clients can do a DKG with a definition file that does not contain operator signatures if you pass a `--no-verify` flag to `charon dkg`. This can be used for testing purposes when strict signature verification is not of the utmost importance.
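+
+For example, mirroring the dockerised `dkg` invocation used in the quickstart guides:
+
+```sh
+# Run the DKG without verifying operator signatures (testing only)
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.9.0 dkg --no-verify
+```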
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../dv/08_distributed-validator-cluster-manifest.md#cluster-configuration-files).
+
diff --git a/docs/versioned_docs/version-v0.9.0/dvk/02_distributed_validator_launchpad.md b/docs/versioned_docs/version-v0.9.0/dvk/02_distributed_validator_launchpad.md
new file mode 100644
index 0000000000..2472453911
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dvk/02_distributed_validator_launchpad.md
@@ -0,0 +1,15 @@
+---
+Description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# Distributed Validator launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network intends to develop and maintain a website that enables a group of users to come together and create these threshold keys.
+
+The DV Launchpad is being developed over a number of phases, coordinated by our [DV launchpad working group](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v0.9.0/int/working-groups/README.md). To participate in this effort, read through the page and sign up at the appropriate link.
diff --git a/docs/versioned_docs/version-v0.9.0/dvk/README.md b/docs/versioned_docs/version-v0.9.0/dvk/README.md
new file mode 100644
index 0000000000..c48e49fa5b
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/dvk/README.md
@@ -0,0 +1,2 @@
+# dvk
+
diff --git a/docs/versioned_docs/version-v0.9.0/fr/README.md b/docs/versioned_docs/version-v0.9.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v0.9.0/fr/eth.md b/docs/versioned_docs/version-v0.9.0/fr/eth.md
new file mode 100644
index 0000000000..71bbced763
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/fr/eth.md
@@ -0,0 +1,131 @@
+# Ethereum resources
+
+This page collects the material necessary to catch up with the current state of Ethereum proof-of-stake development and gives readers the base knowledge required to assist with the growth of Obol. Whether you are an expert on all things Ethereum or entirely new to the blockchain world, there are resources here that will help you get up to speed.
+
+## **Ethereum fundamentals**
+
+### Introduction
+
+* [What is Ethereum?](http://ethdocs.org/en/latest/introduction/what-is-ethereum.html)
+* [How Does Ethereum Work Anyway?](https://medium.com/@preethikasireddy/how-does-ethereum-work-anyway-22d1df506369)
+* [Ethereum Introduction](https://ethereum.org/en/what-is-ethereum/)
+* [Ethereum Foundation](https://ethereum.org/en/foundation/)
+* [Ethereum Wiki](https://eth.wiki/)
+* [Ethereum Research](https://ethresear.ch/)
+* [Ethereum White Paper](https://github.com/ethereum/wiki/wiki/White-Paper)
+* [What is Hashing?](https://blockgeeks.com/guides/what-is-hashing/)
+* [Hashing Algorithms and Security](https://www.youtube.com/watch?v=b4b8ktEV4Bg)
+* [Understanding Merkle Trees](https://www.codeproject.com/Articles/1176140/Understanding-Merkle-Trees-Why-use-them-who-uses-t)
+* [Ethereum Block Architecture](https://ethereum.stackexchange.com/questions/268/ethereum-block-architecture/6413#6413)
+* [What is an Ethereum Token?](https://blockgeeks.com/guides/ethereum-token/)
+* [What is Ethereum Gas?](https://blockgeeks.com/guides/ethereum-gas-step-by-step-guide/)
+* [Client Implementations](https://eth.wiki/eth1/clients)
+
+## **ETH2 fundamentals**
+
+*Disclaimer: Because some parts of Ethereum consensus are still an active area of research and/or development, some resources may be outdated.*
+
+### Introduction and specifications
+
+* [The Explainer You Need to Read First](https://ethos.dev/beacon-chain/)
+* [Official Specifications](https://github.com/ethereum/eth2.0-specs)
+* [Annotated Spec](https://benjaminion.xyz/eth2-annotated-spec/)
+* [Another Annotated Spec](https://notes.ethereum.org/@djrtwo/Bkn3zpwxB)
+* [Rollup-Centric Roadmap](https://ethereum-magicians.org/t/a-rollup-centric-ethereum-roadmap/4698)
+
+### Sharding
+
+* [Blockchain Scalability: Why?](https://blockgeeks.com/guides/blockchain-scalability/)
+* [What Are Ethereum Nodes and Sharding](https://blockgeeks.com/guides/what-are-ethereum-nodes-and-sharding/)
+* [How to Scale Ethereum: Sharding Explained](https://medium.com/prysmatic-labs/how-to-scale-ethereum-sharding-explained-ba2e283b7fce)
+* [Sharding FAQ](https://eth.wiki/sharding/Sharding-FAQs)
+* [Sharding Introduction: R&D Compendium](https://eth.wiki/en/sharding/sharding-introduction-r-d-compendium)
+
+### Peer-to-peer networking
+
+* [Ethereum Peer to Peer Networking](https://geth.ethereum.org/docs/interface/peer-to-peer)
+* [P2P Library](https://libp2p.io/)
+* [Discovery Protocol](https://github.com/ethereum/devp2p/blob/master/discv5/discv5.md)
+
+### Latest News
+
+* [Ethereum Blog](https://blog.ethereum.org/)
+* [News from Ben Edgington](https://hackmd.io/@benjaminion/eth2_news)
+
+### Prater Testnet Blockchain
+
+* [Launchpad](https://prater.launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://prater.beaconcha.in/)
+
+### Mainnet Blockchain
+
+* [Launchpad](https://launchpad.ethereum.org/en/)
+* [Beacon Chain Explorer](https://beaconcha.in/)
+* [Another Beacon Chain Explorer](https://explorer.bitquery.io/eth2)
+* [Validator Queue Statistics](https://eth2-validator-queue.web.app/index.html)
+* [Slashing Detector](https://twitter.com/eth2slasher)
+
+### Client Implementations
+
+* [Prysm](https://github.com/prysmaticlabs/prysm) developed in Golang and maintained by [Prysmatic Labs](https://prysmaticlabs.com/)
+* [Lighthouse](https://github.com/sigp/lighthouse) developed in Rust and maintained by [Sigma Prime](https://sigmaprime.io/)
+* [Lodestar](https://github.com/ChainSafe/lodestar) developed in TypeScript and maintained by [ChainSafe Systems](https://chainsafe.io/)
+* [Nimbus](https://github.com/status-im/nimbus-eth2) developed in Nim and maintained by [status](https://status.im/)
+* [Teku](https://github.com/ConsenSys/teku) developed in Java and maintained by [ConsenSys](https://consensys.net/)
+
+## Other
+
+### Serenity concepts
+
+* [Sharding Concepts Mental Map](https://www.mindomo.com/zh/mindmap/sharding-d7cf8b6dee714d01a77388cb5d9d2a01)
+* [Taiwan Sharding Workshop Notes](https://hackmd.io/s/HJ_BbgCFz#%E2%9F%A0-General-Introduction)
+* [Sharding Research Compendium](http://notes.ethereum.org/s/BJc_eGVFM)
+* [Torus Shaped Sharding Network](https://ethresear.ch/t/torus-shaped-sharding-network/1720/8)
+* [General Theory of Sharding](https://ethresear.ch/t/a-general-theory-of-what-quadratically-sharded-validation-is/1730/10)
+* [Sharding Design Compendium](https://ethresear.ch/t/sharding-designs-compendium/1888/25)
+
+### Serenity research posts
+
+* [Sharding v2.1 Spec](https://notes.ethereum.org/SCIg8AH5SA-O4C1G1LYZHQ)
+* [Casper/Sharding/Beacon Chain FAQs](https://notes.ethereum.org/9MMuzWeFTTSg-3Tz_YeiBA?view)
+* [RETIRED! Sharding Phase 1 Spec](https://ethresear.ch/t/sharding-phase-1-spec-retired/1407/92)
+* [Exploring the Proposer/Collator Spec and Why it Was Retired](https://ethresear.ch/t/exploring-the-proposer-collator-split/1632/24)
+* [The Stateless Client Concept](https://ethresear.ch/t/the-stateless-client-concept/172/4)
+* [Shard Chain Blocks vs. Collators](https://ethresear.ch/t/shard-chain-blocks-vs-collators/429)
+* [Ethereum Concurrency Actors and Per Contract Sharding](https://ethresear.ch/t/ethereum-concurrency-actors-and-per-contract-sharding/375)
+* [Future Compatibility for Sharding](https://ethresear.ch/t/future-compatibility-for-sharding/386)
+* [Fork Choice Rule for Collation Proposal Mechanisms](https://ethresear.ch/t/fork-choice-rule-for-collation-proposal-mechanisms/922/8)
+* [State Execution](https://ethresear.ch/t/state-execution-scalability-and-cost-under-dos-attacks/1048)
+* [Fast Shard Chains With Notarization](https://ethresear.ch/t/as-fast-as-possible-shard-chains-with-notarization/1806/2)
+* [RANDAO Notary Committees](https://ethresear.ch/t/fork-free-randao/1835/3)
+* [Safe Notary Pool Size](https://ethresear.ch/t/safe-notary-pool-size/1728/3)
+* [Cross Links Between Main and Shard Chains](https://ethresear.ch/t/cross-links-between-main-chain-and-shards/1860/2)
+
+### Serenity-related conference talks
+
+* [Sharding Presentation by Vitalik from IC3-ETH Bootcamp](https://vod.video.cornell.edu/media/Sharding+-+Vitalik+Buterin/1_1xezsfb4/97851101)
+* [Latest Research and Sharding by Justin Drake from Tech Crunch](https://www.youtube.com/watch?v=J6xO7DH20Js)
+* [Beacon Casper Chain by Vitalik and Justin Drake](https://www.youtube.com/watch?v=GAywmwGToUI)
+* [Proofs of Custody by Vitalik and Justin Drake](https://www.youtube.com/watch?v=jRcS9D_gw_o)
+* [So You Want To Be a Casper Validator by Vitalik](https://www.youtube.com/watch?v=rl63S6kCKbA)
+* [Ethereum Sharding from EDCon by Justin Drake](https://www.youtube.com/watch?v=J4rylD6w2S4)
+* [Casper CBC and Sharding by Vlad Zamfir](https://www.youtube.com/watch?v=qDa4xjQq1RE&t=1951s)
+* [Casper FFG in Depth by Carl](https://www.youtube.com/watch?v=uQ3IqLDf-oo)
+* [Ethereum & Scalability Technology from Asia Pacific ETH meet up by Hsiao Wei](https://www.youtube.com/watch?v=GhuWWShfqBI)
+
+### Ethereum Virtual Machine
+
+* [What is the Ethereum Virtual Machine?](https://themerkle.com/what-is-the-ethereum-virtual-machine/)
+* [Ethereum VM](https://medium.com/@jeff.ethereum/go-ethereums-jit-evm-27ef88277520)
+* [Ethereum Protocol Subtleties](https://github.com/ethereum/wiki/wiki/Subtleties)
+* [Awesome Ethereum Virtual Machine](https://github.com/ethereum/wiki/wiki/Ethereum-Virtual-Machine-%28EVM%29-Awesome-List)
+
+### Ethereum-flavoured WebAssembly
+
+* [eWASM background, motivation, goals, and design](https://github.com/ewasm/design)
+* [The current eWASM spec](https://github.com/ewasm/design/blob/master/eth_interface.md)
+* [Latest eWASM community call including live demo of the testnet](https://www.youtube.com/watch?v=apIHpBSdBio)
+* [Why eWASM? by Alex Beregszaszi](https://www.youtube.com/watch?v=VF7f_s2P3U0)
+* [Panel: entire eWASM team discussion and Q&A](https://youtu.be/ThvForkdPyc?t=119)
+* [Ewasm community meetup at ETHBuenosAires](https://www.youtube.com/watch?v=qDzrbj7dtyU)
+
diff --git a/docs/versioned_docs/version-v0.9.0/fr/golang.md b/docs/versioned_docs/version-v0.9.0/fr/golang.md
new file mode 100644
index 0000000000..5bc5686805
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/fr/golang.md
@@ -0,0 +1,9 @@
+# Golang resources
+
+* [The Go Programming Language](https://www.amazon.com/Programming-Language-Addison-Wesley-Professional-Computing/dp/0134190440) \(Only recommended book\)
+* [Ethereum Development with Go](https://goethereumbook.org)
+* [How to Write Go Code](http://golang.org/doc/code.html)
+* [The Go Programming Language Tour](http://tour.golang.org/)
+* [Getting Started With Go](http://www.youtube.com/watch?v=2KmHtgtEZ1s)
+* [Go Official Website](https://golang.org/)
+
diff --git a/docs/versioned_docs/version-v0.9.0/glossary.md b/docs/versioned_docs/version-v0.9.0/glossary.md
new file mode 100644
index 0000000000..53bb274c27
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/glossary.md
@@ -0,0 +1,8 @@
+# Glossary
+This page elaborates on the various technical terminology featured throughout this manual. See a word or phrase that should be added? Let us know!
+
+### Consensus
+A collection of machines coming to agreement on what to sign together
+
+### Threshold signing
+Being able to sign a message with only a subset of key holders taking part - giving the collection of machines a level of fault tolerance.
diff --git a/docs/versioned_docs/version-v0.9.0/int/README.md b/docs/versioned_docs/version-v0.9.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v0.9.0/int/faq.md b/docs/versioned_docs/version-v0.9.0/int/faq.md
new file mode 100644
index 0000000000..85304fa80c
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/faq.md
@@ -0,0 +1,39 @@
+---
+sidebar_position: 10
+description: Frequently asked questions
+---
+
+# Frequently asked questions
+
+### Does Obol have a token?
+
+No. Distributed validators use only ether.
+
+### Can I keep my existing validator client?
+
+Yes. Charon sits as a middleware between a validator client and its beacon node. All validator clients that implement the standard REST API will be supported, along with all popular client delivery software such as DAppNode [packages](https://dappnode.github.io/explorer/#/), Rocket Pool's [smart node](https://github.com/rocket-pool/smartnode), StakeHouse's [wagyu](https://github.com/stake-house/wagyu), and Stereum's [node launcher](https://stereum.net/development/#roadmap).
+
+### Can I migrate my existing validator into a distributed validator?
+
+It will be possible to split an existing validator keystore into a set of key shares suitable for a distributed validator, but it is a trusted distribution process, and if the old staking system is not safely shut down, it could pose a risk of double signing alongside the new distributed validator.
+
+In an ideal scenario, a distributed validator's private key should never exist in full in a single location.
+
+You can split an existing EIP-2335 keystore for a validator to migrate it to a distributed validator architecture with the `charon create cluster --split-existing-keys` command documented [here](../dv/09_charon_cli_reference.md#create-a-full-cluster-locally).
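+
+A dockerised sketch of what this could look like is shown below. It is illustrative only; the CLI reference linked above documents the full set of flags, including how to point the command at your existing keystore directory.
+
+```sh
+# Illustrative sketch: split an existing keystore into distributed validator key shares
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.9.0 create cluster --split-existing-keys
+```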
+
+### What is an ENR?
+
+An ENR is shorthand for an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778). It is a way to represent a node on a public network, with a reliable mechanism to update its information. At Obol we use ENRs to identify charon nodes to one another such that they can form clusters with the right charon nodes and not impostors.
+
+ENRs have private keys they use to sign updates to the [data contained](https://enr-viewer.com/) in their ENR. This private key is by default found at `.charon/charon-enr-private-key`, and should be kept secure, and not checked into version control. An ENR looks something like this:
+```
+enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+```
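+
+A new ENR and its private key can be generated with charon's `create enr` command, for example via docker as in the quickstart guides:
+
+```sh
+# Generate a new ENR; the private key is written to .charon/charon-enr-private-key
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.9.0 create enr
+```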
+
+### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/obol) too.
+
+### What's with the name Charon?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.9.0/int/key-concepts.md b/docs/versioned_docs/version-v0.9.0/int/key-concepts.md
new file mode 100644
index 0000000000..ea9f03aa99
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/key-concepts.md
@@ -0,0 +1,86 @@
+---
+sidebar_position: 3
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is provided by **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes the problem of a single point of failure. Should <33% of the participating nodes in the DVT cluster go offline, the remaining active nodes are still able to come to consensus on what to sign and produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes geth, lighthouse, charon and lodestar.
+
+### Execution Client
+
+An execution client (formerly known as an Eth1 client) specialises in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/nethermind/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties.
+
+* Coming to consensus on a candidate duty for all validators to sign
+* Combining signatures from all validators into a distributed validator signature
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [charon](../dv/01_introducing-charon.md).
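+
+Because charon exposes this intercepted beacon node API on its validator-facing address (`127.0.0.1:3600` by default), a validator client is simply configured to use that address as its beacon node endpoint. As a rough sanity check you can query a standard beacon API path through charon; this is a sketch, and which paths are proxied through to the upstream beacon node depends on the charon version:
+
+```sh
+# Query the beacon node API via charon's validator-facing proxy (default address shown)
+curl http://127.0.0.1:3600/eth/v1/node/version
+```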
+
+### Validator Client
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A DVK ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata.
diff --git a/docs/versioned_docs/version-v0.9.0/int/overview.md b/docs/versioned_docs/version-v0.9.0/int/overview.md
new file mode 100644
index 0000000000..8e3fefcbcf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/overview.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 2
+description: An overview of the Obol network
+---
+
+# Overview
+
+### The Network
+
+The network can best be visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As the current chapter of Ethereum matures over the coming years, the community will move on to the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol as a layer is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that DVs will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that preserve validators' current client and remote signing configurations.
+
+Similar to how rollup technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking networks that can be plugged into at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvk/01_distributed-validator-keys.md), a CLI tool and dApp for bootstrapping Distributed Validators
+* [Charon](../dv/01_introducing-charon.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner
+* [Obol Managers](../sc/01_introducing-obol-managers.md), a set of solidity smart contracts for the formation of Distributed Validators
+* [Obol Testnets](../testnet.md), a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+#### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experimenting with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+
+
+### The Vision
+
+The road to decentralising stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+#### V1 - Trusted Distributed Validators
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking as a service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+#### V2 - Trustless Distributed Validators
+
+V1 of charon serves a group of individuals that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators that they trust sufficiently to run a DV cluster together.
+
+Version 2 of charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+To add an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of the consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v0.9.0/int/quickstart/README.md b/docs/versioned_docs/version-v0.9.0/int/quickstart/README.md
new file mode 100644
index 0000000000..bd2483c7cf
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/quickstart/README.md
@@ -0,0 +1,2 @@
+# quickstart
+
diff --git a/docs/versioned_docs/version-v0.9.0/int/quickstart/index.md b/docs/versioned_docs/version-v0.9.0/int/quickstart/index.md
new file mode 100644
index 0000000000..bc22286f06
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/quickstart/index.md
@@ -0,0 +1,12 @@
+# Quickstart Guides
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+There are two ways to test out a distributed validator: on your own, by running all of the required software as containers within docker; or with a group of other node operators, where each of you runs only one validator client and charon client, and the charon clients communicate with one another over the public internet to operate the distributed validator. The second approach requires each operator to open a port to the internet so that all charon nodes can communicate with one another optimally.
+
+The following are guides to getting started with our template repositories. The intention is to support every combination of beacon clients and validator clients with compose files.
+
+- [Running the full cluster alone.](./quickstart-alone.md)
+- [Running one node in a cluster with a group of other node operators.](./quickstart-group.md)
\ No newline at end of file
diff --git a/docs/versioned_docs/version-v0.9.0/int/quickstart/quickstart-alone.md b/docs/versioned_docs/version-v0.9.0/int/quickstart/quickstart-alone.md
new file mode 100644
index 0000000000..4f37d95d81
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/quickstart/quickstart-alone.md
@@ -0,0 +1,56 @@
+---
+sidebar_position: 4
+description: Run all nodes in a distributed validator cluster
+---
+
+# Run a cluster alone
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+1. Clone the [charon-distributed-validator-cluster](https://github.com/ObolNetwork/charon-distributed-validator-cluster) template repo and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-cluster.git
+
+ # Change directory
+ cd charon-distributed-validator-cluster/
+ ```
+2. Prepare the environment variables
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+ For simplicity's sake, this repo is configured to work with a remote Beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL, making sure Prater is selected in the ENDPOINTS dropdown:
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+3. Create the artifacts needed to run a testnet distributed validator cluster
+
+ ```sh
+ # Create a testnet distributed validator cluster
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.9.0 create cluster --withdrawal-address="0x000000000000000000000000000000000000dead"
+ ```
+4. Start the cluster
+
+ ```sh
+ # Start the distributed validator cluster
+ docker-compose up
+ ```
+5. Check out the monitoring dashboard and see if things look all right
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/laEp8vupp
+ ```
+6. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/cluster/deposit-data.json`.
+ * If you use macOS, `.charon`, the default output folder, does not show up on the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This should display hidden folders, allowing you to select the deposit file.
+
+Congratulations, if this all worked you are now running a distributed validator cluster on a testnet. Try turning off one of the four nodes with `docker stop` and check whether the validator stays online or begins missing duties, to see for yourself the fault tolerance that this new Distributed Validator Technology adds to proof of stake validation.
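+
+For example (a sketch; the actual container names depend on the compose file, so list them first):
+
+```sh
+# Find the name of one of the charon node containers
+docker ps
+# Stop it, then watch the dashboard to see whether duties are still being performed
+docker stop <charon-node-container-name>
+# Bring the node back when you are done observing
+docker start <charon-node-container-name>
+```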
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.9.0/int/quickstart/quickstart-group.md b/docs/versioned_docs/version-v0.9.0/int/quickstart/quickstart-group.md
new file mode 100644
index 0000000000..efe0589d52
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/quickstart/quickstart-group.md
@@ -0,0 +1,125 @@
+---
+sidebar_position: 5
+description: Run one node in a multi-operator distributed validator cluster
+---
+
+# Run a cluster with others
+
+:::warning
+Charon is in an early alpha state and is not ready to be run on mainnet
+:::
+
+Creating a distributed validator cluster with a group of other node operators requires five key steps:
+
+* Every operator prepares their software and gets their charon client's [ENR](../faq.md#what-is-an-enr)
+* One operator prepares the terms of the distributed validator key generation ceremony
+ * They select the network, the withdrawal address, the number of 32 ether distributed validators to create, and the ENRs of each operator taking part in the ceremony.
+ * In future, the DV launchpad will facilitate this process more seamlessly, with consent on the terms provided by all operators that participate.
+* Every operator participates in the DKG ceremony, and once successful, a number of cluster artifacts are created, including:
+ * The private key shares for each distributed validator
+ * The deposit data file containing deposit details for each distributed validator
+ * A `cluster-lock.json` file which contains the finalised terms of this cluster required by charon to operate.
+* Every operator starts their node with `charon run`, and uses their monitoring to determine the cluster health and connectivity
+* Once the cluster is confirmed to be healthy, deposit data files created during this process are activated on the [staking launchpad](https://launchpad.ethereum.org/).
+
+## Getting started with Charon
+
+1. Clone the [charon-distributed-validator-node](https://github.com/ObolNetwork/charon-distributed-validator-node) template repository from Github, and `cd` into the directory.
+
+ ```sh
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+ # Change directory
+ cd charon-distributed-validator-node/
+ ```
+2. Next create a private key for charon to use for its ENR
+
+ ```sh
+ # Create an ENR private key
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.9.0 create enr
+ ```
+
+ This command will print your charon client's ENR to the console. It should look something like:
+
+ ```
+ Created ENR private key: .charon/charon-enr-private-key
+ enr:-JG4QAgAOXjGFcTIkXBO30aUMzg2YSo1CYV0OH8Sf2s7zA2kFjVC9ZQ_jZZItdE8gA-tUXW-rWGDqEcoQkeJ98Pw7GaGAYFI7eoegmlkgnY0gmlwhCKNyGGJc2VjcDI1NmsxoQI6SQlzw3WGZ_VxFHLhawQFhCK8Aw7Z0zq8IABksuJEJIN0Y3CCPoODdWRwgj6E
+ ```
+
+ :::warning
+ The ability to replace a deleted or compromised private key is limited at this point. Please make a secure backup of this private key if this distributed validator is important to you.
+ :::
+
+ This record identifies your charon client no matter where it communicates from across the internet. It is required for the following step of creating a set of distributed validator private key shares amongst the cluster operators.
+
+ Please make sure to create a backup of the private key at `.charon/charon-enr-private-key`. Be careful not to commit it to git! If you lose this file you won't be able to take part in the DKG ceremony.
+
+ If you are taking part in an organised Obol testnet, submit the created ENR public address (the console output starting with and including `enr:-`, not the contents of the private key file) to the appropriate typeform.
+
+## Performing a Distributed Validator Key Generation Ceremony
+
+To create the private keys for a distributed validator securely, a Distributed Key Generation (DKG) process must take place.
+
+1. After gathering each operator's ENR and setting them in the `.env` file, one operator should prepare the ceremony with `charon create dkg`
+
+ ```sh
+
+ # First set the ENRs of all the operators participating in DKG ceremony in .env file as CHARON_OPERATOR_ENRS
+
+ # Create .charon/cluster-definition.json to participate in DKG ceremony
+ docker run --rm -v "$(pwd):/opt/charon" --env-file .env obolnetwork/charon:v0.9.0 create dkg
+ ```
+2. The operator that ran this command should distribute the resulting `cluster-definition.json` file to each operator.
+3. At a pre-agreed time, all operators run the ceremony program with the `charon dkg` command
+
+ ```sh
+ # Copy the cluster-definition.json file to .charon
+ cp cluster-definition.json .charon/
+
+ # Participate in DKG ceremony, this will create .charon/cluster-lock.json, .charon/deposit-data.json and .charon/validator_keys/
+ docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v0.9.0 dkg
+ ```
+
+## Verifying cluster health
+
+Once the key generation ceremony has been completed, the charon nodes have the data they need to come together to form a cluster.
+
+1. First, prepare the required environment variables; in particular, you need to set the `CHARON_BEACON_NODE_ENDPOINT` variable to point at either a local or remote beacon node API endpoint.
+
+ ```sh
+ # Copy the sample environment variables
+ cp .env.sample .env
+ ```
+
+ For simplicity's sake, this repo is configured to work with a remote Beacon node such as one from [Infura](https://infura.io/).
+
+ Create an Eth2 project and copy the `https` URL for the network you want to use (this repo expects `prater`):
+
+ 
+
+ Replace the placeholder value of `CHARON_BEACON_NODE_ENDPOINT` in your newly created `.env` file with this URL.
+2. Start your distributed validator node with docker-compose
+
+ ```sh
+ # Run a charon client, a vc client, and prom+grafana clients as containers
+ docker-compose up
+ ```
+3. Use the pre-prepared [grafana](http://localhost:3000/) dashboard to verify the cluster health looks okay. You should see connections with all other operators in the cluster as healthy, and observed ping times under 1 second for all connections.
+
+ ```sh
+ # Open Grafana
+ open http://localhost:3000/d/singlenode
+ ```
+
+ If Grafana doesn't load any data the first time you open it, check [this method](https://github.com/ObolNetwork/charon-distributed-validator-node#grafana-doesnt-load-any-data) for fixing the issue.
+
+## Activating the distributed validator
+
+Once the cluster is healthy and fully connected, it is time to deposit the required 32 (test) ether to activate the newly created Distributed Validator.
+
+1. Activate the validator on the testnet using the original [staking launchpad](https://prater.launchpad.ethereum.org/en/overview) site with the deposit data created at `.charon/deposit-data.json`.
+ * If you use macOS, `.charon`, the default output folder, does not show up on the launchpad's "Upload Deposit Data" file picker. Rectify this by pressing `Command + Shift + .` (full stop). This should display hidden folders, allowing you to select the deposit file.
+ * A more distributed validator friendly deposit interface is in the works for an upcoming release.
+2. It takes approximately 16 hours for the deposit to be registered on the beacon chain. Future upgrades to the protocol aim to reduce this time.
+3. Once the validator deposit is recognised on the beacon chain, the validator is assigned an index, and the wait for activation begins.
+4. Finally, once the validator is activated, it should be monitored to ensure it is achieving an inclusion distance of near 0 for optimal rewards. You should also tweet the link to your newly activated validator with the hashtag [#RunDVT](https://twitter.com/search?q=%2523RunDVT) 🙃
+
+:::tip
+Don't forget to be a good testnet steward and exit your validator when you are finished testing with it.
+:::
diff --git a/docs/versioned_docs/version-v0.9.0/int/working-groups.md b/docs/versioned_docs/version-v0.9.0/int/working-groups.md
new file mode 100644
index 0000000000..0302cd633a
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/int/working-groups.md
@@ -0,0 +1,146 @@
+---
+sidebar_position: 5
+description: Obol Network's working group structure.
+---
+
+# Working groups
+
+The Obol Network is a distributed consensus protocol and ecosystem with a mission to eliminate the risk of single points of technical failure on Ethereum via Distributed Validator Technology (DVT). The project has reached the point where increasing community coordination, participation, and ownership will drive significant impact on the growth of the core technology. As a result, the Obol Labs team will open workstreams and incentives to the community, with the first working group being dedicated to the creation process of distributed validators.
+
+This document intends to outline what Obol is, how the ecosystem is structured, how it plans to evolve, and what the first working group will consist of.
+
+## The Obol ecosystem
+
+The Obol Network consists of four core public goods:
+
+- **The DVK Launchpad** - a CLI tool and user interface for bootstrapping Distributed Validators
+
+- **Charon** - a middleware client that enables validators to run in a fault-tolerant, distributed manner
+
+- **Obol Managers** - a set of solidity smart contracts for the formation of Distributed Validators
+
+- **Obol Testnets** - a set of on-going public incentivised testnets that enable any sized operator to test their deployment before serving for the mainnet Obol Network
+
+## Working group formation
+
+Obol Labs aims to enable contributor diversity by opening the project to external participation. The contributors are then sorted into structured working groups early on, allowing many voices to collaborate on the standardisation and building of open source components.
+
+Each public good component will have a dedicated working group open to participation by members of the Obol community. The first working group is dedicated to the development of distributed validator keys and the DV Launchpad. This will allow participants to experiment with the Obol ecosystem and look for mutual long-term alignment with the project.
+
+The second working group will be focused on testnets after the first is completed.
+
+## The DVK working group
+
+The first working group that Obol will launch for participation is focused on the distributed validator key generation component of the Obol technology stack. This is an effort to standardize the creation of a distributed validator through EIPs and build a community launchpad tool, similar to the Eth2 Launchpad today (previously built by Obol core team members).
+
+The distributed validator key (DVK) generation is a critical core capability of the protocol and more broadly an important public good for a variety of extended use cases. As a result, the goal of the working group is to take a community-led approach in defining, developing, and standardizing an open source distributed validator key generation tool and community launchpad.
+
+This effort can be broadly broken down into three phases:
+- Phase 0: POC testing, POC feedback, DKG implementation, EIP specification & submission
+- Phase 1: Launchpad specification and user feedback
+- Phase 1.5: Complementary research (Multi-operator validation)
+
+
+## Phases
+DVK WG members will have different responsibilities depending on their participation phase.
+
+### Phase 0 participation
+
+Phase 0 is focused on applied cryptography and security. The expected output of this phase is a CLI program for taking part in DVK ceremonies.
+
+Obol will specify and build an interactive CLI tool capable of generating distributed validator keys given a standardised configuration file and network access to coordinate with other participant nodes. This tool can be used by a single entity (synchronous) or a group of participants (semi-asynchronous).
+
+The Phase 0 group is in the process of submitting EIPs for a Distributed Validator Key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DKG process as the working group outlines.
+
+**Participant responsibilities:**
+- Implementation testing and feedback
+- DKG Algorithm feedback
+- Ceremony security feedback
+- Experience in Go, Rust, Solidity, or applied cryptography
+
+### Phase 1 participation
+
+Phase 1 is focused on the development of the DV LaunchPad, an open source SPA web interface for facilitating DVK ceremonies with authenticated counterparties.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs intends to develop and maintain a website that enables a group of users to generate the configuration required for a DVK generation ceremony.
+
+The Obol Labs team is collaborating with Deep Work Studio on a multi-week design and user feedback session that began on April 1st. The collaborative design and prototyping sessions include the Obol core team and genesis community members. All sessions will be recorded and published publicly.
+
+**Participant responsibilities:**
+- DV LaunchPad architecture feedback
+- Participate in 2 rounds of synchronous user testing with the Deep Work team (April 6-10 & April 18-22)
+- Testnet Validator creation
+
+### Phase 1.5 participation
+
+Phase 1.5 is focused on formal research on the demand for and understanding of multi-operator validation. This will be a separate research effort undertaken by Georgia Rakusen. The research will be turned into a formal report and distributed for free to the Ethereum community. Participation in Phase 1.5 is user-interview based and involves psychology-based testing. This effort began in early April.
+
+**Participant responsibilities:**
+- Complete an asynchronous survey
+- Pass the survey to profile users to enhance the depth of the research effort
+- Produce design assets for the final research artifact
+
+## Phase progress
+
+The Obol core team has begun work on all three phases of the effort, and will present draft versions as well as launch Discord channels for each phase when relevant. Below is a status update of where the core team is with each phase as of today.
+
+**Progress:**
+
+- Phase 0: 70%
+- Phase 1: 65%
+- Phase 1.5: 30%
+
+The core team plans to release the different phases for proto community feedback as they approach 75% completion.
+
+## Working group key objectives
+
+The deliverables of this working group are:
+
+### 1. Standardize the format of DVKs through EIPs
+
+One of the many successes in the Ethereum development community is the high levels of support from all client teams around standardised file formats. It is critical that we all work together as a working group on this specific front.
+
+Two examples of such standards in the consensus client space include:
+
+- EIP-2335: A JSON format for the storage and interchange of BLS12-381 private keys
+- EIP-3076: Slashing Protection Interchange Format
+
+The working group is submitting EIPs for a distributed validator key encoding scheme in line with EIP-2335, and a new EIP for encoding the configuration and secrets needed for a DV Cluster, with outputs based on the working group's feedback. Outputs from the DVK ceremony may include:
+
+- Signed validator deposit data files
+- Signed exit validator messages
+- Private key shares for each operator's validator client
+- Distributed Validator Cluster manifests to bind each node together
+
+### 2. A CLI program for distributed validator key (DVK) ceremonies
+
+One of the key successes of Proof of Stake Ethereum's launch was the availability of high quality CLI tools for generating Ethereum validator keys including eth2.0-deposit-cli and ethdo.
+
+The working group will ship a similar CLI tool capable of generating distributed validator keys given a standardised configuration and network access to coordinate with other participant nodes.
+
+As of March 1st, the WG is testing a POC DKG CLI based on Kobi Gurkan's previous work. In the coming weeks we will submit EIPs and begin to implement our DKG CLI in line with our V0.5 specs and the WG's feedback.
+
+### 3. A Distributed validator launchpad
+
+To activate an Ethereum validator you need to deposit 32 ether into the official deposit contract. The vast majority of users that created validators to date have used the **[~~Eth2~~ Staking Launchpad](https://launchpad.ethereum.org/)**, a public good open source website built by the Ethereum Foundation and participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, Obol Labs will host and maintain a website that enables a group of users to generate distributed validator keys together using a DKG ceremony in-browser.
+
+Over time, the DV LaunchPad's features will primarily extend the spectrum of trustless key generation. The V1 features of the launchpad can be user tested and commented on by anyone in the Obol Proto Community!
+
+## Working group participants
+
+The members of the Phase 0 working group are:
+
+- The Obol genesis community
+- Ethereum Foundation (Carl, Dankrad, Aditya)
+- Ben Edgington
+- Jim McDonald
+- Prysmatic Labs
+- Sourav Das
+- Mamy Ratsimbazafy
+- Kobi Gurkan
+- Coinbase Cloud
+
+The Phase 1 & Phase 1.5 working groups will launch with no initial members, though they will immediately be open for applications from participants that have joined the Obol Proto Community right [here](https://pwxy2mff03w.typeform.com/to/Kk0TfaYF). Everyone can join the proto community; however, working group participation will be based on relevance and skill set.
+
+
diff --git a/docs/versioned_docs/version-v0.9.0/intro.md b/docs/versioned_docs/version-v0.9.0/intro.md
new file mode 100644
index 0000000000..93c3f09525
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/intro.md
@@ -0,0 +1,20 @@
+---
+sidebar_position: 1
+description: Welcome to the Multi-Operator Validator Network
+---
+
+# Introduction
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 10 members that are spread across the world.
+
+The core team is building the Obol Network, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## About this documentation
+
+This manual is aimed at developers and stakers looking to utilize the Obol Network for multi-party staking. To contribute to this documentation, head over to our [Github repository](https://github.com/ObolNetwork/obol-docs) and file a pull request.
+
+## Need assistance?
+
+If you have any questions about this documentation or are experiencing technical problems with any Obol-related projects, head on over to our [Discord](https://discord.gg/n6ebKsX46w) where a member of our team or the community will be happy to assist you.
diff --git a/docs/versioned_docs/version-v0.9.0/sc/01_introducing-obol-managers.md b/docs/versioned_docs/version-v0.9.0/sc/01_introducing-obol-managers.md
new file mode 100644
index 0000000000..92e96695e1
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/sc/01_introducing-obol-managers.md
@@ -0,0 +1,59 @@
+---
+description: How does the Obol Network look on-chain?
+---
+
+# Obol Manager Contracts
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators.
+
+## Withdrawal Recipients
+
+The key to a distributed validator is understanding how a withdrawal is processed. The most common way to handle a withdrawal of a validator operated by a number of different people is to use an immutable withdrawal recipient contract, with the distribution rules hardcoded into it.
+
+For the time being Obol uses `0x01` withdrawal credentials, and intends to upgrade to [0x03 withdrawal credentials](https://ethresear.ch/t/0x03-withdrawal-credentials-simple-eth1-triggerable-withdrawals/10021) when smart contract initiated exits are enabled.
+
+### Ownable Withdrawal Recipient
+
+```solidity title="WithdrawalRecipientOwnable.sol"
+// SPDX-License-Identifier: MIT
+
+pragma solidity ^0.8.0;
+
+import "@openzeppelin/contracts/access/Ownable.sol";
+
+contract WithdrawalRecipientOwnable is Ownable {
+ receive() external payable {}
+
+ function withdraw(address payable recipient) public onlyOwner {
+ recipient.transfer(address(this).balance);
+ }
+}
+
+```
+
+An Ownable Withdrawal Recipient is the most basic type of withdrawal recipient contract. It implements Open Zeppelin's `Ownable` interface and allows one address to call the `withdraw()` function, which pulls all ether from the address into the owners address (or another address specified). Calling withdraw could also fund a fee split to the Obol Network, and/or the protocol that has deployed and instantiated this DV.
+
+### Immutable Withdrawal Recipient
+
+An immutable withdrawal recipient is similar to an ownable recipient except the owner is hardcoded during construction and the ability to change ownership is removed. This contract should only be used as part of a larger smart contract system, for example a yearn vault strategy might use an immutable recipient contract as its vault address should never change.
+
+## Registries
+
+### Deposit Registry
+
+The Deposit Registry is a way for the deposit and activation of distributed validators to be two separate processes. In the simple case for DVs, a registry of deposits is not required. However when the person depositing the ether is not the same entity as the operators producing the deposits, a coordination mechanism is needed to make sure only one 32 eth deposit is submitted per DV. A deposit registry can prevent double deposits by ordering the allocation of ether to validator deposits.
+
+### Operator Registry
+
+If the submission of deposits to a deposit registry needs to be gated to only whitelisted addresses, a simple operator registry may serve as a way to control who can submit deposits to the deposit registry.
+
+### Validator Registry
+
+If validators need to be managed on chain programatically rather than manually with humans triggering exits, a validator registry can be used. Deposits getting activated get an entry into the validator registry, and validators using 0x03 exits get staged for removal from the registry. This registry can be used to coordinate many validators with similar operators and configuration.
+
+:::note
+
+Validator registries depend on the as of yet unimplemented `0x03` validator exit feature.
+
+:::
+
diff --git a/docs/versioned_docs/version-v0.9.0/sc/README.md b/docs/versioned_docs/version-v0.9.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v0.9.0/testnet.md b/docs/versioned_docs/version-v0.9.0/testnet.md
new file mode 100644
index 0000000000..1f08fd224f
--- /dev/null
+++ b/docs/versioned_docs/version-v0.9.0/testnet.md
@@ -0,0 +1,189 @@
+---
+sidebar_position: 13
+---
+
+# Testnets
+
+Over the coming quarters, Obol Labs has coordinated, and will continue to coordinate and host, a number of progressively larger testnets to help harden the charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the intended testnet roadmap, the features to be completed by each testnet, and their target start dates and durations.
+
+## Testnets
+
+* [x] [Devnet 1](testnet.md#devnet-1)
+* [x] [Devnet 2](testnet.md#devnet-2)
+* [ ] [Athena Public Testnet 1](testnet.md#athena-public-testnet-1)
+* [ ] [Bia Attack Net](testnet.md#bia-attack-net)
+* [ ] [Circe Public Testnet 2](testnet.md#circe-public-testnet-2)
+* [ ] [Demeter Red/Blue Net](testnet.md#demeter-redblue-net)
+
+### Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch) on a single machine, with a remote consensus client. The keys were created locally in charon, and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+* User test a first tutorial flow to get the kinks out of it. Devnet 2 will be a group flow, so we need to get the solo flow right first
+* Prove that the distributed validator paradigm works, with 4 separate VC implementations operating together as one logical validator
+* Get the basics of monitoring in place for the following testnet, where accurate monitoring will be important due to charon running across a network.
+
+**Test Artifacts:**
+
+* Responding to a typeform, an operator will list:
+ * The public key of the distributed validator
+ * Any difficulties they incurred in the cluster instantiation
+ * Any deployment variations they would like to see early support for (e.g. windows, cloud, dappnode etc.)
+
+### Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows _together_ for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using docker compose to spin up 4 charon clients and 4 different validator clients (lighthouse, teku, lodestar and vouch), each on their own machine at each operator's home or place of choosing, running at least a kiln consensus client.
+
+As part of this testnet, operators avoided exposing charon to the public internet on a static IP address through the use of Obol hosted relay nodes.
+
+This devnet was also the first time `charon dkg` was tested with users. The launchpad was not used, and this dkg was triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+
+A core focus of this devnet was to collect network performance data. This was the first time charon was run on variable, non-virtual networks (i.e. the real internet). Effective collection of performance data during this devnet was prioritised, to enable gathering even higher-signal performance data at scale during the public testnets.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+* User test a first dkg flow
+* User test the complexity of exposing charon to the public internet
+* Have block proposals in place
+* Build up the analytics plumbing to ingest network traces from dump files or distributed tracing endpoints
+
+### Athena Public Testnet 1
+
+With tutorials for solo and group flows having been developed and refined, the goal for public testnet 1 is to get distributed validators into the hands of the wider Proto Community for the first time.
+
+The core focus of this testnet is the onboarding experience. This will be the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) and in as many languages as possible (we will need to engage language moderators on Discord).
+
+The core output from this testnet is a large number of typeform submissions, for a feedback form we have refined since devnets 1 and 2.
+
+This will be an unincentivised testnet, and will form the basis for figuring out a sybil resistance mechanism for later incentivised testnets.
+
+**Participants:** Obol Proto Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Target start date:** August 2022
+
+**Duration:** 2 week cluster setup, 4 weeks operation
+
+**Goals:**
+
+* Engage Obol Proto Community
+* Make deploying Ethereum validator nodes accessible
+* Generate a huge backlog of bugs, feature requests, platform requests and integration requests
+
+**Registration Form:** [Here](https://obol.typeform.com/AthenaTestnet)
+
+### Bia Attack Net
+
+At this point, we have tested best-effort, happy-path validation with supportive participants. The next step towards a mainnet ready client is to begin to disrupt and undermine it as much as possible.
+
+This testnet needs a consensus implementation as a hard requirement, where it may have been optional for Athena. The intention is to create a number of testing tools to facilitate the disruption of charon, including releasing a p2p network abuser, a fuzz testing client, k6 scripts for load testing/hammering RPC endpoints, and more.
+
+The aim is to find as many memory leaks, DoS-vulnerable endpoints and operations, missing signature verifications, and other issues as possible. This testnet may be centered around a hackathon if suitable.
+
+**Participants:** Obol Proto Community, Immunefi Bug Bounty searchers
+
+**State:** Client Hardening
+
+**Network:** Kiln or a Merged Test Network (e.g. Görli)
+
+**Target start date:** September 2022
+
+**Duration:** 2-4 weeks operation, depending on how resilient the clients are
+
+**Goals:**
+
+* Break charon in multiple ways
+* Improve DoS resistance
+
+### Circe Public Testnet 2
+
+After working through the vulnerabilities hopefully surfaced during the attack net, it becomes time to take the stakes up a notch. The second public testnet for Obol will be in partnership with the Gnosis Chain, and will use validators with real skin in the game.
+
+This is intended to be the first time that Distributed Validator tokenisation comes into play. Obol intends to let candidate operators form groups, create keys that point to pre-defined, Obol-controlled withdrawal addresses, and submit a typeform application to our testnet team including their created deposit data, manifest lockfile and exit data (so we can verify the validator pubkey they are submitting is a DV).
+
+Once the testnet team has verified that the operators are real humans (not sybil attacking the testnet) and have created legitimate DV keys, their validator will be activated with Obol-provided GNO.
+
+At the end of the testnet period, all validators will be exited, and their performance will be judged to decide the incentivisation they will receive.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community
+
+**State:** MVP
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** Q4 2022
+
+**Duration:** 6 weeks
+
+**Goals:**
+
+* Broad community participation
+* First Obol Incentivised Testnet
+* Distributed Validator returns are competitive versus single validator clients
+* Run an unreasonably large percentage of an incentivised test network to see the network performance at scale if a majority of validators moved to DV architectures
+
+### Demeter Red/Blue Net
+
+The final planned testnet before a prospective look at mainnet deployment is a testnet that takes inspiration from the cyber security industry and makes use of Red Teams and Blue Teams.
+
+In cyber security, the red team is on offence and the blue team is on defence. In Obol's case, operators will be grouped into clusters based on application and assigned to either the red team or the blue team in secret. Once the validators are active, it will be the red team's goal to disrupt the cluster to the best of their ability, and their rewards will be based on how much worse the cluster performs than optimal.
+
+The blue team members will aim to keep their cluster online and signing. If they can keep their distributed validator online for the majority of the time despite the red team's best efforts, they will receive an outsized reward versus the red team reward.
+
+The aim of this testnet is to show that even with directly incentivised byzantine actors, a distributed validator client can remain online and timely in its validation, further cementing trust in the client's mainnet readiness.
+
+**Participants:** Obol Proto Community, Gnosis Community, Ethereum Staking Community, Immunefi Bug Bounty searchers
+
+**State:** Mainnet ready
+
+**Network:** Merged Gnosis Chain
+
+**Target start date:** Q4 2022
+
+**Duration:** 4 weeks
+
+**Goals:**
+
+* Even with incentivised byzantine actors, distributed validators can reliably stay online
+* Charon nodes cannot be DoS'd
+* Demonstrate that fault tolerant validation is real, safe and cost competitive.
+* Charon is feature complete and ready for audit
diff --git a/docs/versioned_docs/version-v1.0.0/README.md b/docs/versioned_docs/version-v1.0.0/README.md
new file mode 100644
index 0000000000..c87a7c5c9b
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/README.md
@@ -0,0 +1,2 @@
+# version-v1.0.0
+
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/README.md b/docs/versioned_docs/version-v1.0.0/advanced/README.md
new file mode 100644
index 0000000000..965416d689
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/README.md
@@ -0,0 +1,2 @@
+# advanced
+
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/adv-docker-configs.md b/docs/versioned_docs/version-v1.0.0/advanced/adv-docker-configs.md
new file mode 100644
index 0000000000..2c5a0ebd63
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/adv-docker-configs.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 12
+description: Use advanced docker-compose features to have more flexibility and power to change the default configuration.
+---
+
+# Advanced Docker Configs
+
+:::info
+This section is intended for *docker power users*, i.e. those who are familiar with working with `docker compose` and want more flexibility and power to change the default configuration.
+:::
+
+We use docker compose's "Multiple Compose Files" feature, which provides a very powerful way to override any configuration in `docker-compose.yml` without needing to modify git-checked-in files, since modifying those results in conflicts when upgrading this repo.
+See [this](https://docs.docker.com/compose/extends/#multiple-compose-files) for more details.
+
+There are some additional compose files in [this repository](https://github.com/ObolNetwork/charon-distributed-validator-node/), `compose-debug.yml` and `docker-compose.override.yml.sample`, along with the default `docker-compose.yml` file, that you can use for this purpose.
+
+- `compose-debug.yml` contains some additional containers that developers can use for debugging, like `jaeger`. To achieve this, you can run:
+
+```shell
+docker compose -f docker-compose.yml -f compose-debug.yml up
+```
+
+- `docker-compose.override.yml.sample` is intended to override the default configuration provided in `docker-compose.yml`. This is useful when, for example, you wish to add port mappings or want to disable a container.
+
+- To use it, just copy the sample file to `docker-compose.override.yml` and customise it to your liking. Please create this file ONLY when you want to tweak something. This is because the default override file is empty and docker errors if you provide an empty compose file.
+
+```shell
+cp docker-compose.override.yml.sample docker-compose.override.yml
+
+# Tweak docker-compose.override.yml and then run docker compose up
+docker compose up
+```
+
+- You can also run all these compose files together. This is desirable when you want to use both features. For example, you may want to have some debugging containers AND also want to override some defaults. To achieve this, you can run:
+
+```shell
+docker compose -f docker-compose.yml -f docker-compose.override.yml -f compose-debug.yml up
+```
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/deployment-best-practices.md b/docs/versioned_docs/version-v1.0.0/advanced/deployment-best-practices.md
new file mode 100644
index 0000000000..0e8c77de6b
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/deployment-best-practices.md
@@ -0,0 +1,94 @@
+---
+sidebar_position: 11
+description: >-
+ DV Deployment best practices, for running an optimal Distributed Validator
+ setup at scale.
+---
+
+# Deployment Best Practices
+
+The following are a selection of best practices for deploying Distributed Validator Clusters at scale on mainnet.
+
+## Hardware Specifications
+
+The following specifications are recommended for bare metal machines for clusters intending to run a significant number of mainnet validators:
+
+### Minimum Specs
+
+* A CPU with 4+ cores, favouring high clock speed over more cores (>3.0GHz, or a cpubenchmark [single thread](https://www.cpubenchmark.net/singleThread.html) score of >2,500)
+* 16GB of RAM
+* 2TB+ free SSD disk space (for mainnet)
+* 10 Mb/s internet bandwidth
+
+### Recommended Specs for extremely large clusters
+
+* A CPU with 8+ physical cores, with clock speeds >3.5GHz
+* 32GB+ RAM (depending on the EL+CL clients)
+* 4TB+ NVMe storage
+* 25 Mb/s internet bandwidth
+
+An NVMe storage device is **highly recommended for optimal performance**, offering nearly 10x more random read/writes per second than a standard SSD.
+
+Inadequate hardware (low-performance virtualized servers and/or slow HDD storage) has been observed to hinder performance, indicating the necessity of provisioning adequate resources. **CPU clock speed and Disk throughput+latency are the most important factors for running a performant validator.**
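+
+Since disk throughput and latency matter so much, it can be worth benchmarking a candidate machine's storage before committing it to a cluster. A rough sketch using the common `fio` tool (the target directory, file size, and runtime below are illustrative, not Obol requirements):
+
+```shell
+# 4k random read/write benchmark of the disk that will hold chain data.
+fio --name=disk-test --directory=/var/lib/chain-data \
+    --rw=randrw --bs=4k --size=4G --ioengine=libaio --direct=1 \
+    --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting
+
+# Remove the temporary test files afterwards.
+rm -f /var/lib/chain-data/disk-test*
+```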
+
+Note that the Charon client itself takes less than 1GB of RAM and minimal CPU load. In order to optimize both performance and cost-effectiveness, it is recommended to prioritize physical over virtualized setups. Such configurations typically offer greater performance and minimize overhead associated with virtualization, contributing to improved efficiency and reliability.
+
+When constructing a DV cluster, it is important to be conscious of whether the cluster runs across cloud providers or stays within a single provider's private networking. This can impact the bandwidth and latency of the connections between nodes, as well as the egress costs of the cluster (Charon communicates relatively little with its peers, averaging tens of kB/s in large mainnet clusters). Ideally, using bare metal machines in different locations within the same continent, spread across at least two providers, balances redundancy and performance.
+
+## Intra-cluster Latency
+
+It is recommended to **keep peer ping latency below 235 milliseconds for all peers in a cluster**. Charon should report a consensus duration averaging under 1 second through its prometheus metric `core_consensus_duration_seconds_bucket` and associated grafana panel titled "Consensus Duration".
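+
+If the bundled Prometheus is reachable from your host (its default port `9090` may need to be mapped via `docker-compose.override.yml`; see the advanced docker configs page), you can check this without opening Grafana. A sketch of the query:
+
+```shell
+# 95th percentile consensus duration over the last 5 minutes (should stay well under 1s).
+curl -s 'http://localhost:9090/api/v1/query' \
+  --data-urlencode 'query=histogram_quantile(0.95, sum by (le) (rate(core_consensus_duration_seconds_bucket[5m])))'
+```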
+
+In cases where latencies exceed these thresholds, efforts should be made to reduce the physical distance between nodes or optimize Internet Service Provider (ISP) settings accordingly. Ensure all nodes are connecting to one another directly rather than through a relay.
+
+For high-scale, performance-focused deployments, inter-peer latency of <25ms is optimal, along with an average consensus duration under 100ms.
+
+## Node Locations
+
+For optimal performance and high availability, it is recommended to provision machines or virtual machines (VMs) within the same continent. This practice helps minimize potential latency issues, ensuring efficient communication and responsiveness. Consider maps of [undersea internet cables](https://www.submarinecablemap.com/) when selecting low-latency locations across oceans.
+
+## Peer Connections
+
+Charon clients can establish connections with one another in two ways: either through a third publicly accessible server known as [a relay](../charon/charon-cli-reference.md#host-a-relay) or directly with one another if they can establish a connection. The former is known as a relay connection and the latter is known as a direct connection.
+
+It is important that all nodes in a cluster be directly connected to one another - this can halve the latency between them and reduces bandwidth constraints significantly. Opening Charon's p2p port (the default is `3610`) to the Internet, or configuring your router's NAT gateway to permit connections to your Charon client, is what is required to facilitate a direct connection between clients.
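+
+As an illustration only (your firewall and NAT setup will differ), on a Linux host using `ufw` the default p2p port could be opened, and then checked from a machine outside your network, like so:
+
+```shell
+# Allow inbound TCP on charon's default p2p port.
+sudo ufw allow 3610/tcp
+
+# From a machine outside your network, verify the port is reachable (netcat).
+nc -vz <your-public-ip> 3610
+```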
+
+## Instance Independence
+
+Each node in the cluster should have its own independent beacon node (EL+CL) and validator client as well as Charon client. Sharing beacon nodes between the different nodes would potentially impact the fault tolerance of the cluster and as a result should be avoided.
+
+## Placement of Charon clients
+
+If you wish to divide a Distributed Validator node across multiple physical or virtual machines, locate the Charon client on the EL/CL machine instead of the VC machine. This setup reduces latency from Charon to the consensus layer, and keeps the public-internet-connected clients separate from the clients that hold the validator private keys. Be sure to use encrypted communication between your VC and the Charon client, whether through a cloud-provided network, a self-managed network tunnel, a VPN, a Kubernetes [CNI](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/), or another means.
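+
+As one hedged example of such a tunnel, assuming the VC machine has SSH access to the machine running Charon and that Charon's validator API listens on its default port `3600`:
+
+```shell
+# Run on the VC machine: forward local port 3600 to Charon's validator API over SSH,
+# then point the validator client at http://localhost:3600.
+ssh -N -L 3600:localhost:3600 user@charon-machine
+```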
+
+## Node Configuration
+
+Cluster sizes that allow for Byzantine Fault Tolerance are recommended, as they are safer than clusters that are only Crash Fault Tolerant (see [Cluster Size and Resilience](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/charon/cluster-configuration/README.md#cluster-size-and-resilience) for reference).
+
+## MEV-Boost Relays
+
+MEV relays are configured at the Consensus Layer or MEV-boost client level. Refer to our [guide](quickstart-builder-api.md) to ensure all necessary configuration has been applied to your clients. As with all validators, low latency during proposal opportunities is extremely important. By default, MEV-Boost waits for all configured relays to return a bid, or will timeout if any have not returned a bid within 950ms. This default timeout is generally too slow for a distributed cluster (think of this time as additive to the time it takes the cluster to come to consensus, both of which need to happen within a 2 second window for optimal proposal broadcasting). It is likely better to only list relays that are located geographically near your node, so that once all relays respond (e.g. in < 50ms) your cluster will move forward with the proposal.
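+
+For illustration, a hypothetical MEV-Boost invocation listing only two nearby relays and tightening the getHeader timeout might look like the following (relay URLs are placeholders, and flag names can differ between mev-boost releases, so confirm against `mev-boost --help`):
+
+```shell
+# Placeholder relay URLs; real entries include the relay's public key.
+mev-boost -mainnet \
+  -relays "https://0xRELAY_PUBKEY@relay-near-you.example.org,https://0xRELAY_PUBKEY@second-relay.example.org" \
+  -request-timeout-getheader 750
+```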
+
+## Client Diversity
+
+Each cluster should consist of a combination of your preferred consensus, execution, and validator clients. It is recommended to include multiple client implementations in order to have healthy client diversity within the cluster; ideally, if any single client type fails, the number of affected nodes should be less than the fault tolerance of the cluster, so the validators stay online and do not do anything slashable.
+
+Remote signers can be included as well, such as Web3signer or Dirk. A diversity of private key infrastructure setups further reduces the risk of total key compromise.
+
+Tested client combinations can be found in the [release notes](https://github.com/ObolNetwork/charon/releases) for each Charon version.
+
+## Metrics Monitoring
+
+As requested by Obol Labs, node operators can push [standard monitoring](obol-monitoring.md) (Prometheus) and logging (Loki) data to Obol Labs' core team's cloud infrastructure for in-depth analysis of performance data and to assist during potential issues that may arise. Our recommendation is that operators also independently store the node health and validator performance information they collect over the course of the validator lifecycle.
+
+## Obol Splits
+
+Leveraging [Obol Splits](../sc/introducing-obol-splits.md) smart contracts allows for non-custodial fund handling and ongoing net customer payouts. Obol Splits ensure no commingling of funds across customers, and maintain full non-custodial integrity. Read more about Obol Splits [here](../faq/general.md#obol-splits).
+
+## Deposit Process
+
+Deposit processes can be done via an automated script. This can be used for DV clusters until they reach the desired number of validators.
+
+It is important to allow time for the validators to be activated (usually < 24 hours).
+
+Consider using batching smart contracts to reduce the gas cost of a script, but take caution in their integration not to make an invalid deposit.
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/monitoring.md b/docs/versioned_docs/version-v1.0.0/advanced/monitoring.md
new file mode 100644
index 0000000000..345fc76c28
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/monitoring.md
@@ -0,0 +1,90 @@
+---
+sidebar_position: 2
+description: >-
+ Add monitoring credentials to help the Obol Team monitor the health of your
+ cluster
+---
+
+# Monitoring your Node
+
+This comprehensive guide will assist you in effectively monitoring your Charon clusters and setting up alerts by running your own Prometheus and Grafana server. If you want to use Obol's [public dashboard](https://grafana.monitoring.gcp.obol.tech/d/d895e47a-3c2d-46b7-9b15-8f31202681af/clusters-aggregate-view?orgId=6) instead of running your own servers, refer to [this section](obol-monitoring.md) of the Obol docs, which explains how to push Prometheus metrics to Obol.
+
+To explain quickly, Prometheus collects and stores the metrics and Grafana visualizes them. To learn more about Prometheus and Grafana, visit [here](https://grafana.com/docs/grafana/latest/getting-started/get-started-grafana-prometheus/). If you are using the [**CDVN repository**](https://github.com/ObolNetwork/charon-distributed-validator-node) or the [**CDVC repository**](https://github.com/ObolNetwork/charon-distributed-validator-cluster), then Prometheus and Grafana are part of the docker compose file and will be installed when you run `docker compose up`.
+
+The local Grafana server will have a few pre-built dashboards:
+
+1. Charon Overview
+
+   This is the main dashboard that provides all the relevant details about the Charon node, for example - peer connectivity, duty completion, health of the beacon node and downstream validator, etc. To open it, navigate to the `charon-distributed-validator-node` directory and open the following URI in the browser: `http://localhost:3000/d/d6qujIJVk/`.
+2. Single Charon Node Dashboard (deprecated)
+
+   This is an older dashboard for Charon node monitoring which is now deprecated. If you are still using it, we highly recommend moving to the Charon Overview dashboard for the most up-to-date panels.
+3. Charon Log Dashboard
+
+   This dashboard can be used to query the logs emitted while running your Charon node. It utilises [Grafana Loki](https://grafana.com/oss/loki/). This dashboard is not active by default and should only be used in debug mode. Refer to the [advanced docker configs](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/advanced/adv-docker-configs/README.md) section for how to set up debug mode.
+
+The following table lists the alerts you may encounter for a Charon node and how to troubleshoot them (a quick way to inspect the `app_monitoring_readyz` metric that several of them rely on follows the table):
+
+| Alert Name | Description | Troubleshoot |
+| ------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| ClusterBeaconNodeDown | This alert is activated when the beacon node in the cluster is offline. The beacon node is crucial for validating transactions and producing new blocks. Its unavailability could disrupt the overall functionality of the cluster. | Most likely data is corrupted. Wipe data from the point you know data was corrupted and restart the beacon node so it can sync again. |
+| ClusterMissedAttestations | This alert indicates that there have been missed attestations in the cluster. Missed attestations may suggest that validators are not operating correctly, compromising the security and efficiency of the cluster. | This alert is triggered when 3 attestations are missed within 2 minutes. Check if the minimum threshold of peers are online. If so, check for beacon node API errors and downstream validator errors using Loki. Lastly, run in debug mode using `compose-debug.yml` (see advanced docker configs). |
+| ClusterInUnknownStatus | This alert is designed to activate when a node within the cluster is detected to be in an unknown state. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` metric is 0. | This is most likely a bug in Charon. Report to us via [Discord](https://discord.com/channels/849256203614945310/970759460693901362). |
+| ClusterInsufficientPeers | This alert is set to activate when the number of peers for a node in the cluster is insufficient. The condition is evaluated by checking whether the maximum of the `app_monitoring_readyz` equals 4. | If you are running group cluster, check with other peers to troubleshoot the issue. If you are running solo cluster, look into other machines running the DVs to find the problem. |
+| ClusterFailureRate | This alert is activated when the failure rate of the cluster exceeds a certain threshold, more specifically - more than 5% failures in duties in the last 6 hours. | Check the upstream and downstream dependencies, latency and hardware issues. |
+| ClusterVCMissingValidators | This alert is activated if any validators in the cluster are missing. This happens when validator client cannot load validator keys in the past 10 minutes. | Find if validator keys are missing and load them. |
+| ClusterHighPctFailedSyncMsgDuty | This alert is activated if a high percentage of sync message duties failed in the cluster. The alert is activated if the sum of the increase in failed duties tagged with "sync\_message" in the last hour divided by the sum of the increase in total duties tagged with "sync\_message" in the last hour is greater than 10%. | This may be due to limitations in beacon node performance on nodes within the cluster. In charon, this duty is the most demanding, however, an increased failure rate does not impact rewards. |
+| ClusterNumConnectedRelays | This alert is activated if the number of connected relays in the cluster falls to 0. | Make sure correct relay is configured. If you still get the error report to us via [Discord](https://discord.com/channels/849256203614945310/970759460693901362). |
+| PeerPingLatency | This alert is activated if the 90th percentile of the ping latency to the peers in a cluster exceeds 400ms within 2 minutes. | Make sure to set up stable and high speed internet connection. If you have geographically distributed nodes, make sure latency does not go over 250 ms. |
+| ClusterBeaconNodeZeroPeers | This alert is activated when beacon node cannot find peers. | Go to docs of beacon node client to troubleshoot. Make sure there is no port overlap and p2p discovery is open. |
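+
+Several of the alerts above key off Charon's `app_monitoring_readyz` metric. If Charon's monitoring port `3620` is mapped to your host (it may not be by default; a port mapping can be added via `docker-compose.override.yml`), you can inspect the raw value directly:
+
+```shell
+# Inspect charon's readiness metric; the reported value encodes the node's current state.
+curl -s http://localhost:3620/metrics | grep app_monitoring_readyz
+```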
+
+## Setting Up a Contact Point
+
+When alerts are triggered, they are routed to contact points according to notification policies. For this, contact points must be added. Grafana supports several kinds of contact points, such as email, PagerDuty, Discord, Slack, and Telegram. This section shows how to add a Discord channel as a contact point.
+
+1. On the left nav bar in the Grafana console, under the `Alerts` section, click on `Contact points`.
+2. Click on `+ Add contact point`. It will show the following page. Choose Discord in the `Integration` drop down.
+
+ 
+3. Give the contact point a descriptive name. Create a channel in Discord and copy its `webhook url`. Once done, click `Save contact point` to finish.
+4. When alerts fire, they are sent without the cluster detail variables filled in. For example, the `cluster_hash` variable is missing here: `cluster_hash = {{.cluster_hash}}`. This is done to save disk space. To find the details, run `docker compose -f docker-compose.yml -f compose-debug.yml up`. More details [here](https://docs.obol.tech/docs/advanced/adv-docker-configs).
+
+## Best Practices for Monitoring Charon Nodes & Cluster
+
+* **Establish Baselines**: Familiarize yourself with the normal operation metrics like CPU, memory, and network usage. This will help you detect anomalies.
+* **Define Key Metrics**: Set up alerts for essential metrics, encompassing both system-level and Charon-specific ones.
+* **Configure Alerts**: Based on these metrics, set up actionable alerts.
+* **Monitor Network**: Regularly assess the connectivity between nodes and the network.
+* **Perform Regular Health Checks**: Consistently evaluate the status of your nodes and clusters.
+* **Monitor System Logs**: Keep an eye on logs for error messages or unusual activities.
+* **Assess Resource Usage**: Ensure your nodes are neither over- nor under-utilized.
+* **Automate Monitoring**: Use automation to ensure no issues go undetected.
+* **Conduct Drills**: Regularly simulate failure scenarios to fine-tune your setup.
+* **Update Regularly**: Keep your nodes and clusters updated with the latest software versions.
+
+## Third-Party Services for Uptime Testing
+
+* [updown.io](https://updown.io/)
+* [Grafana synthetic Monitoring](https://grafana.com/grafana/plugins/grafana-synthetic-monitoring-app/)
+
+## Key metrics to watch to verify node health based on jobs
+
+**CPU Usage**: High or spiking CPU usage can be a sign of a process demanding more resources than it should.
+
+**Memory Usage**: If a node is consistently running out of memory, it could be due to a memory leak or simply under-provisioning.
+
+**Disk I/O**: Slow disk operations can cause applications to hang or delay responses. High disk I/O can indicate storage performance issues or a sign of high load on the system.
+
+**Network Usage**: High network traffic or packet loss can signal network configuration issues, or that a service is being overwhelmed by requests.
+
+**Disk Space**: Running out of disk space can lead to application errors and data loss.
+
+**Uptime**: The amount of time a system has been up without any restarts. Frequent restarts can indicate instability in the system.
+
+**Error Rates**: The number of errors encountered by your application. This could be 4xx/5xx HTTP errors, exceptions, or any other kind of error your application may log.
+
+**Latency**: The delay before a transfer of data begins following an instruction for its transfer.
+
+It is also important to check the following (a quick sketch of these checks follows the list):
+
+* NTP clock skew;
+* process restarts and failures (e.g. through `node_systemd`);
+* high error and panic log counts (worth alerting on).
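+
+A rough sketch of these checks on a systemd-based Linux host (the `docker` unit is only an example; substitute whatever units run your node):
+
+```shell
+# Clock synchronisation status (look for "System clock synchronized: yes").
+timedatectl status
+
+# Recent restarts or failures of an example unit, and a count of error/panic log lines.
+systemctl status docker
+journalctl -u docker --since "24 hours ago" | grep -ciE "error|panic"
+```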
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/obol-monitoring.md b/docs/versioned_docs/version-v1.0.0/advanced/obol-monitoring.md
new file mode 100644
index 0000000000..fe7d496a78
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/obol-monitoring.md
@@ -0,0 +1,49 @@
+---
+sidebar_position: 3
+description: Add monitoring credentials to help the Obol Team monitor the health of your cluster
+---
+
+# Push Metrics to Obol Monitoring
+
+:::info
+This is **optional** and does not confer any special privileges within the Obol Network.
+:::
+
+You may have been provided with **Monitoring Credentials** used to push distributed validator metrics to Obol's central Prometheus cluster to monitor, analyze, and improve your Distributed Validator Cluster's performance.
+
+The provided credential needs to be added to `prometheus/prometheus.yml`, replacing `$PROM_REMOTE_WRITE_TOKEN`, and will look like:
+
+```shell
+obol20tnt8UC...
+```
+
+The updated `prometheus/prometheus.yml` file should look like:
+
+```yaml
+global:
+ scrape_interval: 30s # Set the scrape interval to every 30 seconds.
+ evaluation_interval: 30s # Evaluate rules every 30 seconds.
+
+remote_write:
+ - url: https://vm.monitoring.gcp.obol.tech/write
+ authorization:
+ credentials: obol20tnt8UC-your-credential-here...
+ write_relabel_configs:
+ - source_labels: [job]
+ regex: "charon"
+ action: keep # Keeps charon metrics and drop metrics from other containers.
+
+scrape_configs:
+ - job_name: "nethermind"
+ static_configs:
+ - targets: ["nethermind:8008"]
+ - job_name: "lighthouse"
+ static_configs:
+ - targets: ["lighthouse:5054"]
+ - job_name: "charon"
+ static_configs:
+ - targets: ["charon:3620"]
+ - job_name: "lodestar"
+ static_configs:
+ - targets: [ "lodestar:5064" ]
+```
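+
+As a convenience, the placeholder can be substituted from the command line. A sketch assuming GNU `sed` and that your token is stored in a `PROM_TOKEN` shell variable (a name chosen here for illustration):
+
+```shell
+# Replace the $PROM_REMOTE_WRITE_TOKEN placeholder with your monitoring token.
+PROM_TOKEN="obol20tnt8UC..."
+sed -i "s|\$PROM_REMOTE_WRITE_TOKEN|${PROM_TOKEN}|" prometheus/prometheus.yml
+```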
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/quickstart-builder-api.md b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-builder-api.md
new file mode 100644
index 0000000000..0ce80f7762
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-builder-api.md
@@ -0,0 +1,165 @@
+---
+sidebar_position: 1
+description: Run a distributed validator cluster with the builder API (MEV-Boost)
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Enable MEV
+
+This quickstart guide focuses on configuring the builder API for Charon and supported validator and consensus clients.
+
+## Getting started with Charon & the Builder API
+
+Running a distributed validator cluster with the builder API enabled will give the validators in the cluster access to the builder network. This builder network is a network of "Block Builders"
+who work with MEV searchers to produce the most valuable blocks a validator can propose.
+
+[MEV-Boost](https://boost.flashbots.net/) is one such product from Flashbots that enables you to ask multiple
+block relays (who communicate with the "Block Builders") for blocks to propose. The block that pays the largest reward to the validator will be signed and returned to the relay for broadcasting to the wider
+network. The end result for the validator is generally an increased APR as they receive some share of the MEV.
+
+:::info
+Before completing this guide, please check your cluster version, which can be found inside the `cluster-lock.json` file. If you are using cluster-lock version `1.7.0` or higher, Charon seamlessly accommodates all validator client implementations within a MEV-enabled distributed validator cluster.
+
+For clusters with a `cluster-lock.json` version `1.6.0` and below, Charon is compatible only with [Teku](https://github.com/ConsenSys/teku). Use the version history feature of this documentation to see the instructions for configuring a cluster in that manner (`v0.16.0`).
+:::
+
+## Client configuration
+
+:::note
+You need to add CLI flags to your consensus client, Charon client, and validator client, to enable the builder API.
+
+You need all operators in the cluster to have their nodes properly configured to use the builder API, or you risk missing a proposal.
+:::
+
+### Charon
+
+Charon supports the builder API via the `--builder-api` flag. To use the builder API, simply add this flag to the `charon run` command:
+
+```shell
+charon run --builder-api
+```
+
+### Consensus Clients
+
+The following flags need to be configured on your chosen consensus client. A Flashbots relay URL is provided for example purposes; you should choose a relay that suits your preferences from [this list](https://github.com/eth-educators/ethstaker-guides/blob/main/MEV-relay-list.md#mev-relay-list-for-mainnet).
+
+
+
+ Teku can communicate with a single relay directly:
+
+
+ {String.raw`teku --builder-endpoint="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`teku --builder-endpoint=http://mev-boost:18550`}
+
+
+
+
+ Lighthouse can communicate with a single relay directly:
+
+
+ {String.raw`lighthouse bn --builder "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ Or you can configure it to communicate with a local MEV-boost sidecar to configure multiple relays:
+
+
+ {String.raw`lighthouse bn --builder "http://mev-boost:18550"`}
+
+
+
+
+ Prysm can communicate with a single relay directly:
+
+
+ {String.raw`prysm beacon-chain --http-mev-relay "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
+ Nimbus can communicate with a single relay directly:
+
+
+ {String.raw`nimbus_beacon_node \
+ --payload-builder=true \
+ --payload-builder-url="https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+ You should also consider adding --local-block-value-boost 3 as a flag, to favour locally built blocks if they are within 3% in value of the relay block, to improve the chances of a successful proposal.
+
+
+ Lodestar can communicate with a single relay directly:
+
+
+ {String.raw`node ./lodestar --builder --builder.urls "https://0xac6e77dfe25ecd6110b8e780608cce0dab71fdd5ebea22a16c0205200f2f8e2e3ad3b71d3499c54ad14d6c21b41a37ae@boost-relay.flashbots.net"`}
+
+
+
+
+
+### Validator Clients
+
+The following flags need to be configured on your chosen validator client
+
+
+
+
+
+ {String.raw`teku validator-client --validators-builder-registration-default-enabled=true`}
+
+
+
+
+
+
+
+ {String.raw`lighthouse vc --builder-proposals`}
+
+
+
+
+
+
+ {String.raw`prysm validator --enable-builder`}
+
+
+
+
+
+
+ {String.raw`nimbus_validator_client --payload-builder=true`}
+
+
+
+
+
+
+ {String.raw`node ./lodestar validator --builder="true" --builder.selection="builderonly"`}
+
+
+
+
+
+## Verify your cluster is correctly configured
+
+It can be difficult to confirm everything is configured correctly with your cluster until a proposal opportunity arrives, but here are some things you can check.
+
+When your cluster is running, you should see if Charon is logging something like this each epoch:
+
+```log
+13:10:47.094 INFO bcast Successfully submitted validator registration to beacon node {"delay": "24913h10m12.094667699s", "pubkey": "84b_713", "duty": "1/builder_registration"}
+```
+
+This indicates that your Charon node is successfully registering with the relay for a blinded block when the time comes.
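+
+One quick way to check for these log lines, assuming the standard docker compose setup from the CDVN repository (run from the repository directory):
+
+```shell
+# Show recent validator registration submissions from charon's logs.
+docker compose logs charon | grep builder_registration | tail
+```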
+
+If you are using the [ultrasound relay](https://relay.ultrasound.money), you can enter your cluster's distributed validator public key(s) into their website, to confirm they also see the validator as correctly registered.
+
+You should check that your validator client's logs look healthy, and ensure that you haven't added a `fee-recipient` address that conflicts with what has been selected by your cluster in your cluster-lock file, as that may prevent your validator from producing a signature for the block when the opportunity arises. You should also confirm the same for all of the other peers in your cluster.
+
+Once a proposal has been made, you should look at the `Block Extra Data` field under `Execution Payload` for the block on [Beaconcha.in](https://beaconcha.in/block/18450364), and confirm there is text present; this generally suggests the block came from a builder, and was not a locally constructed block.
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/quickstart-combine.md b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-combine.md
new file mode 100644
index 0000000000..bd6795e1ed
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-combine.md
@@ -0,0 +1,112 @@
+---
+sidebar_position: 8
+description: Combine distributed validator private key shares to recover the validator private key.
+---
+
+# Combine DV private key shares
+
+:::warning
+Reconstituting Distributed Validator private key shares into a standard validator private key is a security risk, and can potentially cause your validator to be slashed.
+
+Only combine private keys as a last resort and do so with extreme caution.
+:::
+
+Combine distributed validator private key shares into an Ethereum validator private key.
+
+## Pre-requisites
+
+- Ensure you have the `.charon` directories of at least a threshold of the cluster's node operators.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Set up the key combination directory tree
+
+Rename each cluster node operator's `.charon` directory differently, to avoid folder name conflicts.
+
+We suggest naming them clearly and distinctly, to avoid confusion.
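+
+For example, a layout like the one below could be assembled as follows (the source paths are illustrative):
+
+```shell
+# Copy each operator's .charon directory into a common ./cluster directory,
+# renaming them node0, node1, node2, ... to keep them separate.
+mkdir -p cluster
+cp -r /path/to/operator0/.charon cluster/node0
+cp -r /path/to/operator1/.charon cluster/node1
+cp -r /path/to/operator2/.charon cluster/node2
+```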
+
+At the end of this process, you should have a tree like this:
+
+```shell
+$ tree ./cluster
+
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+...
+└── nodeN
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+:::warning
+Make sure to never mix the various `.charon` directories with one another.
+
+Doing so can potentially cause the combination process to fail.
+:::
+
+## Step 2. Combine the key shares
+
+Run the following command:
+
+```shell
+# Combine a clusters private keys
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.0.0 combine --cluster-dir /opt/charon/cluster --output-dir /opt/charon/combined
+```
+
+This command will store the combined keys in the `output-dir`, in this case a folder named `combined`.
+
+```shell
+$ tree combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+
+We can verify that the combined keys correspond to the cluster's distributed validators by inspecting the lock file:
+
+```shell
+$ jq .distributed_validators[].distributed_public_key cluster/node0/cluster-lock.json
+"0x822c5310674f4fc4ec595642d0eab73d01c62b588f467da6f98564f292a975a0ac4c3a10f1b3a00ccc166a28093c2dcd"
+"0x8929b4c8af2d2eb222d377cac2aa7be950e71d2b247507d19b5fdec838f0fb045ea8910075f191fd468da4be29690106"
+```
+
+:::info
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/ercs/blob/master/ERCS/erc-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+Ensure your distributed validator cluster is completely shut down before starting a replacement validator or you are likely to be slashed.
+:::
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/quickstart-eigenpod.md b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-eigenpod.md
new file mode 100644
index 0000000000..25551bdb24
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-eigenpod.md
@@ -0,0 +1,53 @@
+---
+sidebar_position: 4
+description: Create an EigenLayer Distributed Validator to enable distributed restaking.
+---
+
+# Create an EigenLayer DV
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+:::warning
+The Obol-SDK is in a beta state and should be used with caution. Ensure you validate all important data.
+:::
+
+This is a walkthrough of creating a distributed validator cluster pointing to an [EigenLayer](https://eigenlayer.xyz/) [EigenPod](https://docs.eigenlayer.xyz/eigenlayer/restaking-guides/restaking-user-guide/native-restaking/create-eigenpod-and-set-withdrawal-credentials/), using the [DV Launchpad](../dvl/intro.md) and other applications.
+
+### Pre-requisites
+
+* The Ethereum addresses or ENS names for the node operators in the cluster. (Currently the DV Launchpad only supports Metamask or equivalent injected web3 browser wallets.)
+* If creating more than one validator, the ability to use the [obol-sdk](quickstart-sdk.md) is required.
+
+### Create a SAFE to own the EigenPod
+
+Deploy a [SAFE](https://app.safe.global/) with the addresses of the node operators as signers. A reasonable signing threshold is the same as a cluster (>2/3rds) but use good judgement if a different threshold or signer set suits your use case. The principal ether for these validators will be returned to this address.
+
+### Create an EigenPod
+
+Select the "Create EigenPod" option on the [EigenLayer App](https://app.eigenlayer.xyz/)'s 'Restake' page, using the created SAFE account via WalletConnect. Note the EigenPod's address.
+
+### Create a Splitter for the block reward
+
+Create a Splitter on [splits.org](https://app.splits.org/), to divide the block reward and MEV amongst the operators. Note the split's address.
+
+:::tip
+To be recognised as a part of Obol's [1% for Decentralisation](https://blog.obol.tech/1-percent-for-decentralisation/) campaign, you must contribute 3% of execution layer rewards by setting [this address](https://etherscan.io/address/0xDe5aE4De36c966747Ea7DF13BD9589642e2B1D0d) as a recipient on your split. Upcoming Obol EigenPods will support contributing 1% of total rewards instead of 3% of only execution rewards.
+:::
+
+### Create the DV cluster invite
+
+With these contracts deployed, you can now create the DV cluster invitation to send to the node operators. This can be done through the DV Launchpad or the Obol SDK.
+
+* Use the "Create a cluster with a group" [flow](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/start/quickstart_group/README.md) on the [DV Launchpad](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/dvl/intro/README.md).
+* Choose a cluster name and invite your operator's addresses.
+* When setting the withdrawal credentials, select "Custom".
+* For "Withdrawal Address", set the EigenPod contract address.
+* For "Fee Recipient", set the Split contract address.
+* Continue the process of creating a cluster normally, share the invitation link with the operators and have them complete the Distributed Key Generation ceremony.
+* If you are creating a cluster with more than one validator, you will need to craft the cluster invitation with the [SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk).
+* Follow the [Create a cluster using the SDK](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/advanced/quickstart-sdk/README.md) guide.
+* For `withdrawal_address`, set the EigenPod contract address.
+* For `fee_recipient_address`, set the Split contract address.
+* Continue the process of creating the cluster as per the guide, share the invitation link with the operators and have them complete the Distributed Key Generation ceremony.
+
+### Deposit and restake your Distributed Validator
+
+Once you have completed the DKG ceremony, you can continue the flow on the EigenLayer app to activate these validators and restake them. Consult the EigenLayer [documentation](https://docs.eigenlayer.xyz/eigenlayer/restaking-guides/restaking-user-guide/native-restaking/create-eigenpod-and-set-withdrawal-credentials/enable-restaking) to continue the process.
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/quickstart-sdk.md b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-sdk.md
new file mode 100644
index 0000000000..c1cbb317cc
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-sdk.md
@@ -0,0 +1,133 @@
+---
+sidebar_position: 3
+description: Create a DV cluster using the Obol Typescript SDK
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Create a DV using the SDK
+
+:::warning
+The Obol-SDK is in a beta state and should be used with caution on testnets only.
+:::
+
+This is a walkthrough of using the [Obol-SDK](https://www.npmjs.com/package/@obolnetwork/obol-sdk) to propose a four-node distributed validator cluster for creation using the [DV Launchpad](../dvl/intro.md).
+
+## Pre-requisites
+
+- You have [node.js](https://nodejs.org/en) installed.
+
+## Install the package
+
+Install the Obol-SDK package into your development environment
+
+
+
+
+ npm install --save @obolnetwork/obol-sdk
+
+
+
+
+ yarn add @obolnetwork/obol-sdk
+
+
+
+
+## Instantiate the client
+
+The first thing you need to do is create an instance of the Obol SDK client. The client takes two constructor parameters:
+
+- The `chainID` for the chain you intend to use.
+- An ethers.js [signer](https://docs.ethers.org/v6/api/providers/#Signer-signTypedData) object.
+
+```ts
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// Create a dummy ethers signer object with a throwaway private key
+const mnemonic = ethers.Wallet.createRandom().mnemonic?.phrase || "";
+const privateKey = ethers.Wallet.fromPhrase(mnemonic).privateKey;
+const wallet = new ethers.Wallet(privateKey);
+const signer = wallet.connect(null);
+
+// Instantiate the Obol Client for goerli
+const obol = new Client({ chainId: 5 }, signer);
+```
+
+## Propose the cluster
+
+List the Ethereum addresses of participating operators, along with withdrawal and fee recipient address data for each validator you intend for the operators to create.
+
+```ts
+// A config hash is a deterministic hash of the proposed DV cluster configuration
+const configHash = await obol.createClusterDefinition({
+ name: "SDK Demo Cluster",
+ operators: [
+ { address: "0xC35CfCd67b9C27345a54EDEcC1033F2284148c81" },
+ { address: "0x33807D6F1DCe44b9C599fFE03640762A6F08C496" },
+ { address: "0xc6e76F72Ea672FAe05C357157CfC37720F0aF26f" },
+ { address: "0x86B8145c98e5BD25BA722645b15eD65f024a87EC" },
+ ],
+ validators: [
+ {
+ fee_recipient_address: "0x3CD4958e76C317abcEA19faDd076348808424F99",
+ withdrawal_address: "0xE0C5ceA4D3869F156717C66E188Ae81C80914a6e",
+ },
+ ],
+});
+
+console.log(
+ `Direct the operators to https://goerli.launchpad.obol.tech/dv?configHash=${configHash} to complete the key generation process`
+);
+```
+
+## Invite the Operators to complete the DKG
+
+Once the Obol-API returns a `configHash` string from the `createClusterDefinition` method, you can use this identifier to invite the operators to the [Launchpad](../dvl/intro.md) to complete the process:
+
+1. Operators navigate to `https://<network>.launchpad.obol.tech/dv?configHash=<config_hash>` and complete the [run a DV with others](../start/quickstart_group.md) flow.
+1. Once the DKG is complete, and operators are using the `--publish` flag, the created cluster details will be posted to the Obol API.
+1. The creator will be able to retrieve this data with `obol.getClusterLock(configHash)`, to use for activating the newly created validator.
+
+## Retrieve the created Distributed Validators using the SDK
+
+Once the DKG is complete, the proposer of the cluster can retrieve key data such as the validator public keys and their associated deposit data messages.
+
+```js
+const clusterLock = await obol.getClusterLock(configHash);
+```
+
+Reference lock files can be found [here](https://github.com/ObolNetwork/charon/tree/main/cluster/testdata).
+
+## Activate the DVs using the deposit contract
+
+In order to activate the distributed validators, the cluster operator can retrieve the validators' associated deposit data from the lock file and use it to craft transactions to the `deposit()` method on the deposit contract.
+
+```js
+const validatorDepositData =
+ clusterLock.distributed_validators[validatorIndex].deposit_data;
+
+const depositContract = new ethers.Contract(
+ DEPOSIT_CONTRACT_ADDRESS, // 0x00000000219ab540356cBB839Cbe05303d7705Fa for Mainnet, 0xff50ed3d0ec03aC01D4C79aAd74928BFF48a7b2b for Goerli
+ depositContractABI, // https://etherscan.io/address/0x00000000219ab540356cBB839Cbe05303d7705Fa#code for Mainnet, and replace the address for Goerli
+ signer
+);
+
+const TX_VALUE = ethers.parseEther("32");
+
+const tx = await depositContract.deposit(
+ validatorDepositData.pubkey,
+ validatorDepositData.withdrawal_credentials,
+ validatorDepositData.signature,
+ validatorDepositData.deposit_data_root,
+ { value: TX_VALUE }
+);
+
+const txResult = await tx.wait();
+```
+
+## Usage Examples
+
+Examples of how our SDK can be used are found [here](https://github.com/ObolNetwork/obol-sdk-examples).
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/quickstart-split.md b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-split.md
new file mode 100644
index 0000000000..4d14daa79e
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/quickstart-split.md
@@ -0,0 +1,98 @@
+---
+sidebar_position: 7
+description: Split existing validator keys
+---
+
+# Split validator private keys
+
+:::warning
+This process should only be used if you want to split an *existing validator private key* into multiple private key shares for use in a Distributed Validator Cluster. If your existing validator is not properly shut down before the Distributed Validator starts, your validator may be slashed.
+
+If you are starting a new validator, you should follow a [quickstart guide](../start/quickstart_overview.md) instead.
+
+If you use MEV-Boost, make sure your MEV-Boost service is turned off while you split the keys, otherwise you may hit [this issue](https://github.com/ObolNetwork/charon/issues/2770).
+:::
+
+Split an existing Ethereum validator key into multiple key shares for use in an [Obol Distributed Validator Cluster](../int/key-concepts.md#distributed-validator-cluster).
+
+## Pre-requisites
+
+- Ensure you have the existing validator keystores (the ones to split) and passwords.
+- Ensure you have [docker](https://docs.docker.com/engine/install/) installed.
+- Make sure `docker` is running before executing the commands below.
+
+## Step 1. Clone the charon repo and copy existing keystore files
+
+Clone the [Charon](https://github.com/ObolNetwork/charon) repo.
+
+```shell
+ # Clone the repo
+ git clone https://github.com/ObolNetwork/charon.git
+
+ # Change directory
+ cd charon/
+
+ # Create a folder within this checked out repo
+ mkdir split_keys
+```
+
+Copy the existing validator `keystore.json` files into this new folder. Alongside each keystore, add a file with a matching filename but a `.txt` extension, containing that keystore's password (e.g.: `keystore-0.json`, `keystore-0.txt`).
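+
+For example, assuming your existing keystores and their password files live in a hypothetical `~/validator_keys` directory, copying them over might look like:
+
+```shell
+# Adjust the source path to wherever your existing keystores and password files are stored
+cp ~/validator_keys/keystore-*.json split_keys/
+cp ~/validator_keys/keystore-*.txt split_keys/
+```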
+
+At the end of this process, you should have a tree like this:
+
+```shell
+├── split_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ ├── keystore-1.txt
+│ ...
+│ ├── keystore-N.json
+│ ├── keystore-N.txt
+```
+
+## Step 2. Split the keys using the charon docker command
+
+Run the following docker command to split the keys:
+
+```shell
+CHARON_VERSION= # E.g. v1.0.0
+CLUSTER_NAME= # The name of the cluster you want to create.
+WITHDRAWAL_ADDRESS= # The address you want to use for withdrawals (this is just for accuracy in your lock file, you can't change a withdrawal address for a validator that has already been deposited)
+FEE_RECIPIENT_ADDRESS= # The address you want to use for block reward and MEV payments.
+NODES= # The number of nodes in the cluster.
+
+docker run --rm -v $(pwd):/opt/charon obolnetwork/charon:${CHARON_VERSION} create cluster \
+ --name="${CLUSTER_NAME}" \
+ --withdrawal-addresses="${WITHDRAWAL_ADDRESS}" \
+ --fee-recipient-addresses="${FEE_RECIPIENT_ADDRESS}" \
+ --split-existing-keys \
+ --split-keys-dir=/opt/charon/split_keys \
+ --nodes ${NODES} \
+ --network mainnet
+```
+
+The above command will create a `validator_keys` folder along with a `cluster-lock.json` file in `./cluster/` for each node.
+
+Command output:
+
+```shell
+***************** WARNING: Splitting keys **********************
+ Please make sure any existing validator has been shut down for
+ at least 2 finalised epochs before starting the Charon cluster,
+ otherwise slashing could occur.
+****************************************************************
+
+Created Charon cluster:
+ --split-existing-keys=true
+
+./cluster/
+├─ node[0-*]/ # Directory for each node
+│ ├─ charon-enr-private-key # Charon networking private key for node authentication
+│ ├─ cluster-lock.json # Cluster lock defines the cluster lock file which is signed by all nodes
+│ ├─ validator_keys # Validator keystores and password
+│ │ ├─ keystore-*.json # Validator private share key for duty signing
+│ │ ├─ keystore-*.txt # Keystore password files for keystore-*.json
+```
+
+These split keys can now be used to start a Charon cluster.
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/self-relay.md b/docs/versioned_docs/version-v1.0.0/advanced/self-relay.md
new file mode 100644
index 0000000000..cf54a173b5
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/self-relay.md
@@ -0,0 +1,38 @@
+---
+sidebar_position: 9
+description: Self-host a relay
+---
+
+# Self-Host a Relay
+
+If you are experiencing connectivity issues with the Obol hosted relays, or you want to improve your cluster's latency and decentralization, you can opt to host your own relay on a separate, open, and static internet port.
+
+```shell
+# Figure out your public IP
+curl v4.ident.me
+
+# Clone the repo and cd into it.
+git clone https://github.com/ObolNetwork/charon-distributed-validator-node.git
+
+cd charon-distributed-validator-node
+
+# Replace 'replace.with.public.ip.or.hostname' in relay/docker-compose.yml with your public IPv4 or DNS hostname
+
+nano relay/docker-compose.yml
+
+docker compose -f relay/docker-compose.yml up
+```
+
+Test whether the relay is publicly accessible. This should return an ENR:
+`curl http://replace.with.public.ip.or.hostname:3640/enr`
+
+Ensure the ENR returned by the relay contains the correct public IP and port by decoding it with [ENR viewer](https://enr-viewer.com/).
+
+Configure **ALL** charon nodes in your cluster to use this relay:
+
+- Either by adding a flag: `--p2p-relays=http://replace.with.public.ip.or.hostname:3640/enr`
+- Or by setting the environment variable: `CHARON_P2P_RELAYS=http://replace.with.public.ip.or.hostname:3640/enr`
+
+Note that a local `relay/.charon/charon-enr-private-key` file will be created next to `relay/docker-compose.yml` to ensure a persisted relay ENR across restarts.
+
+A list of publicly available relays that can be used is maintained [here](../faq/risks.md).
diff --git a/docs/versioned_docs/version-v1.0.0/advanced/test-command.md b/docs/versioned_docs/version-v1.0.0/advanced/test-command.md
new file mode 100644
index 0000000000..9dc76bfd5a
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/advanced/test-command.md
@@ -0,0 +1,68 @@
+---
+sidebar_position: 5
+description: Test the performance of a candidate Distributed Validator Cluster setup.
+---
+
+# test-command
+
+import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
+
+## Test a Cluster
+
+:::warning The `charon alpha test` command is in an alpha state and is subject to change until it is made available as `charon test` in a future version. :::
+
+The Charon test commands are designed to help you evaluate the performance and readiness of your candidate cluster. They allow you to test your connection to other Charon peers, the performance of your beacon node(s), and the readiness of your validator client. They print a performance report to standard output (which can be omitted with the `--quiet` flag) and write a machine-readable TOML version of the report if the `--output-toml` flag is set.
+
+### Test your connection to peers
+
+Run tests towards other Charon peers to evaluate the effectiveness of a potential cluster setup. The command sets up a libp2p node, similarly to what Charon normally does. This test command **has to be running simultaneously with the other peers**. After the node is up, it waits for other peers to get their nodes up and running, retrying the connection every 3 seconds. The libp2p node connects to relays (configurable with the `p2p-relays` flag) and to other libp2p nodes via TCP. Other peer nodes are discoverable by using their ENRs. Note that for a peer to be successfully discovered, it needs to be connected to the same relay. After completion of the test suite, the libp2p node stays alive (duration configurable with the `keep-alive` flag) so that other peers can continue testing against it. The node can also be stopped forcefully.
+
+To be able to establish a direct connection, you have to ensure that:
+
+* Your machine, or at least a specific port on it, is publicly accessible on the internet.
+* You add the `p2p-tcp-address` flag (e.g.: `127.0.0.1:9001`) and the port specified in it is free and publicly accessible.
+* You add the `p2p-external-ip` flag (e.g.: `8.8.8.8`) and specify your public IP.
+
+If you and the other peers satisfy all of these points, you should be able to establish direct TCP connections between each other. Note that a relay is still required, as it is used for peer discovery.
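+
+For illustration, a peers test that also attempts direct connections might be invoked as follows; the ENRs, listening port, and external IP are placeholders that must match your own peers and a publicly reachable port on your machine:
+
+```shell
+charon alpha test peers \
+  --enrs="<peer-enr-1>,<peer-enr-2>" \
+  --p2p-tcp-address="0.0.0.0:9001" \
+  --p2p-external-ip="<your-public-ip>"
+```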
+
+#### Pre-requisites
+
+* [Create an ENR](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/charon/charon-cli-reference/README.md#creating-an-enr-for-charon).
+* Share your ENR with the other peers which will test with you.
+* Obtain the ENRs of the other peers with which you will test.
+
+#### Run
+
+```shell
+charon alpha test peers \
+ --enrs="enr:-HW4QNDXi9MzdH9Af65g20jDfelAJ0kJhclitkYYgFziYHXhRFF6JyB_CnVnimB7VxKBGBSkHbmy-Tu8BJq8JQkfptiAgmlkgnY0iXNlY3AyNTZrMaEDBVt5pk6x0A2fjth25pjLOEE9DpqCG-BCYyvutY04TZ,enr:-HW4QO2vefLueTBEUGly5hkcpL7NWdMKWx7Nuy9f7z6XZInCbFAc0IZj6bsnmj-Wi4ElS6jNa0Mge5Rkc2WGTVemas2AgmlkgnY0iXNlY3AyNTZrMaECR9SmYQ_1HRgJmNxvh_ER2Sxx78HgKKgKaOkCROYwaDY"
+```
+
+#### Run with Docker
+
+```shell
+docker run -v /Users/obol/charon/.charon:/opt/charon/.charon obolnetwork/charon:v1.0.0 alpha test peers \
+ --enrs="enr:-HW4QNDXi9MzdH9Af65g20jDfelAJ0kJhclitkYYgFziYHXhRFF6JyB_CnVnimB7VxKBGBSkHbmy-Tu8BJq8JQkfptiAgmlkgnY0iXNlY3AyNTZrMaEDBVt5pk6x0A2fjth25pjLOEE9DpqCG-BCYyvutY04TZs,enr:-HW4QO2vefLueTBEUGly5hkcpL7NWdMKWx7Nuy9f7z6XZInCbFAc0IZj6bsnmj-Wi4ElS6jNa0Mge5Rkc2WGTVemas2AgmlkgnY0iXNlY3AyNTZrMaECR9SmYQ_1HRgJmNxvh_ER2Sxx78HgKKgKaOkCROYwaDY"
+```
+
+### Test your beacon node
+
+Run tests towards your beacon node(s), to evaluate its effectiveness for a Distributed Validator cluster.
+
+#### Pre-requisites
+
+* Running beacon node(s) towards which tests will be executed.
+
+#### Run
+
+```shell
+charon alpha test beacon \
+ --endpoints="https://ethereum-holesky-beacon-api.publicnode.com,https://ethereum-sepolia-beacon-api.publicnode.com"
+```
+
+#### Run with Docker
+
+```shell
+docker run obolnetwork/charon:v1.0.0 alpha test beacon \
+ --endpoints="https://ethereum-holesky-beacon-api.publicnode.com,https://ethereum-sepolia-beacon-api.publicnode.com"
+```
diff --git a/docs/versioned_docs/version-v1.0.0/cf/README.md b/docs/versioned_docs/version-v1.0.0/cf/README.md
new file mode 100644
index 0000000000..5e4947f1b9
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/cf/README.md
@@ -0,0 +1,2 @@
+# cf
+
diff --git a/docs/versioned_docs/version-v1.0.0/cf/bug-report.md b/docs/versioned_docs/version-v1.0.0/cf/bug-report.md
new file mode 100644
index 0000000000..00afb3a516
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/cf/bug-report.md
@@ -0,0 +1,55 @@
+# Filing a bug report
+
+Bug reports are critical to the rapid development of Obol. In order to make the process quick and efficient for all parties, it is best to follow some common reporting etiquette when filing a report, to avoid duplicate issues or miscommunications.
+
+## Checking if your issue exists
+
+Duplicate tickets are a hindrance to the development process and, as such, it is crucial to first check through Charon's existing issues to see if what you are experiencing has already been indexed.
+
+To do so, head over to the [issue page](https://github.com/ObolNetwork/charon/issues) and enter some related keywords into the search bar. This may include a sample from the output or specific components it affects.
+
+If searches have shown the issue in question has not been reported yet, feel free to open up a new issue ticket.
+
+## Writing quality bug reports
+
+A good bug report is structured to help the developers and contributors visualize the issue in the clearest way possible. It's important to be concise and use comprehensible language, while also providing all relevant information you have on hand. Use short and accurate sentences without any unnecessary additions, and include all existing specifications with a list of steps to reproduce the expected problem. Issues that cannot be reproduced **cannot be solved**.
+
+If you are experiencing multiple issues, it is best to open each as a separate ticket. This allows them to be closed individually as they are resolved.
+
+An original bug report will very likely be preserved and used as a record and sounding board for users that have similar experiences in the future. Because of this, it is a great service to the community to ensure that reports meet these standards and follow the template closely.
+
+## The bug report template
+
+Below is the standard bug report template used by all of Obol's official repositories.
+
+```shell
+
+
+## Expected Behavior
+
+
+## Current Behavior
+
+
+## Steps to Reproduce
+
+1.
+2.
+3.
+4.
+5.
+
+## Detailed Description
+
+
+## Specifications
+
+Operating system:
+Version(s) used:
+
+## Possible Solution
+
+
+## Further Information
+
+```
+
+#### Headers
+
+Headers follow sentence case, with only the first word and proper nouns capitalized:
+
+```markdown
+## What is Charon?
+
+## Charon explained
+```
+
+#### Bold text
+
+Double asterisks `**` are used to define **boldface** text. Use bold text when the reader must interact with something displayed as text: buttons, hyperlinks, images with text in them, window names, and icons.
+
+```markdown
+In the **Login** window, enter your email into the **Username** field and click **Sign in**.
+```
+
+#### Italics
+
+Underscores `_` are used to define _italic_ text. Style the names of things in italics, except input fields or buttons:
+
+```markdown
+Here are some American things:
+
+- The _Spirit of St Louis_.
+- The _White House_.
+- The United States _Declaration of Independence_.
+
+```
+
+Quotes or sections of quoted text are styled in italics and surrounded by double quotes `"`:
+
+```markdown
+In the wise words of Winnie the Pooh _"People say nothing is impossible, but I do nothing every day."_
+```
+
+#### Code blocks
+
+Tag code blocks with the syntax of the code they are presenting:
+
+````markdown
+ ```javascript
+ console.log(error);
+ ```
+````
+
+#### List items
+
+All list items follow sentence structure. Only _names_ and _places_ are capitalized, along with the first letter of the list item. All other letters are lowercase:
+
+1. Never leave Nottingham without a sandwich.
+2. Brian May played guitar for Queen.
+3. Oranges.
+
+List items end with a period `.`, or a colon `:` if the list item has a sub-list:
+
+1. Charles Dickens novels:
+ 1. Oliver Twist.
+    2. Nicholas Nickleby.
+ 3. David Copperfield.
+2. J.R.R. Tolkien books:
+ 1. The Hobbit.
+ 2. Silmarillion.
+ 3. Letters from Father Christmas.
+
+##### Unordered lists
+
+Use the dash character `-` for un-numbered list items:
+
+```markdown
+- An apple.
+- Three oranges.
+- As many lemons as you can carry.
+- Half a lime.
+```
+
+#### Special characters
+
+Whenever possible, spell out the name of the special character, followed by an example of the character itself within a code block.
+
+```markdown
+Use the dollar sign `$` to enter debug-mode.
+```
+
+#### Keyboard shortcuts
+
+When instructing the reader to use a keyboard shortcut, surround individual keys in code tags:
+
+```shell
+Press `ctrl` + `c` to copy the highlighted text.
+```
+
+The plus symbol `+` stays outside of the code tags.
+
+### Images
+
+The following rules and guidelines define how to use and store images.
+
+#### Storage location
+
+All images must be placed in the `/static/img` folder. For multiple images attributed to a single topic, a new folder within `/img/` may be needed.
+
+#### File names
+
+All file names are lower-case with dashes `-` between words, including image files:
+
+```text
+concepts/
+├── content-addressed-data.md
+├── images
+│ └── proof-of-spacetime
+│ └── post-diagram.png
+└── proof-of-replication.md
+└── proof-of-spacetime.md
+```
+
+_The framework and some information for this was forked from the original found on the [Filecoin documentation portal](https://docs.filecoin.io)_
diff --git a/docs/versioned_docs/version-v1.0.0/cf/feedback.md b/docs/versioned_docs/version-v1.0.0/cf/feedback.md
new file mode 100644
index 0000000000..01e4657078
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/cf/feedback.md
@@ -0,0 +1,6 @@
+# Feedback
+
+If you have followed our quickstart guides, whether you succeeded or failed at running the distributed validator, we would like to hear your feedback on the process and where you encountered difficulties.
+
+- Please let us know by joining and posting on our [Discord](https://discord.gg/n6ebKsX46w).
+- Also, feel free to add issues to our [GitHub repos](https://github.com/ObolNetwork).
diff --git a/docs/versioned_docs/version-v1.0.0/charon/README.md b/docs/versioned_docs/version-v1.0.0/charon/README.md
new file mode 100644
index 0000000000..44b46f1797
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/charon/README.md
@@ -0,0 +1,2 @@
+# charon
+
diff --git a/docs/versioned_docs/version-v1.0.0/charon/charon-cli-reference.md b/docs/versioned_docs/version-v1.0.0/charon/charon-cli-reference.md
new file mode 100644
index 0000000000..03913d9b0b
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/charon/charon-cli-reference.md
@@ -0,0 +1,505 @@
+---
+sidebar_position: 5
+description: >-
+ A go-based middleware client for taking part in Distributed Validator
+ clusters.
+---
+
+# CLI reference
+
+The following is a reference for Charon version [`v1.0.0`](https://github.com/ObolNetwork/charon/releases/tag/v1.0.0). Find the latest release on [our Github](https://github.com/ObolNetwork/charon/releases).
+
+The following are the top-level commands available to use.
+
+```markdown
+charon --help
+Charon enables the operation of Ethereum validators in a fault tolerant manner by splitting the validating keys across a group of trusted parties using threshold cryptography.
+
+Usage:
+ charon [command]
+
+Available Commands:
+ alpha Alpha subcommands provide early access to in-development features
+ combine Combine the private key shares of a distributed validator cluster into a set of standard validator private keys
+ completion Generate the autocompletion script for the specified shell
+ create Create artifacts for a distributed validator cluster
+ dkg Participate in a Distributed Key Generation ceremony
+ enr Print the ENR that identifies this client
+ exit Exit a distributed validator.
+ help Help about any command
+ relay Start a libp2p relay server
+ run Run the charon middleware client
+ version Print version and exit
+
+Flags:
+ -h, --help Help for charon
+
+Use "charon [command] --help" for more information about a command.
+```
+
+## The `create` command
+
+The `create` command handles the creation of artifacts needed by Charon to operate.
+
+```markdown
+charon create --help
+Create artifacts for a distributed validator cluster. These commands can be used to facilitate the creation of a distributed validator cluster between a group of operators by performing a distributed key generation ceremony, or they can be used to create a local cluster for single operator use cases.
+
+Usage:
+ charon create [command]
+
+Available Commands:
+ cluster Create private keys and configuration files needed to run a distributed validator cluster locally
+ dkg Create the configuration for a new Distributed Key Generation ceremony using charon dkg
+ enr Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Flags:
+ -h, --help Help for create
+
+Use "charon create [command] --help" for more information about a command.
+```
+
+### Creating an ENR for Charon
+
+An `enr` is an Ethereum Node Record. It is used to identify this Charon client to its counterparty Charon clients across the internet.
+
+```markdown
+charon create enr --help
+Create an Ethereum Node Record (ENR) private key to identify this charon client
+
+Usage:
+ charon create enr [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data. (default ".charon")
+ -h, --help Help for enr
+```
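+
+As an illustration, creating an ENR with Docker might look like the following; adjust the mounted path and image tag to your own setup:
+
+```shell
+# Writes .charon/charon-enr-private-key into the mounted directory and prints the ENR
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.0.0 create enr
+```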
+
+### Create a full cluster locally
+
+The `charon create cluster` command creates a set of distributed validators locally; including the private keys, a `cluster-lock.json` file, and deposit data. This command should only be used for solo-operation of distributed validators. To run a distributed validator cluster with a group of operators, it is preferable to create these artifacts using the [DV Launchpad](../dvl/intro.md) and the `charon dkg` command. That way, no single operator custodies all of the private keys to a distributed validator.
+
+:::warning This command produces new distributed validator private keys or handles and splits pre-existing traditional validator private keys, please use caution and keep these private keys securely backed up and secret. :::
+
+```markdown
+charon create cluster --help
+Creates a local charon cluster configuration including validator keys, charon p2p keys, cluster-lock.json and deposit-data.json file(s). See flags for supported features.
+
+Usage:
+ charon create cluster [flags]
+
+Flags:
+ --cluster-dir string The target folder to create the cluster in. (default "./")
+ --definition-file string Optional path to a cluster definition file or an HTTP URL. This overrides all other configuration flags.
+ --deposit-amounts ints List of partial deposit amounts (integers) in ETH. Values must sum up to exactly 32ETH.
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for cluster
+ --insecure-keys Generates insecure keystore files. This should never be used. It is not supported on mainnet.
+ --keymanager-addresses strings Comma separated list of keymanager URLs to import validator key shares to. Note that multiple addresses are required, one for each node in the cluster, with node0's keyshares being imported to the first address, node1's keyshares to the second, and so on.
+ --keymanager-auth-tokens strings Authentication bearer tokens to interact with the keymanager URLs. Don't include the "Bearer" symbol, only include the api-token.
+ --name string The cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky.
+ --nodes int The number of charon nodes in the cluster. Minimum is 3.
+ --num-validators int The number of distributed validators needed in the cluster.
+ --publish Publish lock file to obol-api.
+ --publish-address string The URL to publish the lock file to. (default "https://api.obol.tech")
+ --split-existing-keys Split an existing validator's private key into a set of distributed validator private key shares. Does not re-create deposit data for this key.
+ --split-keys-dir string Directory containing keys to split. Expects keys in keystore-*.json and passwords in keystore-*.txt. Requires --split-existing-keys.
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version of the custom test network (in hex).
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+ --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
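+
+As a hedged example only (the cluster name, node count, network, and addresses below are placeholders you must replace), a solo operator might create a small local cluster like this:
+
+```shell
+docker run --rm -v "$(pwd):/opt/charon" obolnetwork/charon:v1.0.0 create cluster \
+  --name="my-local-cluster" \
+  --nodes 4 \
+  --num-validators 1 \
+  --network holesky \
+  --withdrawal-addresses="<your-withdrawal-address>" \
+  --fee-recipient-addresses="<your-fee-recipient-address>"
+```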
+
+### Creating the configuration for a DKG Ceremony
+
+The `charon create dkg` command creates a `cluster-definition.json` file used for the `charon dkg` command.
+
+```markdown
+charon create dkg --help
+Create a cluster definition file that will be used by all participants of a DKG.
+
+Usage:
+ charon create dkg [flags]
+
+Flags:
+ --deposit-amounts ints List of partial deposit amounts (integers) in ETH. Values must sum up to exactly 32ETH.
+ --dkg-algorithm string DKG algorithm to use; default, frost (default "default")
+ --fee-recipient-addresses strings Comma separated list of Ethereum addresses of the fee recipient for each validator. Either provide a single fee recipient address or fee recipient addresses for each validator.
+ -h, --help Help for dkg
+ --name string Optional cosmetic cluster name
+ --network string Ethereum network to create validators for. Options: mainnet, goerli, gnosis, sepolia, holesky. (default "mainnet")
+ --num-validators int The number of distributed validators the cluster will manage (32ETH staked for each). (default 1)
+ --operator-enrs strings [REQUIRED] Comma-separated list of each operator's Charon ENR address.
+ --output-dir string The folder to write the output cluster-definition.json file to. (default ".charon")
+ -t, --threshold int Optional override of threshold required for signature reconstruction. Defaults to ceil(n*2/3) if zero. Warning, non-default values decrease security.
+ --withdrawal-addresses strings Comma separated list of Ethereum addresses to receive the returned stake and accrued rewards for each validator. Either provide a single withdrawal address or withdrawal addresses for each validator.
+```
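+
+For illustration, a cluster creator might generate a definition file along these lines; every value shown is a placeholder to be replaced with your own operators' ENRs and addresses:
+
+```shell
+charon create dkg \
+  --name="my-dkg-cluster" \
+  --network holesky \
+  --num-validators 1 \
+  --operator-enrs="<enr-node-0>,<enr-node-1>,<enr-node-2>,<enr-node-3>" \
+  --withdrawal-addresses="<your-withdrawal-address>" \
+  --fee-recipient-addresses="<your-fee-recipient-address>"
+```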
+
+## The `dkg` command
+
+### Performing a DKG Ceremony
+
+The `charon dkg` command takes a `cluster-definition.json` file that instructs Charon on the terms of a new distributed validator cluster to be created. Charon establishes communication with the other nodes identified in the file, performs a distributed key generation ceremony to create the required threshold private keys, and signs deposit data for each new distributed validator. The command outputs the `cluster-lock.json` file and key shares for each Distributed Validator created.
+
+```markdown
+charon dkg --help
+Participate in a distributed key generation ceremony for a specific cluster definition that creates
+distributed validator key shares and a final cluster lock configuration. Note that all other cluster operators should run
+this command at the same time.
+
+Usage:
+ charon dkg [flags]
+
+Flags:
+ --data-dir string The directory where charon will store all its internal data. (default ".charon")
+ --definition-file string The path to the cluster definition file or an HTTP URL. (default ".charon/cluster-definition.json")
+ -h, --help Help for dkg
+ --keymanager-address string The keymanager URL to import validator keyshares.
+ --keymanager-auth-token string Authentication bearer token to interact with keymanager API. Don't include the "Bearer" symbol, only include the api-token.
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech,https://1.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --publish Publish the created cluster to a remote API.
+ --publish-address string The URL to publish the cluster to. (default "https://api.obol.tech")
+ --publish-timeout duration Timeout for publishing a cluster, consider increasing if the cluster contains more than 200 validators. (default 30s)
+ --shutdown-delay duration Graceful shutdown delay. (default 1s)
+ --timeout duration Timeout for the DKG process, should be increased if DKG times out. (default 1m0s)
+```
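+
+A minimal sketch of running the ceremony, assuming the definition file sits in the default `.charon` directory and that you want to publish the resulting lock file to the Obol API:
+
+```shell
+charon dkg --definition-file=".charon/cluster-definition.json" --publish
+```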
+
+## The `run` command
+
+### Run the Charon middleware
+
+The `run` command accepts a `cluster-lock.json` file that was created either via the `charon create cluster` command or via `charon dkg`. This lock file outlines the nodes in the cluster and the distributed validators they operate on behalf of.
+
+```markdown
+charon run --help
+Starts the long-running Charon middleware process to perform distributed validator duties.
+
+Usage:
+ charon run [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs.
+ --builder-api Enables the builder api. Will only produce builder blocks. Builder API must also be enabled on the validator client. Beacon node must be connected to a builder-relay to access the builder network.
+ --debug-address string Listening address (ip and port) for the pprof and QBFT debug API. It is not enabled by default.
+ --feature-set string Minimum feature set to enable by default: alpha, beta, or stable. Warning: modify at own risk. (default "stable")
+ --feature-set-disable strings Comma-separated list of features to disable, overriding the default minimum feature set.
+ --feature-set-enable strings Comma-separated list of features to enable, overriding the default minimum feature set.
+ -h, --help Help for run
+ --jaeger-address string Listening address for jaeger tracing.
+ --jaeger-service string Service name used for jaeger tracing. (default "charon")
+ --lock-file string The path to the cluster lock file defining the distributed validator cluster. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --manifest-file string The path to the cluster manifest file. If both cluster manifest and cluster lock files are provided, the cluster manifest file takes precedence. (default ".charon/cluster-manifest.pb")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus). (default "127.0.0.1:3620")
+ --no-verify Disables cluster definition and lock file verification.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech,https://1.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --private-key-file-lock Enables private key locking to prevent multiple instances using the same key.
+ --simnet-beacon-mock Enables an internal mock beacon node for running a simnet.
+ --simnet-beacon-mock-fuzz Configures simnet beaconmock to return fuzzed responses.
+ --simnet-slot-duration duration Configures slot duration in simnet beacon mock. (default 1s)
+ --simnet-validator-keys-dir string The directory containing the simnet validator key shares. (default ".charon/validator_keys")
+ --simnet-validator-mock Enables an internal mock validator client when running a simnet. Requires simnet-beacon-mock.
+ --synthetic-block-proposals Enables additional synthetic block proposal duties. Used for testing of rare duties.
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version in hex of the custom test network.
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+ --validator-api-address string Listening address (ip and port) for validator-facing traffic proxying the beacon-node API. (default "127.0.0.1:3600")
+```
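+
+A minimal sketch of starting the middleware, assuming the default `.charon` directory and a reachable beacon node (the endpoint below is a placeholder):
+
+```shell
+charon run --beacon-node-endpoints="http://my-beacon-node:5052"
+```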
+
+## The `exit` command
+
+A running Charon client will [aggregate and broadcast](../start/quickstart-exit.md) signed exit messages it receives from its validator client immediately. These `exit` commands are instead used to _pre-sign_ exit messages for an active distributed validator, to save them to disk, or to broadcast them once enough of the cluster's operators have submitted their partial exit signatures. Fully signed exit messages give a user or protocol a guarantee that they can exit an active validator at any point in the future without the further assistance of the cluster's operators. In the future, [execution-layer initiated exits](https://eips.ethereum.org/EIPS/eip-7002) will provide an even stronger guarantee that a validator can be exited by the withdrawal address it belongs to.
+
+```markdown
+charon exit --help
+Sign and broadcast distributed validator exit messages using a remote API.
+
+Usage:
+ charon exit [command]
+
+Available Commands:
+ active-validator-list List all active validators
+ broadcast Submit partial exit message for a distributed validator
+ fetch Fetch a signed exit message from the remote API
+ sign Sign partial exit message for a distributed validator
+
+Flags:
+ -h, --help Help for exit
+
+Use "charon exit [command] --help" for more information about a command.
+```
+
+### Pre-sign exit messages for active validators
+
+:::warning This command requires Charon to access the distributed validator's private keys, please use caution and keep these private keys securely backed up and secret.
+
+The default `publish-address` for this command sends signed exit messages to Obol's [API](https://github.com/ObolNetwork/obol-docs/blob/main/api/README.md) for aggregation and distribution. Exit signatures are stored in line with Obol's [terms and conditions](https://obol.tech/terms.pdf). :::
+
+This command submits partial exit signatures to the remote API for aggregation. The required flags are `--beacon-node-endpoints` and the `--validator-public-key` of the validator you wish to exit. An exit message can only be signed for a validator that is fully deposited and assigned a validator index.
+
+```markdown
+charon exit sign --help
+Sign a partial exit message for a distributed validator and submit it to a remote API for aggregation.
+
+Usage:
+ charon exit sign [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs. [REQUIRED]
+ --beacon-node-timeout duration Timeout for beacon node HTTP calls. (default 30s)
+ --exit-epoch uint Exit epoch at which the validator will exit, must be the same across all the partial exits. (default 162304)
+ -h, --help Help for sign
+ --lock-file string The path to the cluster lock file defining the distributed validator cluster. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --publish-address string The URL of the remote API. (default "https://api.obol.tech")
+ --publish-timeout duration Timeout for publishing a signed exit to the publish-address API. (default 30s)
+ --validator-index uint Validator index of the validator to exit, the associated public key must be present in the cluster lock manifest. If --validator-pubkey is also provided, validator liveliness won't be checked on the beacon chain.
+ --validator-keys-dir string Path to the directory containing the validator private key share files and passwords. (default ".charon/validator_keys")
+ --validator-public-key string Public key of the validator to exit, must be present in the cluster lock manifest. If --validator-index is also provided, validator liveliness won't be checked on the beacon chain.
+```
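+
+A hedged example of submitting a partial exit signature; the beacon node endpoint and validator public key below are placeholders:
+
+```shell
+charon exit sign \
+  --beacon-node-endpoints="http://my-beacon-node:5052" \
+  --validator-public-key="<validator-pubkey>" \
+  --exit-epoch=162304
+```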
+
+### Download fully signed exit messages for cold storage
+
+Once enough operators have submitted their partial signatures for an active validator, you can use the `charon exit fetch` command to download the complete exit message to a file for safe keeping. This file can be given to a delegator who wants a guarantee that they can exit the distributed validator if need be.
+
+```markdown
+charon exit fetch --help
+Fetches a fully signed exit message for a given validator from the remote API and writes it to disk.
+
+Usage:
+ charon exit fetch [flags]
+
+Flags:
+ --fetched-exit-path string Path to store fetched signed exit messages. (default "./")
+ -h, --help Help for fetch
+ --lock-file string The path to the cluster lock file defining the distributed validator cluster. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --publish-address string The URL of the remote API. (default "https://api.obol.tech")
+ --publish-timeout duration Timeout for publishing a signed exit to the publish-address API. (default 30s)
+ --validator-public-key string Public key of the validator to exit, must be present in the cluster lock manifest. If --validator-index is also provided, validator liveliness won't be checked on the beacon chain. [REQUIRED]
+```
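+
+For example (the public key and output path below are placeholders), fetching a fully signed exit message to a local folder might look like:
+
+```shell
+charon exit fetch \
+  --validator-public-key="<validator-pubkey>" \
+  --fetched-exit-path="./exits/"
+```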
+
+### Broadcast a signed exit message
+
+The `charon exit broadcast` subcommand can be used to broadcast either a signed exit message from a file that was downloaded via the `fetch` command, or it can retrieve and broadcast an exit message directly from the API.
+
+```markdown
+charon exit broadcast --help
+Retrieves and broadcasts to the configured beacon node a fully signed validator exit message, aggregated with the available partial signatures retrieved from the publish-address. Can also read a signed exit message from disk, in order to be broadcasted to the configured beacon node.
+
+Usage:
+ charon exit broadcast [flags]
+
+Flags:
+ --beacon-node-endpoints strings Comma separated list of one or more beacon node endpoint URLs. [REQUIRED]
+ --beacon-node-timeout duration Timeout for beacon node HTTP calls. (default 30s)
+ --exit-epoch uint Exit epoch at which the validator will exit, must be the same across all the partial exits. (default 162304)
+ --exit-from-file string Retrieves a signed exit message from a pre-prepared file instead of --publish-address.
+ -h, --help Help for broadcast
+ --lock-file string The path to the cluster lock file defining the distributed validator cluster. (default ".charon/cluster-lock.json")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --private-key-file string The path to the charon enr private key file. (default ".charon/charon-enr-private-key")
+ --publish-address string The URL of the remote API. (default "https://api.obol.tech")
+ --publish-timeout duration Timeout for publishing a signed exit to the publish-address API. (default 30s)
+ --validator-keys-dir string Path to the directory containing the validator private key share files and passwords. (default ".charon/validator_keys")
+ --validator-public-key string Public key of the validator to exit, must be present in the cluster lock manifest. If --validator-index is also provided, validator liveliness won't be checked on the beacon chain. [REQUIRED]
+```
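+
+Two hedged invocation sketches follow, one aggregating partial signatures from the remote API and one reading a previously fetched file from disk; the endpoint, public key, and file path are placeholders:
+
+```shell
+# Aggregate partial signatures from the publish-address API and broadcast
+charon exit broadcast \
+  --beacon-node-endpoints="http://my-beacon-node:5052" \
+  --validator-public-key="<validator-pubkey>"
+
+# Broadcast a signed exit message previously saved to disk
+charon exit broadcast \
+  --beacon-node-endpoints="http://my-beacon-node:5052" \
+  --validator-public-key="<validator-pubkey>" \
+  --exit-from-file="./exits/<exit-message>.json"
+```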
+
+## The `combine` command
+
+### Combine distributed validator key shares into a single validator key
+
+The `combine` command combines many validator key shares into a single Ethereum validator key.
+
+:::warning This command requires Charon to access the distributed validator's private keys, please use caution and keep these private keys securely backed up and secret. :::
+
+```markdown
+charon combine --help
+Combines the private key shares from a threshold of operators in a distributed validator cluster into a set of validator private keys that can be imported into a standard Ethereum validator client.
+
+Warning: running the resulting private keys in a validator alongside the original distributed validator cluster *will* result in slashing.
+
+Usage:
+ charon combine [flags]
+
+Flags:
+ --cluster-dir string Parent directory containing a number of .charon subdirectories from the required threshold of nodes in the cluster. (default ".charon/cluster")
+ --force Overwrites private keys with the same name if present.
+ -h, --help Help for combine
+ --no-verify Disables cluster definition and lock file verification.
+ --output-dir string Directory to output the combined private keys to. (default "./validator_keys")
+ --testnet-chain-id uint Chain ID of the custom test network.
+ --testnet-fork-version string Genesis fork version of the custom test network (in hex).
+ --testnet-genesis-timestamp int Genesis timestamp of the custom test network.
+ --testnet-name string Name of the custom test network.
+```
+
+To run this command, one needs at least a threshold number of node operator's `.charon` directories, which need to be organized into a single folder:
+
+```shell
+tree ./cluster
+cluster/
+├── node0
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node1
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+├── node2
+│ ├── charon-enr-private-key
+│ ├── cluster-lock.json
+│ ├── deposit-data.json
+│ └── validator_keys
+│ ├── keystore-0.json
+│ ├── keystore-0.txt
+│ ├── keystore-1.json
+│ └── keystore-1.txt
+└── node3
+ ├── charon-enr-private-key
+ ├── cluster-lock.json
+ ├── deposit-data.json
+ └── validator_keys
+ ├── keystore-0.json
+ ├── keystore-0.txt
+ ├── keystore-1.json
+ └── keystore-1.txt
+```
+
+That is, each operator's `.charon` directory must be placed in a parent directory and renamed to avoid conflicts.
+
+If, for example, the lock file defines 2 validators, each `validator_keys` directory must contain exactly 4 files: a JSON keystore and a TXT password file for each validator.
+
+Those files must be named with an increasing index associated with the validator in the lock file, starting from 0.
+
+The chosen folder name does not matter, as long as it's different from `.charon`.
+
+At the end of the process `combine` will create a new directory specified by `--output-dir` containing the traditional validator private keystore.
+
+```shell
+charon combine --cluster-dir="./cluster" --output-dir="./combined"
+tree ./combined
+combined
+├── keystore-0.json
+├── keystore-0.txt
+├── keystore-1.json
+└── keystore-1.txt
+```
+
+By default, the `combine` command will refuse to overwrite any private key that is already present in the destination directory.
+
+To force the process, use the `--force` flag.
+
+:::warning
+
+The generated private keys are in the standard [EIP-2335](https://github.com/ethereum/ercs/blob/master/ERCS/erc-2335.md) format, and can be imported in any Ethereum validator client that supports it.
+
+**Ensure your distributed validator cluster is completely shut down for at least two epochs before starting a replacement validator or you are likely to be slashed.** :::
+
+## Host a relay
+
+Relays run a libp2p [circuit relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) server that allows Charon clusters to perform peer discovery, and allows Charon clients behind strict NAT gateways to be reached. If you want to self-host a relay for your cluster(s), the following command will start one.
+
+```markdown
+charon relay --help
+Starts a libp2p circuit relay that charon clients can use to discover and connect to their peers.
+
+Usage:
+ charon relay [flags]
+
+Flags:
+ --auto-p2pkey Automatically create a p2pkey (secp256k1 private key used for p2p authentication and ENR) if none found in data directory. (default true)
+ --data-dir string The directory where charon will store all its internal data. (default ".charon")
+ --debug-address string Listening address (ip and port) for the pprof and QBFT debug API. It is not enabled by default.
+ -h, --help Help for relay
+ --http-address string Listening address (ip and port) for the relay http server serving runtime ENR. (default "127.0.0.1:3640")
+ --log-color string Log color; auto, force, disable. (default "auto")
+ --log-format string Log format; console, logfmt or json (default "console")
+ --log-level string Log level; debug, info, warn or error (default "info")
+ --log-output-path string Path in which to write on-disk logs.
+ --loki-addresses strings Enables sending of logfmt structured logs to these Loki log aggregation server addresses. This is in addition to normal stderr logs.
+ --loki-service string Service label sent with logs to Loki. (default "charon")
+ --monitoring-address string Listening address (ip and port) for the monitoring API (prometheus).
+ --p2p-advertise-private-addresses Enable advertising of libp2p auto-detected private addresses. This doesn't affect manually provided p2p-external-ip/hostname.
+ --p2p-disable-reuseport Disables TCP port reuse for outgoing libp2p connections.
+ --p2p-external-hostname string The DNS hostname advertised by libp2p. This may be used to advertise an external DNS.
+ --p2p-external-ip string The IP address advertised by libp2p. This may be used to advertise an external IP.
+ --p2p-max-connections int Libp2p maximum number of peers that can connect to this relay. (default 16384)
+ --p2p-max-reservations int Updates max circuit reservations per peer (each valid for 30min) (default 512)
+ --p2p-relay-loglevel string Libp2p circuit relay log level. E.g., debug, info, warn, error.
+ --p2p-relays strings Comma-separated list of libp2p relay URLs or multiaddrs. (default [https://0.relay.obol.tech,https://1.relay.obol.tech])
+ --p2p-tcp-address strings Comma-separated list of listening TCP addresses (ip and port) for libP2P traffic. Empty default doesn't bind to local port therefore only supports outgoing connections.
+```
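+
+A minimal sketch of starting a relay directly with the CLI, assuming you want it listening on all interfaces; the external IP is a placeholder for your own public address:
+
+```shell
+charon relay \
+  --http-address="0.0.0.0:3640" \
+  --p2p-external-ip="<your-public-ip>"
+```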
+
+You can also consider adding [alternative public relays](../faq/risks.md) to your cluster by specifying a list of `p2p-relays` in [`charon run`](charon-cli-reference.md#run-the-charon-middleware).
+
+## Experimental commands
+
+These commands are subject to breaking changes until they are moved outside of the `alpha` subcommand in a future release.
+
+### Test your candidate distributed validator cluster
+
+Charon comes with a test suite for understanding the suitability and readiness of a given setup.
+
+```markdown
+charon alpha test --help
+Test subcommands provide test suite to evaluate current cluster setup. Currently there is support for peer connection tests, beacon node and validator API.
+
+Usage:
+ charon alpha test [command]
+
+Available Commands:
+ beacon Run multiple tests towards beacon nodes
+ peers Run multiple tests towards peer nodes
+ validator Run multiple tests towards validator client
+
+Flags:
+ -h, --help Help for test
+
+Use "charon alpha test [command] --help" for more information about a command.
+```
diff --git a/docs/versioned_docs/version-v1.0.0/charon/cluster-configuration.md b/docs/versioned_docs/version-v1.0.0/charon/cluster-configuration.md
new file mode 100644
index 0000000000..57100cb8c5
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/charon/cluster-configuration.md
@@ -0,0 +1,163 @@
+---
+description: Documenting a Distributed Validator Cluster in a standardised file format
+sidebar_position: 3
+---
+
+# Cluster configuration
+
+:::warning
+These cluster definition and cluster lock files are a work in progress. The intention is for the files to be standardised for operating distributed validators via the [EIP process](https://eips.ethereum.org/) when appropriate.
+:::
+
+This document describes the configuration options for running a Charon client or cluster.
+
+A Charon cluster is configured in two steps:
+
+- `cluster-definition.json` which defines the intended cluster configuration before keys have been created in a distributed key generation ceremony.
+- `cluster-lock.json` which includes and extends `cluster-definition.json` with distributed validator BLS public key shares.
+
+In the case of a solo operator running a cluster, the [`charon create cluster`](./charon-cli-reference.md#create-a-full-cluster-locally) command combines both steps into one and just outputs the final `cluster-lock.json` without a DKG step.
+
+## Cluster Definition File
+
+The `cluster-definition.json` is provided as input to the DKG which generates keys and the `cluster-lock.json` file.
+
+### Using the CLI
+
+The [`charon create dkg`](./charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) command is used to create the `cluster-definition.json` file which is used as input to `charon dkg`.
+
+The schema of the `cluster-definition.json` is defined as:
+
+```json
+{
+ "name": "best cluster", // Optional cosmetic identifier
+ "creator": {
+ "address": "0x123..abfc", //ETH1 address of the creator
+ "config_signature": "0x123654...abcedf" // EIP712 Signature of config_hash using creator privkey
+ },
+ "operators": [
+ {
+ "address": "0x123..abfc", // ETH1 address of the operator
+ "enr": "enr://abcdef...12345", // Charon node ENR
+ "config_signature": "0x123456...abcdef", // EIP712 Signature of config_hash by ETH1 address priv key
+ "enr_signature": "0x123654...abcedf" // EIP712 Signature of ENR by ETH1 address priv key
+ }
+ ],
+ "uuid": "1234-abcdef-1234-abcdef", // Random unique identifier.
+ "version": "v1.2.0", // Schema version
+ "timestamp": "2022-01-01T12:00:00+00:00", // Creation timestamp
+ "num_validators": 2, // Number of distributed validators to be created in cluster-lock.json
+ "threshold": 3, // Optional threshold required for signature reconstruction
+ "validators": [
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ },
+ {
+ "fee_recipient_address": "0x123..abfc", // ETH1 fee_recipient address of validator
+ "withdrawal_address": "0x123..abfc" // ETH1 withdrawal address of validator
+ }
+ ],
+ "dkg_algorithm": "foo_dkg_v1", // Optional DKG algorithm for key generation
+ "fork_version": "0x00112233", // Chain/Network identifier
+ "config_hash": "0xabcfde...acbfed", // Hash of the static (non-changing) fields
+ "definition_hash": "0xabcdef...abcedef" // Final hash of all fields
+}
+```
+
+### Using the DV Launchpad
+
+- A `leader/creator`, that wishes to coordinate the creation of a new Distributed Validator Cluster navigates to the launchpad and selects "Create new Cluster".
+- The `leader/creator` uses the user interface to configure all of the important details about the cluster including:
+ - The `Withdrawal Address` for the created validators;
+ - The `Fee Recipient Address` for block proposals if it differs from the withdrawal address;
+ - The number of distributed validators to create;
+ - The list of participants in the cluster specified by Ethereum address(/ENS);
+ - The threshold of fault tolerance required.
+- These key pieces of information form the basis of the cluster configuration. These fields (and some technical fields like DKG algorithm to use) are serialized and merklized to produce the definition's `cluster_definition_hash`. This merkle root will be used to confirm that there is no ambiguity or deviation between definitions when they are provided to Charon nodes.
+- Once the `leader/creator` is satisfied with the configuration they publish it to the launchpad's data availability layer for the other participants to access. (For early development the launchpad will use a centralized backend db to store the cluster configuration. Near production, solutions like IPFS or arweave may be more suitable for the long term decentralization of the launchpad.)
+
+## Cluster Lock File
+
+The `cluster-lock.json` has the following schema:
+
+```json
+{
+ "cluster_definition": {...}, // Cluster definiition json, identical schema to above,
+ "distributed_validators": [ // Length equal to cluster_definition.num_validators.
+ {
+ "distributed_public_key": "0x123..abfc", // DV root pubkey
+ "public_shares": [ "abc...fed", "cfd...bfe"], // Length equal to cluster_definition.operators
+ "fee_recipient": "0x123..abfc" // Defaults to withdrawal address if not set, can be edited manually
+ }
+ ],
+ "lock_hash": "abcdef...abcedef", // Config_hash plus distributed_validators
+ "signature_aggregate": "abcdef...abcedef" // BLS aggregate signature of the lock hash signed by each DV pubkey.
+}
+```
+
+## Cluster Size and Resilience
+
+The cluster size (the number of nodes/operators in the cluster) determines the resilience of the cluster: its ability to remain operational under diverse failure scenarios.
+Larger clusters can tolerate more faulty nodes.
+However, increased cluster size implies higher operational costs and potential network latency, which may negatively affect performance.
+
+The optimal cluster size is therefore a trade-off between resilience (larger is better) and cost-efficiency and performance (smaller is better).
+
+Cluster resilience can be broadly classified into two categories:
+
+- **[Byzantine Fault Tolerance (BFT)](https://en.wikipedia.org/wiki/Byzantine_fault)** - the ability to tolerate nodes that are actively trying to disrupt the cluster.
+- **[Crash Fault Tolerance (CFT)](https://en.wikipedia.org/wiki/Fault_tolerance)** - the ability to tolerate nodes that have crashed or are otherwise unavailable.
+
+Different cluster sizes tolerate different counts of byzantine vs crash nodes.
+In practice, hardware and software crash relatively frequently, while byzantine behaviour is relatively uncommon.
+However, Byzantine Fault Tolerance is crucial for trust minimised systems like distributed validators.
+Thus, cluster size can be chosen to optimise for either BFT or CFT.
+
+The table below lists different cluster sizes and their characteristics:
+
+- `Cluster Size` - the number of nodes in the cluster.
+- `Threshold` - the minimum number of nodes that must collaborate to reach consensus quorum and to create signatures.
+- `BFT #` - the maximum number of byzantine nodes that can be tolerated.
+- `CFT #` - the maximum number of crashed nodes that can be tolerated.
+
+| Cluster Size | Threshold | BFT # | CFT # | Note |
+|--------------|-----------|-------|-------|------------------------------------|
+| 1 | 1 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 2 | 2 | 0 | 0 | ❌ Invalid: Not CFT nor BFT! |
+| 3 | 2 | 0 | 1 | ⚠️ Warning: CFT but not BFT! |
+| 4 | 3 | 1 | 1 | ✅ CFT and BFT optimal for 1 faulty |
+| 5 | 4 | 1 | 1 | |
+| 6 | 4 | 1 | 2 | ✅ CFT optimal for 2 crashed |
+| 7 | 5 | 2 | 2 | ✅ BFT optimal for 2 byzantine |
+| 8 | 6 | 2 | 2 | |
+| 9 | 6 | 2 | 3 | ✅ CFT optimal for 3 crashed |
+| 10 | 7 | 3 | 3 | ✅ BFT optimal for 3 byzantine |
+| 11 | 8 | 3 | 3 | |
+| 12 | 8 | 3 | 4 | ✅ CFT optimal for 4 crashed |
+| 13 | 9 | 4 | 4 | ✅ BFT optimal for 4 byzantine |
+| 14 | 10 | 4 | 4 | |
+| 15 | 10 | 4 | 5 | ✅ CFT optimal for 5 crashed |
+| 16 | 11 | 5 | 5 | ✅ BFT optimal for 5 byzantine |
+| 17 | 12 | 5 | 5 | |
+| 18 | 12 | 5 | 6 | ✅ CFT optimal for 6 crashed |
+| 19 | 13 | 6 | 6 | ✅ BFT optimal for 6 byzantine |
+| 20 | 14 | 6 | 6 | |
+| 21 | 14 | 6 | 7 | ✅ CFT optimal for 7 crashed |
+| 22 | 15 | 7 | 7 | ✅ BFT optimal for 7 byzantine |
+
+The table above is determined by the QBFT consensus algorithm with the
+following formulas from [this](https://arxiv.org/pdf/1909.10194.pdf) paper:
+
+```shell
+n = cluster size
+
+Threshold: min number of honest nodes required to reach quorum given size n
+Quorum(n) = ceiling(2n/3)
+
+BFT #: max number of faulty (byzantine) nodes given size n
+f(n) = floor((n-1)/3)
+
+CFT #: max number of unavailable (crashed) nodes given size n
+crashed(n) = n - Quorum(n)
+```
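+
+As a quick sanity check, these formulas can be evaluated with shell integer arithmetic (a minimal sketch; `ceiling(2n/3)` is computed as the integer division `(2n+2)/3`):
+
+```shell
+n=7
+quorum=$(( (2*n + 2) / 3 ))  # ceiling(2n/3) = 5
+bft=$(( (n - 1) / 3 ))       # floor((n-1)/3) = 2
+cft=$(( n - quorum ))        # 7 - 5 = 2
+echo "size=$n threshold=$quorum bft=$bft cft=$cft"
+```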
diff --git a/docs/versioned_docs/version-v1.0.0/charon/dkg.md b/docs/versioned_docs/version-v1.0.0/charon/dkg.md
new file mode 100644
index 0000000000..fa06e69cd9
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/charon/dkg.md
@@ -0,0 +1,73 @@
+---
+description: Generating private keys for a Distributed Validator requires a Distributed Key Generation (DKG) Ceremony.
+sidebar_position: 2
+---
+
+# Distributed Key Generation
+
+## Overview
+
+A [**distributed validator key**](../int/key-concepts.md#distributed-validator-key) is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+Due to the BLS signature scheme used by proof-of-stake Ethereum, a distributed validator with no fault tolerance (i.e. one where all nodes need to be online to sign every message) could be built from key shares chosen independently by each operator. However, to create a distributed validator that can stay online despite a subset of its nodes going offline, the key shares need to be generated together (4 randomly chosen points on a graph don't all necessarily sit on the same order-three curve). Doing this in a secure manner, with no one party trusted to distribute the keys, requires what is known as a [**distributed key generation ceremony**](../int/key-concepts.md#distributed-validator-key-generation-ceremony).
+
+The Charon client has the responsibility of securely completing a distributed key generation ceremony with its counterparty nodes. The ceremony configuration is outlined in a [cluster definition](../charon/cluster-configuration.md).
+
+## Actors Involved
+
+A distributed key generation ceremony involves `Operators` and their `Charon clients`.
+
+- An `Operator` is identified by their Ethereum address. They will sign a message with this address to authorize their Charon client to take part in the DKG ceremony.
+
+- A `Charon client` is also identified by a public/private key pair, in this instance, the public key is represented as an [Ethereum Node Record](https://eips.ethereum.org/EIPS/eip-778) (ENR). This is a standard identity format for both EL and CL clients. These ENRs are used by each Charon node to identify its cluster peers over the internet, and to communicate with one another in an [end to end encrypted manner](https://github.com/libp2p/go-libp2p/tree/master/p2p/security/noise). These keys need to be created (and backed up) by each operator before they can participate in a cluster creation.
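+
+As an illustrative sketch (the exact command set is documented in the Charon CLI reference), each operator typically generates and backs up this key pair before cluster creation:
+
+```shell
+# Creates a new ENR key pair in the default .charon data directory
+charon create enr
+# Back up the resulting .charon/charon-enr-private-key file; it is needed for the DKG and for running the node
+```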
+
+## Cluster Definition Creation
+
+This cluster definition specifies the intended cluster configuration before keys have been created in a distributed key generation ceremony. The `cluster-definition.json` file can be created with the help of the [Distributed Validator Launchpad](./cluster-configuration.md#using-the-dv-launchpad) or via the [CLI](./cluster-configuration.md#using-the-cli).
+
+## Carrying out the DKG ceremony
+
+Once all participants have signed the cluster definition, they can load the `cluster-definition` file into their Charon client, and the client will attempt to complete the DKG.
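+
+As a rough sketch, starting the ceremony typically looks like the following (the default `.charon` data directory is assumed; consult the Charon CLI reference for the full set of flags, including where the definition file is read from):
+
+```shell
+# Run from the directory containing the .charon folder with cluster-definition.json inside it
+charon dkg --data-dir=.charon
+```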
+
+Charon will read the ENRs in the definition, confirm that its own ENR is present, and then reach out to the deployed relays to find the other ENRs on the network. (Fresh ENRs just have a public key and an IP address of 0.0.0.0 until they are loaded into a live Charon client, which updates the IP address, increments the ENR's nonce, and re-signs it with the client's private key. If a Charon client sees an ENR with a higher nonce, it updates the IP address for that ENR in its address book.)
+
+Once all clients in the cluster can establish a connection with one another and they each complete a handshake (confirm everyone has a matching `cluster_definition_hash`), the ceremony begins.
+
+No user input is required: Charon does the work, outputs the following files to each machine, and then exits.
+
+## Backing up the ceremony artifacts
+
+At the end of a DKG ceremony, each operator will have a number of files output by their Charon client, depending on how many distributed validators the group chose to generate together.
+
+These files are:
+
+- **Validator keystore(s):** These files will be loaded into the operator's validator client and each file represents one share of a Distributed Validator.
+- **A distributed validator cluster lock file:** This `cluster-lock.json` file contains the configuration a distributed validator client like Charon needs to join a cluster capable of operating a number of distributed validators.
+- **Validator deposit data:** This file is used to activate one or more distributed validators on the Ethereum network.
+
+Once the ceremony is complete, all participants should take a backup of the created files. In future versions of Charon, if a participant loses access to these key shares, it will be possible to use a key re-sharing protocol to swap the participant's old keys out of a distributed validator in favor of new keys, allowing the rest of a cluster to recover from a set of lost key shares. However, for now, without a backup the safest thing to do is to exit the validator.
+
+## DKG Verification
+
+For many use cases of distributed validators, the funder/depositor of the validator may not be the same person as the key creators/node operators, as (outside of the base protocol) stake delegation is a common phenomenon. This handover of information introduces a point of trust. How does someone verify that a proposed validator `deposit data` corresponds to a real, fair, DKG with participants the depositor expects?
+
+There are a number of aspects to this trust surface that can be mitigated with a "Don't trust, verify" model. Verification for the time being is easier off chain, until things like a [BLS precompile](https://eips.ethereum.org/EIPS/eip-2537) are brought into the EVM, along with cheap ZKP verification on chain. Some of the questions that can be asked of Distributed Validator Key Generation Ceremonies include:
+
+- Do the public key shares combine together to form the group public key?
+ - This can be checked on chain as it does not require a pairing operation
+ - This can give confidence that a BLS pubkey represents a Distributed Validator, but does not say anything about the custody of the keys. (e.g. Was the ceremony sybil attacked, did they collude to reconstitute the group private key etc.)
+- Do the created BLS public keys attest to their `cluster_definition_hash`?
+ - This is to create a backwards link between newly created BLS public keys and the operator's eth1 addresses that took part in their creation.
+ - If a proposed distributed validator BLS group public key can produce a signature of the `cluster_definition_hash`, it can be inferred that at least a threshold of the operators signed this data.
+ - As the `cluster_definition_hash` is the same for all distributed validators created in the ceremony, the signatures can be aggregated into a group signature that verifies all created group keys at once. This makes it cheaper to verify a number of validators at once on chain.
+- Is there either a VSS or PVSS proof of a fair DKG ceremony?
+ - VSS (Verifiable Secret Sharing) means only operators can verify fairness, as the proof requires knowledge of one of the secrets.
+ - PVSS (Publicly Verifiable Secret Sharing) means anyone can verify fairness, as the proof is usually a Zero Knowledge Proof.
+ - A PVSS of a fair DKG would make it more difficult for operators to collude and undermine the security of the Distributed Validator.
+ - Zero Knowledge Proof verification on chain is currently expensive, but is becoming achievable through the hard work and research of the many ZK based teams in the industry.
+
+## Appendix
+
+### Sample Configuration and Lock Files
+
+Refer to the details [here](../charon/cluster-configuration.md).
diff --git a/docs/versioned_docs/version-v1.0.0/charon/intro.md b/docs/versioned_docs/version-v1.0.0/charon/intro.md
new file mode 100644
index 0000000000..2ac1a2426f
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/charon/intro.md
@@ -0,0 +1,68 @@
+---
+sidebar_position: 1
+description: Charon - The Distributed Validator Client
+---
+
+# Introduction
+
+This section introduces and outlines the Charon _\[kharon]_ middleware, Obol's implementation of DVT. Please see the [key concepts](../int/key-concepts.md) section as background and context.
+
+## What is Charon?
+
+Charon is a GoLang-based, HTTP middleware built by Obol to enable any existing Ethereum validator clients to operate together as part of a distributed validator.
+
+Charon sits as a middleware between a normal validating client and its connected beacon node, intercepting and proxying API traffic. Multiple Charon clients are configured to communicate together to come to consensus on validator duties and behave as a single unified proof-of-stake validator together. The nodes form a cluster that is _byzantine-fault tolerant_ and continues to progress assuming a supermajority of working/honest nodes is met.
+
+
+
+## Charon Architecture
+
+Charon is an Ethereum proof of stake distributed validator (DV) client. Like any validator client, its main purpose is to perform validation duties for the Beacon Chain, primarily attestations and block proposals. The beacon client handles a lot of the heavy lifting, leaving the validator client to focus on fetching duty data, signing that data, and submitting it back to the beacon client.
+
+Charon is designed as a generic event-driven workflow with different components coordinating to perform validation duties. All duties follow the same flow, the only difference being the signed data. The workflow can be divided into phases consisting of one or more components:
+
+
+
+### Determine **when** duties need to be performed
+
+The beacon chain is divided into [slots](https://eth2book.info/capella/part3/config/types/#slot) and [epochs](https://eth2book.info/capella/part3/config/types/#epoch): deterministic, fixed-size chunks of time. The first step is to determine when (in which slot/epoch) duties need to be performed. This is done by the `scheduler` component. It queries the beacon node to detect which validators defined in the cluster lock are active, and what duties they need to perform for the upcoming epoch and slots. When such a slot starts, the `scheduler` emits an event indicating which validator needs to perform what duty.
+
+### Fetch and come to consensus on **what** data to sign
+
+A DV cluster consists of multiple operators each provided with one of the M-of-N threshold BLS private key shares per validator. The key shares are imported into the validator clients which produce partial signatures. Charon threshold aggregates these partial signatures before broadcasting them to the Beacon Chain. _But to threshold aggregate partial signatures, each validator must sign the same data._ The cluster must therefore coordinate and come to a consensus on what data to sign.
+
+`Fetcher` fetches the unsigned duty data from the beacon node upon receiving an event from `Scheduler`. For attestations, this is the unsigned attestation, for block proposals, this is the unsigned block.
+
+The `Consensus` component listens to events from Fetcher and starts a [QBFT](https://docs.goquorum.consensys.net/configure-and-manage/configure/consensus-protocols/qbft/) consensus game with the other Charon nodes in the cluster for that specific duty and slot. When consensus is reached, the resulting unsigned duty data is stored in the `DutyDB`.
+
+### **Wait** for the VC to sign
+
+Charon is a **middleware** distributed validator client. That means Charon doesn’t have access to the validator private key shares and cannot sign anything on demand. Instead, operators import the key shares into industry-standard validator clients (VC) that are configured to connect to their local Charon client instead of their local Beacon node directly.
+
+Charon, therefore, serves the [Ethereum Beacon Node API](https://ethereum.github.io/beacon-APIs/#/) from the `ValidatorAPI` component and intercepts some endpoints while proxying other endpoints directly to the upstream Beacon node.
+
+The VC queries the `ValidatorAPI` for unsigned data which is retrieved from the `DutyDB`. It then signs it and submits it back to the `ValidatorAPI` which stores it in the `PartialSignatureDB`.
+
+### **Share** partial signatures
+
+The `PartialSignatureDB` stores the partially signed data submitted by the local Charon client’s VC. But it also stores all the partial signatures submitted by the VCs of other peers in the cluster. This is achieved by the `PartialSignatureExchange` component that exchanges partial signatures between all peers in the cluster. All Charon clients, therefore, store all partial signatures the cluster generates.
+
+### **Threshold Aggregate** partial signatures
+
+The `SignatureAggregator` is invoked as soon as sufficient (any M of N) partial signatures are stored in the `PartialSignatureDB`. It performs BLS threshold aggregation of the partial signatures resulting in a final signature that is valid for the beacon chain.
+
+### **Broadcast** final signature
+
+Finally, the `Broadcaster` component broadcasts the final threshold aggregated signature to the Beacon client, thereby completing the duty.
+
+### Ports
+
+The following is an outline of the services that can be exposed by Charon.
+
+* **:3600** - The validator REST API. This is the port that serves the consensus layer's [beacon node API](https://ethereum.github.io/beacon-APIs/). This is the port validator clients should talk to instead of their standard consensus client REST API port. Charon subsequently proxies these requests to the upstream consensus client specified by `--beacon-node-endpoints`.
+* **:3610** - Charon P2P port. This is the port that Charon clients use to communicate with one another via TCP. This endpoint should be port-forwarded on your router and exposed publicly, preferably on a static IP address. This IP address should then be set on the charon run command with `--p2p-external-ip` or `CHARON_P2P_EXTERNAL_IP`.
+* **:3620** - Monitoring port. This port hosts a webserver that serves Prometheus metrics on `/metrics`, a readiness endpoint on `/readyz` and a liveness endpoint on `/livez`, and a pprof server on `/debug/pprof`. This port should not be exposed publicly.
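+
+For example, once a Charon node is running locally with default ports, the monitoring endpoints can be spot-checked as follows (a sketch; adjust host and ports to your deployment):
+
+```shell
+curl -s http://localhost:3620/readyz          # readiness
+curl -s http://localhost:3620/livez           # liveness
+curl -s http://localhost:3620/metrics | head  # Prometheus metrics
+```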
+
+## Getting started
+
+For more information on running Charon, take a look at our [Quickstart Guides](../start/quickstart_overview.md).
diff --git a/docs/versioned_docs/version-v1.0.0/charon/networking.md b/docs/versioned_docs/version-v1.0.0/charon/networking.md
new file mode 100644
index 0000000000..b04a2c394a
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/charon/networking.md
@@ -0,0 +1,84 @@
+---
+sidebar_position: 4
+description: Networking
+---
+
+# Charon networking
+
+## Overview
+
+This document describes Charon's networking model which can be divided into two parts: the [_internal validator stack_](networking.md#internal-validator-stack) and the [_external p2p network_](networking.md#external-p2p-network).
+
+## Internal Validator Stack
+
+Charon is a middleware DVT client: it connects to an upstream beacon node, and a downstream validator client connects to it. Each operator should run the whole validator stack (all four client types), either on the same machine or on separate machines. The networking between these components should be private and not exposed to the public internet.
+
+Related Charon configuration flags:
+
+* `--beacon-node-endpoints`: Connects Charon to one or more beacon nodes.
+* `--validator-api-address`: Address for Charon to listen on and serve requests from the validator client.
+
+## External P2P Network
+
+The Charon clients in a DV cluster are connected to each other via a small p2p network consisting of only the clients in the cluster. Peer IP addresses are discovered via an external "relay" server. The p2p connections are over the public internet so the Charon p2p port must be publicly accessible. Charon leverages the popular [libp2p](https://libp2p.io/) protocol.
+
+Related [Charon configuration flags](charon-cli-reference.md):
+
+* `--p2p-tcp-addresses`: Addresses for Charon to listen on and serve p2p requests.
+* `--p2p-relays`: Connect Charon to one or more relay servers.
+* `--private-key-file`: Private key identifying the Charon client.
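+
+As a rough sketch, a `charon run` invocation combining the networking flags above might look like the following (endpoint values and the key path are placeholders, not defaults):
+
+```shell
+charon run \
+  --beacon-node-endpoints="http://my-beacon-node:5052" \
+  --validator-api-address="0.0.0.0:3600" \
+  --p2p-tcp-addresses="0.0.0.0:3610" \
+  --p2p-relays="https://0.relay.obol.tech" \
+  --private-key-file=".charon/charon-enr-private-key"
+```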
+
+### LibP2P Authentication and Security
+
+Each Charon client has a secp256k1 private key. The associated public key is encoded into the [cluster lock file](cluster-configuration.md#Cluster-Lock-File) to identify the nodes in the cluster. For ease of use and to align with the Ethereum ecosystem, Charon encodes these public keys in the [ENR format](https://eips.ethereum.org/EIPS/eip-778), not in [libp2p’s Peer ID format](https://docs.libp2p.io/concepts/fundamentals/peers/).
+
+:::warning
+Each Charon node's secp256k1 private key is critical for authentication and must be kept secure to prevent cluster compromise.
+
+Do not use the same key across multiple clusters, as this can lead to security issues.
+
+For more on p2p security, refer to [libp2p's article](https://docs.libp2p.io/concepts/security/security-considerations).
+:::
+
+Charon currently only supports libp2p tcp connections with [noise](https://noiseprotocol.org/) security and only accepts incoming libp2p connections from peers defined in the cluster lock.
+
+### LibP2P Relays and Peer Discovery
+
+Relays are simple, publicly accessible libp2p servers that support the [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/) protocol. Circuit-relay is a libp2p transport protocol that routes traffic between two peers over a third-party "relay" peer.
+
+Obol hosts a publicly accessible relay at https://0.relay.obol.tech and will work with other organisations in the community to host alternatives. Anyone can host their own relay server for their DV cluster.
+
+Each Charon node knows which peers are in the cluster from the ENRs in the cluster lock file, but their IP addresses are unknown. By connecting to the same relay, nodes establish “relay connections” to each other. Once connected via relay they exchange their known public addresses via libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol. The relay connection is then upgraded to a direct connection. If a node’s public IP changes, nodes once again connect via relay, exchange the new IP, and then connect directly once again.
+
+Note that in order for two peers to discover each other, they must connect to the same relay. Cluster operators should therefore coordinate which relays to use.
+
+Libp2p’s [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify) protocol attempts to automatically detect the public IP address of a Charon client without the need to explicitly configure it. If this however fails, the following two configuration flags can be used to explicitly set the publicly advertised address:
+
+* `--p2p-external-ip`: Explicitly sets the external IP address.
+* `--p2p-external-hostname`: Explicitly sets the external DNS host name.
+
+:::warning
+If a pair of Charon clients are not publicly accessible, for example because they are behind a NAT, they will not be able to upgrade their relay connection to a direct connection. Although this is supported, it isn't recommended: relay connections introduce additional latency and reduced throughput, resulting in decreased validator effectiveness and possible missed block proposals and attestations.
+:::
+
+Libp2p's circuit-relay connections are end-to-end encrypted: even though relay servers accept connections from nodes in multiple different clusters, relays merely route opaque connections. And since Charon only accepts incoming connections from other peers in its own cluster, the use of a relay doesn't allow connections between clusters.
+
+Only the following three libp2p protocols are established between a Charon node and a relay itself:
+
+* [circuit-relay](https://docs.libp2p.io/concepts/nat/circuit-relay/): To establish relay e2e encrypted connections between two peers in a cluster.
+* [identify](https://docs.libp2p.io/concepts/fundamentals/protocols/#identify): Auto-detection of public IP addresses to share with other peers in the cluster.
+* [peerinfo](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfo.go): Exchanges basic application [metadata](https://github.com/ObolNetwork/charon/blob/main/app/peerinfo/peerinfopb/v1/peerinfo.proto) for improved operational metrics and observability.
+
+All other Charon protocols are only established between nodes in the same cluster.
+
+### Scalable Relay Clusters
+
+In order for a Charon client to connect to a relay, it needs the relay's [multiaddr](https://docs.libp2p.io/concepts/fundamentals/addressing/) (containing its public key and IP address). But a single multiaddr can only point to a single relay server which can easily be overloaded if too many clusters connect to it. Charon therefore supports resolving a relay’s multiaddr via HTTP GET request. Since Charon also includes the unique `cluster-hash` header in this request, the relay provider can use [consistent header-based load-balancing](https://cloud.google.com/load-balancing/docs/https/traffic-management-global#traffic_steering_header-based_routing) to map clusters to one of many relays using a single HTTP address.
+
+The relay supports serving its runtime public multiaddrs via its `--http-address` flag.
+
+For example, https://0.relay.obol.tech is actually a load balancer that routes HTTP requests to one of many relays based on the `cluster-hash` header, returning the target relay's multiaddr, which the Charon client then uses to connect to that relay.
+
+The Charon `--p2p-relays` flag therefore supports both multiaddrs and HTTP URLs.
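+
+As a rough sketch of that resolution step (the request is normally made by Charon itself; the exact header name and response format should be confirmed against the relay documentation):
+
+```shell
+# Ask the relay load balancer which relay serves this cluster; it returns that relay's multiaddr(s)
+curl -s -H "cluster-hash: <your-cluster-hash>" https://0.relay.obol.tech
+```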
diff --git a/docs/versioned_docs/version-v1.0.0/dvl/README.md b/docs/versioned_docs/version-v1.0.0/dvl/README.md
new file mode 100644
index 0000000000..1b694a8473
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/dvl/README.md
@@ -0,0 +1,2 @@
+# dvl
+
diff --git a/docs/versioned_docs/version-v1.0.0/dvl/intro.md b/docs/versioned_docs/version-v1.0.0/dvl/intro.md
new file mode 100644
index 0000000000..65199b4ce5
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/dvl/intro.md
@@ -0,0 +1,27 @@
+---
+sidebar_position: 6
+description: A dapp to securely create Distributed Validators alone or with a group.
+---
+
+# DV Launchpad
+
+
+
+In order to activate an Ethereum validator, 32 ETH must be deposited into the official deposit contract.
+
+The vast majority of users that have created validators to date have used the [~~Eth2~~ **Staking Launchpad**](https://launchpad.ethereum.org/), a public good open source website built by the Ethereum Foundation alongside participants that later went on to found Obol. This tool has been wildly successful in the safe and educational creation of a significant number of validators on the Ethereum mainnet.
+
+To facilitate the generation of distributed validator keys amongst remote users with high trust, the Obol Network developed and maintains a website that enables a group of users to come together and create these threshold keys: **The DV Launchpad**.
+
+## Getting started
+
+For more information on running Charon in a UI friendly way through the DV Launchpad, take a look at our [Quickstart Guides](../start/quickstart_overview.md).
+
+## DV Launchpad Links
+
+| Ethereum Network | Launchpad |
+| ---------------- | --------------------------------------- |
+| Mainnet | https://beta.launchpad.obol.tech |
+| Gnosis Chain | https://gnosischain.launchpad.obol.tech |
+| Holesky | https://holesky.launchpad.obol.tech |
+| Sepolia | https://sepolia.launchpad.obol.tech |
diff --git a/docs/versioned_docs/version-v1.0.0/faq/README.md b/docs/versioned_docs/version-v1.0.0/faq/README.md
new file mode 100644
index 0000000000..456ad9139a
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/faq/README.md
@@ -0,0 +1,2 @@
+# faq
+
diff --git a/docs/versioned_docs/version-v1.0.0/faq/dkg_failure.md b/docs/versioned_docs/version-v1.0.0/faq/dkg_failure.md
new file mode 100644
index 0000000000..68edf12b71
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/faq/dkg_failure.md
@@ -0,0 +1,83 @@
+---
+sidebar_position: 4
+description: Handling DKG failure
+---
+
+# Handling DKG failure
+
+While the DKG process has been tested and validated against many different configuration instances, it can still encounter issues which might result in failure.
+
+Our DKG is designed in a way that doesn't allow for inconsistent results: either it finishes correctly for every peer, or it fails.
+
+This is a **safety** feature: you don't want to deposit an Ethereum distributed validator that not every operator can participate in, or that cannot even reach its signing threshold.
+
+The most common source of issues lies in the network stack: if any peer's Internet connection glitches substantially, the DKG will fail.
+
+Charon's DKG doesn't allow peer reconnection once the process is started, but it does allow for re-connections before that.
+
+When you see the following message:
+
+```log
+14:08:34.505 INFO dkg Waiting to connect to all peers...
+```
+
+it means your Charon instance is waiting for all the other cluster peers to start their DKG process. At this stage, peers can disconnect and reconnect at will; the DKG process will still continue.
+
+A log line will confirm the connection of a new peer:
+
+```log
+14:08:34.523 INFO dkg Connected to peer 1 of 3 {"peer": "fantastic-adult"}
+14:08:34.529 INFO dkg Connected to peer 2 of 3 {"peer": "crazy-bunch"}
+14:08:34.673 INFO dkg Connected to peer 3 of 3 {"peer": "considerate-park"}
+```
+
+As soon as all the peers are connected, this message will be shown:
+
+```log
+14:08:34.924 INFO dkg All peers connected, starting DKG ceremony
+```
+
+Past this stage **no disconnections are allowed**, and _all peers must leave their terminals open_ in order for the DKG process to complete: this is a synchronous phase, and every peer is required in order to reach completion.
+
+If for some reason the DKG process fails, you would see error logs that resemble this:
+
+```log
+14:28:46.691 ERRO cmd Fatal error: sync step: p2p connection failed, please retry DKG: context canceled
+```
+
+As the error message suggests, the DKG process needs to be retried.
+
+## Cleaning up the `.charon` directory
+
+One cannot simply retry the DKG process: Charon refuses to overwrite any runtime file in order to avoid inconsistencies and private key loss.
+
+When attempting to re-run a DKG with an unclean data directory - which is either `.charon` or what was specified with the `--data-dir` CLI parameter - this is the error that will be shown:
+
+```log
+14:44:13.448 ERRO cmd Fatal error: data directory not clean, cannot continue {"disallowed_entity": "cluster-lock.json", "data-dir": "/compose/node0"}
+```
+
+The `disallowed_entity` field lists all the files that Charon refuses to overwrite, while `data-dir` is the full path of the runtime directory the DKG process is using.
+
+In order to retry the DKG process one must delete the following entities, if present:
+
+- `validator_keys` directory
+- `cluster-lock.json` file
+- `deposit-data.json` file
+
+:::warning
+The `charon-enr-private-key` file **must be preserved**, failure to do so requires the DKG process to be restarted from the beginning by creating a new cluster definition.
+:::
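+
+For example, assuming the default `.charon` data directory, the cleanup might look like the following sketch (adjust the path if you used `--data-dir`):
+
+```shell
+# Remove only the DKG outputs; keep .charon/charon-enr-private-key in place
+rm -r .charon/validator_keys
+rm .charon/cluster-lock.json .charon/deposit-data.json
+```
+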
+If you're doing a DKG with a custom cluster definition - for example, one created with `charon create dkg` rather than the Obol Launchpad - you can re-use the same file.
+
+Once this process has been completed, the cluster operators can retry a DKG.
+
+## Further debugging
+
+If for some reason the DKG process fails again, node operators are advised to reach out to the Obol team by opening an [issue](https://github.com/ObolNetwork/charon/issues), detailing what troubleshooting steps were taken and providing **debug logs**.
+
+To enable debug logs first clean up the Charon data directory as explained in [the previous paragraph](#cleaning-up-the-charon-directory), then run your DKG command by appending `--log-level=debug` at the end.
+
+In order for the Obol team to debug your issue as quickly and precisely as possible, please provide full logs in textual form, not as screenshots or photos of a display.
+
+Providing complete logs is particularly important, since it allows the team to reconstruct precisely what happened.
diff --git a/docs/versioned_docs/version-v1.0.0/faq/general.md b/docs/versioned_docs/version-v1.0.0/faq/general.md
new file mode 100644
index 0000000000..dcd800773f
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/faq/general.md
@@ -0,0 +1,121 @@
+---
+sidebar_position: 1
+description: Frequently asked questions
+---
+
+# general
+
+import Tabs from "@theme/Tabs"; import TabItem from "@theme/TabItem";
+
+## Frequently asked questions
+
+### General
+
+#### Does Obol have a token?
+
+No. Distributed validators use only Ether.
+
+#### Where can I learn more about Distributed Validators?
+
+Have you checked out our [blog site](https://blog.obol.tech) and [twitter](https://twitter.com/ObolNetwork) yet? Maybe join our [discord](https://discord.gg/n6ebKsX46w) too.
+
+#### Where does the name Charon come from?
+
+[Charon](https://www.theoi.com/Khthonios/Kharon.html) \[kharon] is the Ancient Greek Ferryman of the Dead. He was tasked with bringing people across the Acheron river to the underworld. His fee was one Obol coin, placed in the mouth of the deceased. This tradition of placing a coin or Obol in the mouth of the deceased continues to this day across the Greek world.
+
+#### What are the hardware requirements for running a Charon node?
+
+Charon alone uses negligible disk space of no more than a few MB. However, if you are running your consensus client and execution client on the same server as Charon, you will typically need the same hardware as for running a full Ethereum node. The three tables below show indicative requirements for increasingly performant setups:
+
+|                        | Charon + VC | Beacon Node |
+| ---------------------- | ----------- | ----------- |
+| **CPU (cores)\***      | 1           | 2           |
+| **RAM (GB)**           | 2           | 16          |
+| **Storage**            | 100 MB      | 2 TB        |
+| **Internet Bandwidth** | 10 Mb/s     | 10 Mb/s     |
+
+|                        | Charon + VC | Beacon Node |
+| ---------------------- | ----------- | ----------- |
+| **CPU (cores)\***      | 2           | 4           |
+| **RAM (GB)**           | 3           | 24          |
+| **Storage**            | 100 MB      | 2 TB        |
+| **Internet Bandwidth** | 25 Mb/s     | 25 Mb/s     |
+
+|                        | Charon + VC | Beacon Node |
+| ---------------------- | ----------- | ----------- |
+| **CPU (cores)\***      | 2           | 8           |
+| **RAM (GB)**           | 4           | 32          |
+| **Storage**            | 100 MB      | 2 TB        |
+| **Internet Bandwidth** | 100 Mb/s    | 100 Mb/s    |
+
+\*if using vCPU, aim for 2x the above amounts
+
+For more hardware considerations, check out the [ethereum.org guides](https://ethereum.org/en/developers/docs/nodes-and-clients/run-a-node/#environment-and-hardware) which explores various setups and trade-offs, such as running the node locally or in the cloud.
+
+For now, Geth, Teku & Lighthouse clients are packaged within the docker compose file provided in the [quickstart guides](../start/quickstart_overview.md), so you don't have to install anything else to run a cluster. Just make sure you give them some time to sync once you start running your node.
+
+#### What is the difference between a node, a validator and a cluster?
+
+A node is a single instance of Ethereum EL+CL clients that can communicate with other nodes to maintain the Ethereum blockchain.
+
+A validator is a node that participates in the consensus process by verifying transactions and creating new blocks. Multiple validators can run from the same node.
+
+A cluster is a group of nodes that act together as one or several validators, which allows for a more efficient use of resources, reduces operational costs, and provides better reliability and fault tolerance.
+
+#### Can I migrate an existing Charon node to a new machine?
+
+It is possible to migrate your Charon node to another machine running the same config by moving the `.charon` folder with its contents to the new machine. Make sure the EL and CL clients on the new machine are synced before proceeding with the move, to minimize downtime.
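+
+For example, after stopping the node on the old machine, the folder can be copied over with something like this (host and destination path are placeholders):
+
+```shell
+# Run on the old machine, from the directory containing .charon, with the node stopped
+scp -r .charon user@new-machine:/path/to/charon-distributed-validator-node/
+```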
+
+### Distributed Key Generation
+
+#### What are the min and max numbers of operators for a Distributed Validator?
+
+Currently, the minimum is 4 operators with a threshold of 3.
+
+The threshold (aka quorum) corresponds to the minimum numbers of operators that need to be active for the validator(s) to be able to perform its duties. It is defined by the following formula `n-(ceil(n/3)-1)`. We strongly recommend using this default threshold in your DKG as it maximises liveness while maintaining BFT safety. Setting a 4 out of 4 cluster for example, would make your validator more vulnerable to going offline instead of less vulnerable. You can check the recommended threshold values for a cluster [here](../int/key-concepts.md).
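+
+For example, a 4-node cluster has a threshold of 4 - (ceil(4/3) - 1) = 3, and a 7-node cluster has a threshold of 7 - (ceil(7/3) - 1) = 5.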
+
+### Obol Splits
+
+#### What are Obol Splits?
+
+Obol Splits refers to a collection of composable smart contracts that enable the splitting of validator rewards and/or principal in a non-custodial, trust-minimised manner. Obol Splits contains integrations to enable DVs within Lido, Eigenlayer, and in future a number of other LSPs.
+
+#### Are Obol Splits non-custodial?
+
+Yes. Unless you were to decide to [deploy an editable splitter contract](general.md#can-i-change-the-percentages-in-a-split), Obol Splits are immutable, non-upgradeable, non-custodial, and oracle-free.
+
+#### Can I change the percentages in a split?
+
+Generally Obol Splits are deployed in an immutable fashion, meaning you cannot edit the percentages after deployment. However, if you were to choose to deploy a _controllable_ splitter contract when creating your Split, then yes, the address you select as controller can update the split percentages arbitrarily. A common pattern for this use case is to use a Gnosis SAFE as the controller address for the split, giving a group of entities (usually the operators and principal provider) the ability to update the percentages if need be. A well known example of this pattern is the [Protocol Guild](https://protocol-guild.readthedocs.io/en/latest/03-onchain-architecture.html).
+
+#### How do Obol Splits work?
+
+You can read more about how Obol Splits work [here](../sc/introducing-obol-splits.md).
+
+#### Are Obol Splits open source?
+
+Yes, Obol Splits are licensed under GPLv3 and the source code is available [here](https://github.com/ObolNetwork/obol-splits).
+
+#### Are Obol Splits audited?
+
+The Obol Splits contracts have been audited, though further development has continued on the contracts since. Consult the audit results [here](../sec/smart_contract_audit.md).
+
+#### Are the Obol Splits contracts verified on Etherscan?
+
+Yes, you can view the verified contracts on Etherscan. A list of the contract deployments can be found [here](https://github.com/ObolNetwork/obol-splits?#deployment).
+
+#### Does my cold wallet have to call the Obol Splits contracts?
+
+No. Any address can trigger the contracts to move the funds, they do not need to be a member of the Split either. You can set your cold wallet/custodian address as the recipient of the principal and rewards, and use any hot wallet to pay the gas fees to push the ether into the recipient address.
+
+#### Are there any edge cases I should be aware of when using Obol Splits?
+
+The most important decision is to be aware of whether or not the Split contract you are using has been set up with editability. If a splitter is editable, you should understand what the address that can edit the split does. Is the editor an EOA? Who controls that address? How secure is their seed phrase? Is it a smart contract? What can that contract do? Can the controller contract be upgraded? etc. Generally, the safest thing in Obol's perspective is not to have an editable splitter, and if in future you are unhappy with the configuration, that you exit the validator and create a fresh cluster with new settings that fit your needs.
+
+Another aspect to be aware of is how the splitting of principal from rewards works using the Optimistic Withdrawal Recipient contract. There are edge cases relating to not calling the contracts periodically or ahead of a withdrawal, activating more validators than the contract was configured for, and a worst case mass slashing on the network. Consult the documentation on the contract [here](../sc/introducing-obol-splits.md#optimistic-withdrawal-recipient), its audit [here](../sec/smart_contract_audit.md), and follow up with the core team if you have further questions.
+
+### Debugging Errors in Logs
+
+You can check if the containers on your node are outputting errors by running `docker compose logs` on a machine with a running cluster.
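+
+For example, to filter one service's logs for errors (the service name `charon` is taken from the charon-distributed-validator-node repo; adjust it if your compose file differs):
+
+```shell
+docker compose logs charon 2>&1 | grep -i error
+```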
+
+Diagnose some common errors and view their resolutions [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/faq/errors.mdx).
diff --git a/docs/versioned_docs/version-v1.0.0/faq/peer_score.md b/docs/versioned_docs/version-v1.0.0/faq/peer_score.md
new file mode 100644
index 0000000000..44092da068
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/faq/peer_score.md
@@ -0,0 +1,47 @@
+---
+sidebar_position: 5
+description: Measuring Individual Performance in Distributed Validators
+---
+
+# Peer Score
+
+## Introduction
+
+Validator effectiveness is a critical metric for assessing the health of a staking network. It determines how well validators perform their attestation and block proposal duties. Existing solutions, like RAVER (the Rated Validator Effectiveness Rating), provide an effectiveness score for a validator. In a monolithic validator run by a single operator, validator effectiveness can be considered a proxy for the effectiveness or “score” of that operator. However, this approach falls short when dealing with distributed validators (DVs) maintained by multiple operators.
+
+Peer Score v0 addresses this limitation by introducing a method to evaluate the performance of individual operators within a DV. This enables a more granular assessment of contribution within a distributed setting.
+
+## Key Concepts
+
+- **Distributed Validator (DV):** A validator maintained by a group of operators in a fault-tolerant manner.
+- **Peer:** An individual operator contributing to a DV.
+- **Peer Score:** A metric reflecting the performance of a peer within a DV, calculated as the ratio of completed duties to expected duties.
+- **Operator Score:** An aggregated metric representing the overall effectiveness of an operator across multiple DVs (planned for future iterations).
+
+## Challenges with RAVER in DVs
+
+RAVER assigns a single effectiveness score to the entire DV. This score doesn't reflect the individual contributions of operators within the group. For example, a DV with 95% effectiveness maintained by four operators (A, B, C, and D) doesn't guarantee that each operator has a 95% effectiveness score. It's possible that even if operator D is frequently offline, the remaining operators (A, B, and C) can maintain the overall DV effectiveness.
+
+## Peer Score v0 Calculation
+
+Peer Score v0 utilizes a straightforward formula:
+
+`Peer Score = (Total duties completed by peer) / (Total duties expected by peer)`
+
+This ratio reflects the peer's adherence to its assigned duties within the DV.
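+
+For example, a peer that completed 95 of the 100 duties expected of it over a given period would have a Peer Score of 95/100 = 0.95.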
+
+## Future Iterations
+
+Peer Score v0 lays the foundation for a more comprehensive evaluation system. Planned advancements include:
+
+- **Weighted Duties:** Assigning varying weights to different duties based on their significance to the network.
+- **Decentralization Scores:** Integrating metrics that consider the decentralization of clients and operator locations.
+- **Peer Rating:** An anonymous rating peers can give to other peers to grade their social coordination.
+
+## Use Cases
+
+Peer Score offers valuable insights for various stakeholders:
+
+- **Staking/Restaking Protocols:** Peer Score is a crucial component of Obol’s Techne Credential Program. LSPs and LRPs can utilize Techne Credentials, and hence Peer Score, to identify efficient operators for expanding their operator sets.
+- **DV Operators:** Forming operator collectives based on peer effectiveness and potentially removing underperforming peers from DVs (with Charon v2 cluster mutability).
+- **DV Software Developers:** Establishing a standardized metric for evaluating operator performance across various DV software, enabling the development of new tools and services.
diff --git a/docs/versioned_docs/version-v1.0.0/faq/risks.md b/docs/versioned_docs/version-v1.0.0/faq/risks.md
new file mode 100644
index 0000000000..f4b521517d
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/faq/risks.md
@@ -0,0 +1,44 @@
+---
+sidebar_position: 3
+description: Centralization Risks and mitigation
+---
+
+# Centralization risks and mitigation
+
+## Risk: Obol hosting the relay infrastructure
+
+**Mitigation**: Self-host a relay.
+
+One of the risks associated with Obol hosting the [LibP2P relays](../charon/networking.md) infrastructure that allows peer discovery is that, if Obol-hosted relays go down, peers won't be able to discover each other and perform the DKG. To mitigate this risk, external organizations and node operators can consider self-hosting a relay. This way, if Obol's relays go down, clusters can still operate through other relays in the network. Ensure that all nodes in a cluster use the same relays; nodes connected to different relays will not be able to find each other.
+
+The following non-Obol entities run relays that you can consider adding to your cluster (you can have more than one per cluster, see the `--p2p-relays` flag of [`charon run`](../charon/charon-cli-reference.md#the-run-command)):
+
+| Entity | Relay URL |
+|-----------|---------------------------------------|
+| [DSRV](https://www.dsrvlabs.com/) | https://charon-relay.dsrvlabs.dev |
+| [Infstones](https://infstones.com/) | https://obol-relay.infstones.com/ |
+| [Hashquark](https://www.hashquark.io/) | https://relay-2.prod-relay.721.land/ |
+| [Figment](https://figment.io/) | https://relay-1.obol.figment.io/ |
+| [Node Guardians](https://nodeguardians.io/) | https://obol-relay.nodeguardians.io/ |
+
+## Risk: Obol being able to update Charon code
+
+**Mitigation**: Pin specific docker versions or compile from source on a trusted commit.
+
+Another risk associated with Obol is the Labs team having the ability to update the [Charon code](https://github.com/ObolNetwork/charon) used by node operators within DV clusters, which could introduce vulnerabilities or malicious code. To mitigate this risk, operators can consider pinning specific versions of the Docker image or git repo that have been [thoroughly tested](../sec/overview.md#list-of-security-audits-and-assessments) and accepted by the network. This would ensure that any updates are carefully vetted and reviewed by the community, and only introduced into a running cluster gradually. The labs team will strive to communicate the security or operational impact any Charon update entails, giving operators the chance to decide whether they want potential performance or quality of experience improvements, or whether they remain on a trusted version for longer.
+
+## Risk: Obol hosting the DV Launchpad
+
+**Mitigation**: Use [`create cluster`](../charon/charon-cli-reference.md#the-create-command) or [`create dkg`](../charon/charon-cli-reference.md#creating-the-configuration-for-a-dkg-ceremony) locally and distribute the files manually.
+
+Hosting the first Charon frontend, the [DV Launchpad](../dvl/intro.md), on a centralized server could create a single point of failure, as users would have to rely on Obol's server to access the protocol. This could limit the decentralization of the protocol and make it vulnerable to attacks or downtime. Obol hosting the launchpad on a decentralized network such as IPFS is a first step, but not enough. This is why the Charon code is open source and contains a CLI interface to interact with the protocol locally.
+
+To mitigate the risk of launchpad failure, consider using the `create cluster` or `create dkg` commands locally and distributing the key shares files manually.
+
+## Risk: Obol going bust/rogue
+
+**Mitigation**: Use key recovery.
+
+The final centralization risk associated with Obol is the possibility of the company going bankrupt or acting maliciously, which would lead to a loss of control over the network and potentially cause damage to the ecosystem. To mitigate this risk, Obol has implemented a key recovery mechanism. This would allow the clusters to continue operating and to retrieve full private keys even if Obol is no longer able to provide support.
+
+A guide to recombine key shares into a single private key can be accessed [here](../advanced/quickstart-combine.md).
diff --git a/docs/versioned_docs/version-v1.0.0/fr/README.md b/docs/versioned_docs/version-v1.0.0/fr/README.md
new file mode 100644
index 0000000000..576a013d87
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/fr/README.md
@@ -0,0 +1,2 @@
+# fr
+
diff --git a/docs/versioned_docs/version-v1.0.0/fr/ethereum_and_dvt.md b/docs/versioned_docs/version-v1.0.0/fr/ethereum_and_dvt.md
new file mode 100644
index 0000000000..8e7857696c
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/fr/ethereum_and_dvt.md
@@ -0,0 +1,54 @@
+---
+sidebar_position: 4
+description: Ethereum and its relationship with DVT
+---
+
+# Ethereum and its Relationship with DVT
+
+Our goal for this page is to equip you with the foundational knowledge needed to actively contribute to the advancement of Obol while also directing you to valuable Ethereum and DVT related resources. Additionally, we will shed light on the intersection of DVT and Ethereum, offering curated articles and blog posts to enhance your understanding.
+
+## **Understanding Ethereum**
+
+To grasp the current landscape of Ethereum's PoS development, we encourage you to delve into the wealth of information available on the [Official Ethereum Website.](https://ethereum.org/en/learn/) The Ethereum website serves as a hub for all things Ethereum, catering to individuals at various levels of expertise, whether you're just starting your journey or are an Ethereum veteran. Here, you'll find a trove of resources that cater to diverse learning needs and preferences, ensuring that there's something valuable for everyone in the Ethereum community to discover.
+
+## **DVT & Ethereum**
+
+### Distributed Validator Technology
+
+> "Distributed validator technology (DVT) is an approach to validator security that spreads out key management and signing responsibilities across multiple parties, to reduce single points of failure, and increase validator resiliency.
+>
+> It does this by splitting the private key used to secure a validator across many computers organized into a "cluster". The benefit of this is that it makes it very difficult for attackers to gain access to the key, because it is not stored in full on any single machine. It also allows for some nodes to go offline, as the necessary signing can be done by a subset of the machines in each cluster. This reduces single points of failure from the network and makes the whole validator set more robust." _(ethereum.org, 2023)_
+
+#### Learn More About Distributed Validator technology from [The Official Ethereum Website](https://ethereum.org/en/staking/dvt/)
+
+### How Does DVT Improve Staking on Ethereum?
+
+If you haven’t yet heard, Distributed Validator Technology, or DVT, is the next big thing on The Merge section of the Ethereum roadmap. Learn more about this in our blog post: [What is DVT and How Does It Improve Staking on Ethereum?](https://blog.obol.tech/what-is-dvt-and-how-does-it-improve-staking-on-ethereum/)
+
+
+
+_**Vitalik's Ethereum Roadmap**_
+
+### Deep Dive Into DVT and Charon’s Architecture
+
+Minimizing correlation is vital when designing DVT as Ethereum Proof of Stake is designed to heavily punish correlated behavior. In designing Obol, we’ve made careful choices to create a trust-minimized and non-correlated architecture.
+
+[**Read more about Designing Non-Correlation Here**](https://blog.obol.tech/deep-dive-into-dvt-and-charons-architecture/)
+
+### Performance Testing Distributed Validators
+
+In our mission to help make Ethereum consensus more resilient and decentralised with distributed validators (DVs), it’s critical that we do not compromise on the performance and effectiveness of validators. Earlier this year, we worked with MigaLabs, the blockchain ecosystem observatory located in Barcelona, to perform an independent test to validate the performance of Obol DVs under different configurations and conditions. After taking a few weeks to fully analyse the results together with MigaLabs, we’re happy to share the results of these performance tests.
+
+[**Read More About The Performance Test Results Here**](https://blog.obol.tech/performance-testing-distributed-validators/)
+
+
+
+### More Resources
+
+* [Sorting out Distributed Validator Technology](https://medium.com/nethermind-eth/sorting-out-distributed-validator-technology-a6f8ca1bbce3)
+* [A tour of Verifiable Secret Sharing schemes and Distributed Key Generation protocols](https://medium.com/nethermind-eth/a-tour-of-verifiable-secret-sharing-schemes-and-distributed-key-generation-protocols-3c814e0d47e1)
+* [Threshold Signature Schemes](https://medium.com/nethermind-eth/threshold-signature-schemes-36f40bc42aca)
+
+#### References
+
+* ethereum.org. (2023). Distributed Validator Technology. \[online] Available at: https://ethereum.org/en/staking/dvt/ \[Accessed 25 Sep. 2023].
diff --git a/docs/versioned_docs/version-v1.0.0/fr/testnet.md b/docs/versioned_docs/version-v1.0.0/fr/testnet.md
new file mode 100644
index 0000000000..30ea90ed1b
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/fr/testnet.md
@@ -0,0 +1,120 @@
+---
+sidebar_position: 5
+description: Community testing efforts
+---
+
+# Community Testing
+
+:::tip
+This page looks at the community testing efforts organised by Obol to test Distributed Validators at scale. If you are looking for guides to run a Distributed Validator on testnet you can do so [here](../start/quickstart_overview.md).
+:::
+
+Over the past few years, Obol Labs has coordinated and hosted a series of progressively larger testnets to help harden the Charon client and iterate on the key generation tooling.
+
+The following is a breakdown of the testnet roadmap, the features that were to be completed by each testnet, and their completion date and duration.
+
+## Testnets
+
+- [x] [Dev Net 1](#devnet-1)
+- [x] [Dev Net 2](#devnet-2)
+- [x] [Athena Public Testnet 1](#athena-public-testnet-1)
+- [x] [Bia Public Testnet 2](#bia-public-testnet-2)
+
+## Devnet 1
+
+The first devnet aimed to have a number of trusted operators test out our earliest tutorial flows. The aim was for a single user to complete these tutorials alone, using `docker compose` to spin up 4 Charon clients and 4 different validator clients on a single machine, with a remote consensus client. The keys were created locally in Charon and activated with the existing launchpad.
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** June 2022
+
+**Duration:** 1 week
+
+**Goals:**
+
+- A single user completes a first tutorial alone, using `docker compose` to spin up 4 Charon clients on a single machine, with a remote consensus client. The keys are created locally in Charon and activated with the existing launchpad.
+- Prove that the distributed validator paradigm with 4 separate VC implementations together operating as one logical validator works.
+- Get the basics of monitoring in place, for the following testnet where accurate monitoring will be important due to Charon running across a network.
+
+## Devnet 2
+
+The second devnet aimed to have a number of trusted operators test out our earliest tutorial flows **together** for the first time.
+
+The aim was for groups of 4 testers to complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients and 4 different validator clients (Lighthouse, Teku, Lodestar and Vouch), each on their own machine at each operator's home or their place of choosing, running at least a kiln consensus client.
+
+This devnet was the first time `charon dkg` was tested with users. A core focus of this devnet was to collect network performance data.
+
+This was also the first time Charon was run in variable, non-virtual networks (i.e. the real internet).
+
+**Participants:** Obol Dev Team, Client team advisors.
+
+**State:** Pre-product
+
+**Network:** Kiln
+
+**Completed Date:** July 2022
+
+**Duration:** 2 weeks
+
+**Goals:**
+
+- Groups of 4 testers complete a group onboarding tutorial, using `docker compose` to spin up 4 Charon clients, each on their own machine at each operator's home or their place of choosing, running at least a Kiln consensus client.
+- Operators avoid exposing Charon to the public internet on a static IP address through the use of Obol-hosted relay nodes.
+- Users test `charon dkg`. The launchpad is not used, and this DKG is triggered by a manifest config file created locally by a single operator using the `charon create dkg` command.
+- Effective collection of network performance data, to enable gathering even higher signal performance data at scale during public testnets.
+- Block proposals are in place.
+
+## Athena Public Testnet 1
+
+With tutorials for solo and group flows developed and refined, the goal for public testnet 1 was to get distributed validators into the hands of the wider Obol Community for the first time. The core focus of this testnet was the onboarding experience.
+
+The core output from this testnet was a significant number of public clusters running and a body of public feedback collected.
+
+This was an unincentivized testnet and formed the basis for us to figure out a Sybil resistance mechanism.
+
+**Participants:** Obol Community
+
+**State:** Bare Minimum
+
+**Network:** Görli
+
+**Completed date:** October 2022
+
+**Duration:** 2 weeks cluster setup, 8 weeks operation
+
+**Goals:**
+
+- Get distributed validators into the hands of the Obol Early Community for the first time.
+- Create the first public onboarding experience and gather feedback. This is the first time we need to provide comprehensive instructions for as many platforms (Unix, Mac, Windows) as possible.
+- Make deploying Ethereum validator nodes accessible using the CLI.
+- Generate a backlog of bugs, feature requests, platform requests and integration requests.
+
+## Bia Public Testnet 2
+
+This second public testnet took the learnings from Athena and scaled the network by engaging both the wider at-home validator community and professional operators. This was the first time users set up DVs using the DV Launchpad.
+
+This testnet was also important for learning about the conditions Charon will be subjected to in production. A core output of this testnet was a large number of autonomous public DV clusters running, and the growth of the Obol community with technical ambassadors.
+
+**Participants:** Obol Community, Ethereum staking community
+
+**State:** MVP
+
+**Network:** Görli
+
+**Completed date:** March 2023
+
+**Duration:** 2 weeks cluster setup, 4-8 weeks operation
+
+**Goals:**
+
+- Engage the wider Solo and Professional Ethereum Staking Community.
+- Get integration feedback.
+- Build confidence in Charon after running DVs on an Ethereum testnet.
+- Learn about the conditions Charon will be subjected to in production.
+- Distributed Validator returns are competitive versus single validator clients.
+- Make deploying Ethereum validator nodes accessible using the DV Launchpad.
+- Build comprehensive guides for various profiles to spin up DVs with minimal supervision from the core team.
diff --git a/docs/versioned_docs/version-v1.0.0/int/README.md b/docs/versioned_docs/version-v1.0.0/int/README.md
new file mode 100644
index 0000000000..590c7a8a3d
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/int/README.md
@@ -0,0 +1,2 @@
+# int
+
diff --git a/docs/versioned_docs/version-v1.0.0/int/key-concepts.md b/docs/versioned_docs/version-v1.0.0/int/key-concepts.md
new file mode 100644
index 0000000000..9ca7868592
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/int/key-concepts.md
@@ -0,0 +1,110 @@
+---
+sidebar_position: 2
+description: Some of the key terms in the field of Distributed Validator Technology
+---
+
+# Key concepts
+
+This page outlines a number of the key concepts behind the various technologies that Obol is developing.
+
+## Distributed validator
+
+
+
+A distributed validator is an Ethereum proof-of-stake validator that runs on more than one node/machine. This functionality is possible with the use of **Distributed Validator Technology** (DVT).
+
+Distributed validator technology removes some of the single points of failure in validation. Should <33% of the participating nodes in a DV cluster go offline, the remaining active nodes can still come to consensus on what to sign and can produce valid signatures for their staking duties. This is known as Active/Active redundancy, a common pattern for minimizing downtime in mission critical systems.
+
+## Distributed Validator Node
+
+
+
+A distributed validator node is the set of clients an operator needs to configure and run to fulfil the duties of a Distributed Validator Operator. An operator may also run redundant execution and consensus clients, an execution payload relayer like [mev-boost](https://github.com/flashbots/mev-boost), or other monitoring or telemetry services on the same hardware to ensure optimal performance.
+
+In the above example, the stack includes Geth, Lighthouse, Charon and Teku.
+
+### Execution Client
+
+
+
+An execution client (formerly known as an Eth1 client) specializes in running the EVM and managing the transaction pool for the Ethereum network. These clients provide execution payloads to consensus clients for inclusion into blocks.
+
+Examples of execution clients include:
+
+* [Go-Ethereum](https://geth.ethereum.org/)
+* [Nethermind](https://docs.nethermind.io/)
+* [Erigon](https://github.com/ledgerwatch/erigon)
+
+### Consensus Client
+
+
+
+A consensus client's duty is to run the proof of stake consensus layer of Ethereum, often referred to as the beacon chain.
+
+Examples of Consensus clients include:
+
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/beacon-node)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-bn.html)
+* [Nimbus](https://nimbus.guide/)
+* [Lodestar](https://github.com/ChainSafe/lodestar)
+
+### Distributed Validator Client
+
+
+
+A distributed validator client intercepts the validator client ↔ consensus client communication flow over the [standardised REST API](https://ethereum.github.io/beacon-APIs/#/ValidatorRequiredApi), and focuses on two core duties:
+
+* Coming to consensus on a candidate duty for all validators to sign.
+* Combining signatures from all validators into a distributed validator signature.
+
+The only example of a distributed validator client built with a non-custodial middleware architecture to date is [Charon](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/charon/intro/README.md).
+
+### Validator Client
+
+
+
+A validator client is a piece of code that operates one or more Ethereum validators.
+
+Examples of validator clients include:
+
+* [Vouch](https://www.attestant.io/posts/introducing-vouch/)
+* [Prysm](https://docs.prylabs.network/docs/how-prysm-works/prysm-validator-client/)
+* [Teku](https://docs.teku.consensys.net/en/stable/)
+* [Lighthouse](https://lighthouse-book.sigmaprime.io/api-vc.html)
+
+## Distributed Validator Cluster
+
+
+
+A distributed validator cluster is a collection of distributed validator nodes connected together to service a set of distributed validators generated during a DVK ceremony.
+
+### Distributed Validator Key
+
+
+
+A distributed validator key is a group of BLS private keys that together operate as a threshold key for participating in proof-of-stake consensus.
+
+### Distributed Validator Key Share
+
+One piece of the distributed validator private key.
+
+### Distributed Validator Threshold
+
+The number of nodes in a cluster that need to be online and honest for its distributed validators to remain online is outlined in the following table.
+
+| Cluster Size | Threshold | Note |
+| :----------: | :-------: | --------------------------------------------- |
+| 4 | 3/4 | Minimum threshold |
+| 5 | 4/5 | |
+| 6 | 4/6 | Minimum to tolerate two offline nodes |
+| 7 | 5/7 | Minimum to tolerate two **malicious** nodes |
+| 8 | 6/8 | |
+| 9 | 6/9 | Minimum to tolerate three offline nodes |
+| 10 | 7/10 | Minimum to tolerate three **malicious** nodes |
+
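+The threshold column above follows a two-thirds rule. The sketch below is illustrative only (it is not taken from Charon's source code); it simply computes `ceil(2n/3)`, which reproduces the table.
+
+```typescript
+// Illustrative only: reproduces the threshold column above as ceil(2n/3).
+function clusterThreshold(clusterSize: number): number {
+  return Math.ceil((2 * clusterSize) / 3);
+}
+
+// clusterThreshold(4) === 3, clusterThreshold(7) === 5, clusterThreshold(10) === 7
+```
+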
+### Distributed Validator Key Generation Ceremony
+
+To achieve fault tolerance in a distributed validator, the individual private key shares need to be generated together. Rather than have a trusted dealer produce a private key, split it and distribute it, the preferred approach is to never construct the full private key at any point, by having each operator in the distributed validator cluster participate in what is known as a Distributed Key Generation ceremony.
+
+A distributed validator key generation ceremony is a type of DKG ceremony. A ceremony produces signed validator deposit and exit data, along with all of the validator key shares and their associated metadata. Read more about these ceremonies [here](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/charon/dkg/README.md).
diff --git a/docs/versioned_docs/version-v1.0.0/int/overview.md b/docs/versioned_docs/version-v1.0.0/int/overview.md
new file mode 100644
index 0000000000..74b644091f
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/int/overview.md
@@ -0,0 +1,55 @@
+---
+sidebar_position: 1
+description: An overview of the Obol network
+---
+
+# Overview of Obol
+
+## What is Obol?
+
+Obol Labs is a research and software development team focused on proof-of-stake infrastructure for public blockchain networks. Specific topics of focus are Internet Bonds, Distributed Validator Technology and Multi-Operator Validation. The team currently includes 35 members that are spread across the world.
+
+The core team is building the Distributed Validator Protocol, a protocol to foster trust minimized staking through multi-operator validation. This will enable low-trust access to Ethereum staking yield, which can be used as a core building block in a variety of Web3 products.
+
+## The Network
+
+The network can be best visualized as a work layer that sits directly on top of base layer consensus. This work layer is designed to provide the base layer with more resiliency and promote decentralization as it scales. As Ethereum matures over the coming years, the community will move onto the next great scaling challenge, which is stake centralization. To effectively mitigate these risks, community building and credible neutrality must be used as primary design principles.
+
+Obol is focused on scaling consensus by providing permissionless access to Distributed Validators (DVs). We believe that distributed validators will and should make up a large portion of mainnet validator configurations. In preparation for the first wave of adoption, the network currently utilizes a middleware implementation of Distributed Validator Technology (DVT) to enable the operation of distributed validator clusters that can preserve validators' current client and remote signing infrastructure.
+
+Similar to how roll-up technology laid the foundation for L2 scaling implementations, we believe DVT will do the same for scaling consensus while preserving decentralization. Staking infrastructure is entering its protocol phase of evolution, which must include trust-minimized staking middlewares that can be adopted at scale. Layers like Obol are critical to the long term viability and resiliency of public networks, especially networks like Ethereum. We believe DVT will evolve into a widely used primitive and will ensure the security, resiliency, and decentralization of the public blockchain networks that adopt it.
+
+The Obol Network consists of four core public goods:
+
+* The [Distributed Validator Launchpad](../dvl/intro.md), a user interface for bootstrapping Distributed Validators;
+* [Charon](../charon/intro.md), a middleware client that enables validators to run in a fault-tolerant, distributed manner;
+* [Obol Splits](../sc/introducing-obol-splits.md), a set of solidity smart contracts for the distribution of rewards from Distributed Validators;
+* [Obol Testnets](../fr/testnet.md), distributed validator infrastructure for Ethereum public test networks, to enable any sized operator to test their deployment before running Distributed Validators on mainnet.
+
+### Sustainable Public Goods
+
+The Obol Ecosystem is inspired by previous work on Ethereum public goods and experiments with circular economics. We believe that to unlock innovation in staking use cases, a credibly neutral layer must exist for innovation to flow and evolve vertically. Without this layer, highly available uptime will continue to be a moat, and stake will accumulate amongst a few products.
+
+The Obol Network will become an open, community governed, self-sustaining project over the coming months and years. Together we will incentivize, build, and maintain distributed validator technology that makes public networks a more secure and resilient foundation to build on top of.
+
+## The Vision
+
+The road to decentralizing stake is a long one. At Obol we have divided our vision into two key versions of distributed validators.
+
+### V1 - Trusted Distributed Validators
+
+
+
+The first version of distributed validators will have dispute resolution out of band, meaning you need to know and communicate with your counterparty operators if there is an issue with your shared cluster.
+
+A DV without in-band dispute resolution/incentivisation is still extremely valuable. Individuals and staking-as-a-service providers can deploy DVs on their own to make their validators fault tolerant. Groups can run DVs together, but need to bring their own dispute resolution to the table, whether that be a smart contract of their own, a traditional legal service agreement, or simply high trust between the group.
+
+Obol V1 will utilize retroactive public goods principles to lay the foundation of its economic ecosystem. The Obol Community will responsibly allocate the collected ETH as grants to projects in the staking ecosystem for the entirety of V1.
+
+### V2 - Trustless Distributed Validators
+
+V1 of Charon serves a group of individuals that is small by count but large by stake weight. The long tail of home and small stakers also deserves access to fault-tolerant validation, but they may not personally know enough other operators, to a sufficient level of trust, to run a DV cluster together.
+
+Version 2 of Charon will layer in an incentivisation scheme to ensure any operator not online and taking part in validation is not earning any rewards. Further incentivisation alignment can be achieved with operator bonding requirements that can be slashed for unacceptable performance.
+
+Adding an un-gameable incentivisation layer to threshold validation requires complex interactive cryptography schemes, secure off-chain dispute resolution, and EVM support for proofs of the consensus layer state. As a result, this will be a longer and more complex undertaking than V1, hence the delineation between the two.
diff --git a/docs/versioned_docs/version-v1.0.0/sc/README.md b/docs/versioned_docs/version-v1.0.0/sc/README.md
new file mode 100644
index 0000000000..a56cacadf8
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sc/README.md
@@ -0,0 +1,2 @@
+# sc
+
diff --git a/docs/versioned_docs/version-v1.0.0/sc/introducing-obol-splits.md b/docs/versioned_docs/version-v1.0.0/sc/introducing-obol-splits.md
new file mode 100644
index 0000000000..9ff8c1fe35
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sc/introducing-obol-splits.md
@@ -0,0 +1,91 @@
+---
+sidebar_position: 1
+description: Smart contracts for managing Distributed Validators
+---
+
+# Obol Splits
+
+Obol develops and maintains a suite of smart contracts for use with Distributed Validators. These contracts include:
+
+* Withdrawal Recipients: Contracts used for a validator's withdrawal address.
+* Split contracts: Contracts to split ether across multiple entities. Developed by [Splits.org](https://splits.org).
+* Split controllers: Contracts that can mutate a splitter's configuration.
+
+Two key goals of validator reward management are:
+
+1. To be able to differentiate reward ether from principal ether, such that node operators can be paid a percentage of the _reward_ they accrue for the principal provider rather than a percentage of _principal+reward_.
+2. To be able to withdraw the rewards in an ongoing manner without exiting the validator.
+
+Without access to the consensus layer state in the EVM to check a validator's status or balance, and because the incoming ether arrives via an irregular state transition, neither of these goals is easily satisfiable.
+
+The following sections outline different contracts that can be composed to form a solution for one or both goals.
+
+## Withdrawal Recipients
+
+Validators have two streams of revenue: consensus layer rewards and execution layer rewards. Withdrawal Recipients focus on the former, receiving the balance skimmed from a validator with more than 32 ether on an ongoing basis, and receiving the validator's principal upon exit.
+
+### Optimistic Withdrawal Recipient
+
+
+
+This is the primary withdrawal recipient Obol uses, as it allows for the separation of reward from principal, as well as permitting the ongoing withdrawal of accruing rewards.
+
+An Optimistic Withdrawal Recipient [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipient.sol) takes three inputs when deployed:
+
+* A _principal_ address: The address that controls where the principal ether will be transferred post-exit.
+* A _reward_ address: The address where the accruing reward ether is transferred to.
+* The amount of ether that makes up the principal.
+
+This contract **assumes that any ether that has appeared in its address since it was last able to do balance accounting is skimmed reward from an ongoing validator** (or number of validators), unless the change is > 16 ether. This means balance skimming is immediately claimable as reward, while an inflow of e.g. 31 ether is tracked as a return of principal (despite being slashed in this example).
+
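+As a rough illustration of this rule (pseudocode only, not the contract's actual Solidity), an inflow observed since the last accounting pass could be classified against the 16 ether cut-off like so:
+
+```typescript
+// Pseudocode for the accounting rule described above, not the contract's code:
+// balance increases of 16 ether or less are treated as claimable reward;
+// larger increases are treated as a return of principal.
+const SIXTEEN_ETHER_WEI = 16n * 10n ** 18n;
+
+function classifyInflow(deltaWei: bigint): "reward" | "principal" {
+  return deltaWei > SIXTEEN_ETHER_WEI ? "principal" : "reward";
+}
+```
+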
+:::warning
+
+Worst-case mass slashings can theoretically exceed 16 ether. If this were to occur, the returned principal would be misclassified as reward and distributed to the wrong address. This risk is the drawback that makes this contract variant 'optimistic'. If you intend to use this contract type, **it is important you understand and accept this risk**, however minute.
+
+The alternative is to use a splits.org [waterfall contract](https://docs.splits.org/core/waterfall), which does not allow the claiming of rewards until all principal ether has been returned, meaning validators need to be exited before operators can claim their CL rewards.
+
+:::
+
+This contract fits both design goals and can be used with thousands of validators. It is safe to deploy an Optimistic Withdrawal Recipient with a principal higher than you actually end up using, but you should process the accrued rewards before exiting a validator; otherwise the reward recipients will be short-changed, as the exited balance may be counted as principal instead of reward the next time the contract is updated. If you activate more validators than you specified at deployment, you will record too much ether as reward and overpay your reward address with ether that was principal rather than earnings. Current iterations of this contract are not designed to allow editing the principal amount.
+
+#### OWR Factory Deployment
+
+The OptimisticWithdrawalRecipient contract is deployed via a [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/owr/OptimisticWithdrawalRecipientFactory.sol). The factory is deployed at the following addresses on the following chains.
+
+| Chain | Address |
+| ------- | ----------------------------------------------------------------------------------------------------------------------------- |
+| Mainnet | [0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522](https://etherscan.io/address/0x119acd7844cbdd5fc09b1c6a4408f490c8f7f522) |
+| Goerli | [0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26](https://goerli.etherscan.io/address/0xe9557FCC055c89515AE9F3A4B1238575Fcd80c26) |
+| Holesky | [0x7fec4add6b5ee2b6c1cba232bc6db754794cb6df](https://holesky.etherscan.io/address/0x7fec4add6b5ee2b6c1cba232bc6db754794cb6df) |
+| Sepolia | [0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a](https://sepolia.etherscan.io/address/0xca78f8fda7ec13ae246e4d4cd38b9ce25a12e64a) |
+
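+A deployment call might look roughly like the ethers.js sketch below. The function name and parameter list in the ABI fragment are unverified assumptions made for illustration; consult the factory source linked above for the exact interface before use.
+
+```typescript
+import { ethers } from "ethers";
+
+// Assumption: the factory function name and parameters below mirror the three
+// inputs described above, but have NOT been verified against the contract source.
+const FACTORY_ABI = [
+  "function createOWRecipient(address principalRecipient, address rewardRecipient, uint256 amountOfPrincipalStake) returns (address)",
+];
+
+// Holesky factory address, taken from the table above.
+const FACTORY_ADDRESS = "0x7fec4add6b5ee2b6c1cba232bc6db754794cb6df";
+
+async function deployOwr(signer: ethers.Signer): Promise<void> {
+  const factory = new ethers.Contract(FACTORY_ADDRESS, FACTORY_ABI, signer);
+  // Placeholder recipient addresses; replace with real principal/reward addresses.
+  const tx = await factory.createOWRecipient(
+    "0x1111111111111111111111111111111111111111", // principal recipient (placeholder)
+    "0x2222222222222222222222222222222222222222", // reward recipient (placeholder)
+    ethers.parseEther("32"),                       // principal amount
+  );
+  await tx.wait();
+}
+```
+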
+### Exitable Withdrawal Recipient
+
+A much-awaited feature for proof-of-stake Ethereum is the ability to trigger the exit of a validator using only the withdrawal address. This is tracked in [EIP-7002](https://eips.ethereum.org/EIPS/eip-7002). Support for this feature will be inheritable by all other withdrawal recipient contracts. This will mitigate the risk to a principal provider of funds being stuck, or of a validator being irrecoverably offline.
+
+## Split Contracts
+
+A split, or splitter, is a set of contracts that can divide ether or an ERC20 across a number of addresses. Splits are often used in conjunction with withdrawal recipients. Execution Layer rewards for a DV are directed to a split address through the use of a `fee recipient` address. Splits can be either immutable, or mutable by way of an admin address capable of updating them.
+
+Further information about splits can be found on the splits.org team's [docs site](https://docs.splits.org/). The addresses of their deployments can be found [here](https://docs.splits.org/core/split#addresses).
+
+## Split Controllers
+
+Splits can be completely edited through the use of the `controller` address; however, total editability of a split is not always wanted. A permissive controller and a restrictive controller are given as examples below.
+
+### (Gnosis) SAFE wallet
+
+A [SAFE](https://safe.global/) is a common method to administrate a mutable split. The most well-known deployment of this pattern is the [protocol guild](https://protocol-guild.readthedocs.io/en/latest/3-smart-contract.html). The SAFE can arbitrarily update the split to any set of addresses with any valid set of percentages.
+
+### Immutable Split Controller
+
+This is a [contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitController.sol) that updates one split configuration with another, exactly once. Only a permissioned address can trigger the change. This contract is suitable for changing a split at an unknown point in the future to a configuration pre-defined at deployment.
+
+The Immutable Split Controller [factory contract](https://github.com/ObolNetwork/obol-splits/blob/main/src/controllers/ImmutableSplitControllerFactory.sol) can be found at the following addresses:
+
+| Chain | Address |
+| ------- | ---------------------------------------------------------------------------------------------------------------------------- |
+| Mainnet | [0x49e7cA187F1E94d9A0d1DFBd6CCCd69Ca17F56a4](https://etherscan.io/address/0x49e7cA187F1E94d9A0d1DFBd6CCCd69Ca17F56a4) |
+| Goerli | [0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f](https://goerli.etherscan.io/address/0x64a2c4A50B1f46c3e2bF753CFe270ceB18b5e18f) |
+| Holesky | |
+| Sepolia | |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/README.md b/docs/versioned_docs/version-v1.0.0/sdk/README.md
new file mode 100644
index 0000000000..1bcfa0dc3d
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/README.md
@@ -0,0 +1,2 @@
+# sdk
+
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/classes/README.md b/docs/versioned_docs/version-v1.0.0/sdk/classes/README.md
new file mode 100644
index 0000000000..46d80f843a
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/classes/README.md
@@ -0,0 +1,2 @@
+# classes
+
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/classes/client.md b/docs/versioned_docs/version-v1.0.0/sdk/classes/client.md
new file mode 100644
index 0000000000..1fe162c80a
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/classes/client.md
@@ -0,0 +1,179 @@
+# Client
+
+The Obol SDK `Client` can be used for creating, managing, and activating distributed validators.
+
+### Extends
+
+* `Base`
+
+### Constructors
+
+#### new Client()
+
+> **new Client**(`config`, `signer`?): [`Client`](client.md)
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ----------------- | -------- | --------------------- |
+| `config` | `object` | Client configurations |
+| `config.baseUrl`? | `string` | obol-api url |
+| `config.chainId`? | `number` | Blockchain network ID |
+| `signer`? | `Signer` | ethersJS Signer |
+
+**Returns**
+
+[`Client`](client.md)
+
+Obol-SDK Client instance
+
+An example of how to instantiate obol-sdk Client: [obolClient](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L29)
+
+**Overrides**
+
+`Base.constructor`
+
+**Source**
+
+index.ts:45
+
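+For illustration, a minimal TypeScript instantiation might look like the sketch below. The `baseUrl` value and the throwaway random wallet are assumptions for demonstration only; any ethers.js `Signer` and any chain ID supported by the `FORK_MAPPING` enumeration can be used.
+
+```typescript
+import { Client } from "@obolnetwork/obol-sdk";
+import { ethers } from "ethers";
+
+// Throwaway signer for illustration; use your own ethers Signer in practice.
+const signer = ethers.Wallet.createRandom();
+
+// baseUrl and chainId are optional; 17000 targets Holesky (see FORK_MAPPING).
+const client = new Client({ baseUrl: "https://api.obol.tech", chainId: 17000 }, signer);
+```
+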
+### Methods
+
+#### acceptObolLatestTermsAndConditions()
+
+> **acceptObolLatestTermsAndConditions**(): `Promise`<`string`>
+
+Accepts Obol terms and conditions to be able to create or update data.
+
+**Returns**
+
+`Promise`<`string`>
+
+terms and conditions acceptance success message.
+
+**Throws**
+
+On unverified signature or wrong hash.
+
+An example of how to use acceptObolLatestTermsAndConditions: [acceptObolLatestTermsAndConditions](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L44)
+
+**Source**
+
+index.ts:59
+
+***
+
+#### createClusterDefinition()
+
+> **createClusterDefinition**(`newCluster`): `Promise`<`string`>
+
+Creates a cluster definition which contains cluster configuration.
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ------------ | --------------------------------------------------- | ----------------------- |
+| `newCluster` | [`ClusterPayload`](../interfaces/clusterpayload.md) | The new unique cluster. |
+
+**Returns**
+
+`Promise`<`string`>
+
+config\_hash.
+
+**Throws**
+
+On duplicate entries, missing or wrong cluster keys.
+
+An example of how to use createClusterDefinition: [createObolCluster](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L59)
+
+**Source**
+
+index.ts:105
+
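+As a rough sketch, the flow below accepts the latest terms and conditions and then creates a definition. All addresses are placeholders, and `client` is assumed to have been instantiated with a signer as shown earlier on this page.
+
+```typescript
+import { Client } from "@obolnetwork/obol-sdk";
+
+async function createDefinition(client: Client): Promise<string> {
+  // Terms must be accepted before creating or updating data.
+  await client.acceptObolLatestTermsAndConditions();
+
+  // Placeholder addresses; replace with the real operator and recipient addresses.
+  const cluster = {
+    name: "example-cluster",
+    operators: [
+      { address: "0x1111111111111111111111111111111111111111" },
+      { address: "0x2222222222222222222222222222222222222222" },
+      { address: "0x3333333333333333333333333333333333333333" },
+      { address: "0x4444444444444444444444444444444444444444" },
+    ],
+    validators: [
+      {
+        fee_recipient_address: "0x5555555555555555555555555555555555555555",
+        withdrawal_address: "0x6666666666666666666666666666666666666666",
+      },
+    ],
+  };
+
+  // Returns the config_hash of the new cluster definition.
+  return client.createClusterDefinition(cluster);
+}
+```
+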
+***
+
+#### acceptClusterDefinition()
+
+> **acceptClusterDefinition**(`operatorPayload`, `configHash`): `Promise` <[`ClusterDefinition`](../interfaces/clusterdefinition.md)>
+
+Approves joining a cluster with specific configuration.
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ----------------- | ------------------------------------------------------- | ---------------------------------------------------------------------- |
+| `operatorPayload` | [`OperatorPayload`](../type-aliases/operatorpayload.md) | The operator data including signatures. |
+| `configHash` | `string` | The config hash of the cluster which the operator confirms joining to. |
+
+**Returns**
+
+`Promise` <[`ClusterDefinition`](../interfaces/clusterdefinition.md)>
+
+The cluster definition.
+
+**Throws**
+
+On unauthorized, duplicate entries, missing keys, not found cluster or invalid data.
+
+An example of how to use acceptClusterDefinition: [acceptClusterDefinition](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L106)
+
+**Source**
+
+index.ts:163
+
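+A rough sketch of the operator-side call follows. The ENR and version strings below are placeholders, and `configHash` is the value returned by `createClusterDefinition`.
+
+```typescript
+import { Client } from "@obolnetwork/obol-sdk";
+
+async function joinCluster(client: Client, configHash: string) {
+  // enr and version are required by OperatorPayload; both values are placeholders.
+  const operatorPayload = {
+    enr: "enr:-placeholder-operator-enr",
+    version: "v1.8.0", // placeholder cluster configuration version
+  };
+  return client.acceptClusterDefinition(operatorPayload, configHash);
+}
+```
+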
+***
+
+#### getClusterDefinition()
+
+> **getClusterDefinition**(`configHash`): `Promise` <[`ClusterDefinition`](../interfaces/clusterdefinition.md)>
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ------------ | -------- | ---------------------------------------------------------- |
+| `configHash` | `string` | The configuration hash returned in createClusterDefinition |
+
+**Returns**
+
+`Promise` <[`ClusterDefinition`](../interfaces/clusterdefinition.md)>
+
+The cluster definition for config hash
+
+**Throws**
+
+On not found config hash.
+
+An example of how to use getClusterDefinition: [getObolClusterDefinition](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L74)
+
+**Source**
+
+index.ts:215
+
+***
+
+#### getClusterLock()
+
+> **getClusterLock**(`configHash`): `Promise` <[`ClusterLock`](../interfaces/clusterlock.md)>
+
+**Parameters**
+
+| Parameter | Type | Description |
+| ------------ | -------- | -------------------------------------------- |
+| `configHash` | `string` | The configuration hash in cluster-definition |
+
+**Returns**
+
+`Promise` <[`ClusterLock`](../interfaces/clusterlock.md)>
+
+The matched cluster details (lock) from DB
+
+**Throws**
+
+On not found cluster definition or lock.
+
+An example of how to use getClusterLock: [getObolClusterLock](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L89)
+
+**Source**
+
+index.ts:234
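+
+Taken together with `getClusterDefinition` above, a small read-only lookup sketch (no signer is required; per the Throws notes above, fetching the lock fails until a lock exists):
+
+```typescript
+import { Client } from "@obolnetwork/obol-sdk";
+
+async function inspectCluster(client: Client, configHash: string) {
+  const definition = await client.getClusterDefinition(configHash);
+  const lock = await client.getClusterLock(configHash); // throws if the DKG has not completed yet
+  return { definition, lock };
+}
+```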
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/enumerations/README.md b/docs/versioned_docs/version-v1.0.0/sdk/enumerations/README.md
new file mode 100644
index 0000000000..ec74a1ba13
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/enumerations/README.md
@@ -0,0 +1,2 @@
+# enumerations
+
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/enumerations/fork_mapping.md b/docs/versioned_docs/version-v1.0.0/sdk/enumerations/fork_mapping.md
new file mode 100644
index 0000000000..2af793e39d
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/enumerations/fork_mapping.md
@@ -0,0 +1,10 @@
+Permitted chain IDs
+
+## Enumeration Members
+
+| Enumeration Member | Value | Description |
+| :------ | :------ | :------ |
+| `0x00000000` | `1` | Mainnet. |
+| `0x00001020` | `5` | Goerli/Prater. |
+| `0x00000064` | `100` | Gnosis Chain. |
+| `0x01017000` | `17000` | Holesky. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/functions/README.md b/docs/versioned_docs/version-v1.0.0/sdk/functions/README.md
new file mode 100644
index 0000000000..35b3fffdd7
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/functions/README.md
@@ -0,0 +1,2 @@
+# functions
+
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/functions/validateclusterlock.md b/docs/versioned_docs/version-v1.0.0/sdk/functions/validateclusterlock.md
new file mode 100644
index 0000000000..47f41e1e8c
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/functions/validateclusterlock.md
@@ -0,0 +1,27 @@
+# validateClusterLock
+
+> **validateClusterLock**(`lock`): `Promise`<`boolean`>
+
+Verifies Cluster Lock's validity.
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | --------------------------------------------- | ------------ |
+| `lock` | [`ClusterLock`](../interfaces/clusterlock.md) | cluster lock |
+
+### Returns
+
+`Promise`<`boolean`>
+
+boolean result to indicate if lock is valid
+
+### Throws
+
+on missing keys or values.
+
+An example of how to use validateClusterLock: [validateClusterLock](https://github.com/ObolNetwork/obol-sdk-examples/blob/main/TS-Example/index.ts#L127)
+
+### Source
+
+services.ts:13
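+
+For example, assuming `validateClusterLock` is exported from the package root alongside `Client`, and a lock fetched via `getClusterLock`:
+
+```typescript
+import { Client, validateClusterLock } from "@obolnetwork/obol-sdk";
+
+async function checkLock(client: Client, configHash: string): Promise<boolean> {
+  const lock = await client.getClusterLock(configHash);
+  return validateClusterLock(lock); // resolves to true for a valid lock
+}
+```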
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/index.md b/docs/versioned_docs/version-v1.0.0/sdk/index.md
new file mode 100644
index 0000000000..ff2764ec43
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/index.md
@@ -0,0 +1,44 @@
+---
+hide_title: true
+---
+
+# index
+
+
+
+## Obol SDK
+
+This repo contains the Obol Software Development Kit, for creating Distributed Validators with the help of the [Obol API](https://docs.obol.tech/api).
+
+### Getting Started
+
+Checkout our [docs](https://docs.obol.tech/docs/advanced/quickstart-sdk), [examples](https://github.com/ObolNetwork/obol-sdk-examples/), and SDK [reference](https://obolnetwork.github.io/obol-packages). Further guides and walkthroughs coming soon.
+
+### Enumerations
+
+* [FORK\_MAPPING](enumerations/fork_mapping.md)
+
+### Classes
+
+* [Client](classes/client.md)
+
+### Interfaces
+
+* [ClusterOperator](interfaces/clusteroperator.md)
+* [ClusterCreator](interfaces/clustercreator.md)
+* [ClusterValidator](interfaces/clustervalidator.md)
+* [ClusterPayload](interfaces/clusterpayload.md)
+* [ClusterDefinition](interfaces/clusterdefinition.md)
+* [BuilderRegistrationMessage](interfaces/builderregistrationmessage.md)
+* [BuilderRegistration](interfaces/builderregistration.md)
+* [DepositData](interfaces/depositdata.md)
+* [DistributedValidator](interfaces/distributedvalidator.md)
+* [ClusterLock](interfaces/clusterlock.md)
+
+### Type Aliases
+
+* [OperatorPayload](type-aliases/operatorpayload.md)
+
+### Functions
+
+* [validateClusterLock](functions/validateclusterlock.md)
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/README.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/README.md
new file mode 100644
index 0000000000..95109455d3
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/README.md
@@ -0,0 +1,2 @@
+# interfaces
+
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/builderregistration.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/builderregistration.md
new file mode 100644
index 0000000000..fff24a3c90
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/builderregistration.md
@@ -0,0 +1,10 @@
+# BuilderRegistration
+
+Pre-generated Signed Validator Builder Registration
+
+### Properties
+
+| Property | Type | Description |
+| ----------- | ------------------------------------------------------------- | -------------------------------------------------- |
+| `message` | [`BuilderRegistrationMessage`](builderregistrationmessage.md) | Builder registration message. |
+| `signature` | `string` | BLS signature of the builder registration message. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/builderregistrationmessage.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/builderregistrationmessage.md
new file mode 100644
index 0000000000..45e41f87d8
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/builderregistrationmessage.md
@@ -0,0 +1,10 @@
+Unsigned DV Builder Registration Message
+
+## Properties
+
+| Property | Type | Description |
+| :------ | :------ | :------ |
+| `fee_recipient` | `string` | The DV fee recipient. |
+| `gas_limit` | `number` | Default is 30000000. |
+| `timestamp` | `number` | Timestamp when generating cluster lock file. |
+| `pubkey` | `string` | The public key of the DV. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clustercreator.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clustercreator.md
new file mode 100644
index 0000000000..bf9ef17380
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clustercreator.md
@@ -0,0 +1,8 @@
+Cluster creator data
+
+## Properties
+
+| Property | Type | Description |
+| :------ | :------ | :------ |
+| `address` | `string` | The creator address. |
+| `config_signature?` | `string` | The cluster configuration signature. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterdefinition.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterdefinition.md
new file mode 100644
index 0000000000..f0607e588b
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterdefinition.md
@@ -0,0 +1,26 @@
+# ClusterDefinition
+
+Cluster definition data needed for the DKG
+
+### Extends
+
+* [`ClusterPayload`](clusterpayload.md)
+
+### Properties
+
+| Property | Type | Description | Overrides | Inherited from |
+| ------------------ | -------------------------------------------- | ---------------------------------------------------- | ------------------------------------------------------- | ------------------------------------------------------- |
+| `name` | `string` | The cluster name. | [`ClusterPayload`](clusterpayload.md).`name` | [`ClusterPayload`](clusterpayload.md).`name` |
+| `operators` | [`ClusterOperator`](clusteroperator.md)\[] | The cluster nodes operators addresses. | [`ClusterPayload`](clusterpayload.md).`operators` | [`ClusterPayload`](clusterpayload.md).`operators` |
+| `validators` | [`ClusterValidator`](clustervalidator.md)\[] | The cluster validators information. | [`ClusterPayload`](clusterpayload.md).`validators` | [`ClusterPayload`](clusterpayload.md).`validators` |
+| `creator` | [`ClusterCreator`](clustercreator.md) | The creator of the cluster. | - | - |
+| `version` | `string` | The cluster configuration version. | - | - |
+| `dkg_algorithm` | `string` | The cluster dkg algorithm. | - | - |
+| `fork_version` | `string` | The cluster fork version. | - | - |
+| `uuid` | `string` | The cluster uuid. | - | - |
+| `timestamp` | `string` | The cluster creation timestamp. | - | - |
+| `config_hash` | `string` | The cluster configuration hash. | - | - |
+| `threshold` | `number` | The distributed validator threshold. | - | - |
+| `num_validators` | `number` | The number of distributed validators in the cluster. | - | - |
+| `deposit_amounts?` | `string`\[] | The cluster partial deposits in gwei or 32000000000. | [`ClusterPayload`](clusterpayload.md).`deposit_amounts` | [`ClusterPayload`](clusterpayload.md).`deposit_amounts` |
+| `definition_hash?` | `string` | The hash of the cluster definition. | - | - |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterlock.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterlock.md
new file mode 100644
index 0000000000..6c18632825
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterlock.md
@@ -0,0 +1,13 @@
+# ClusterLock
+
+Cluster Details after DKG is complete
+
+### Properties
+
+| Property | Type | Description |
+| ------------------------ | ---------------------------------------------------- | ----------------------------------------------------------- |
+| `cluster_definition` | [`ClusterDefinition`](clusterdefinition.md) | The cluster definition. |
+| `distributed_validators` | [`DistributedValidator`](distributedvalidator.md)\[] | The cluster distributed validators. |
+| `signature_aggregate` | `string` | The cluster bls signature aggregate. |
+| `lock_hash` | `string` | The hash of the cluster lock. |
+| `node_signatures?` | `string`\[] | Node Signature for the lock hash by the node secp256k1 key. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusteroperator.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusteroperator.md
new file mode 100644
index 0000000000..8c637caaa6
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusteroperator.md
@@ -0,0 +1,12 @@
+Node operator data
+
+## Properties
+
+| Property | Type | Description |
+| :------ | :------ | :------ |
+| `address` | `string` | The operator address. |
+| `enr?` | `string` | The operator ethereum node record. |
+| `fork_version?` | `string` | The cluster fork_version. |
+| `version?` | `string` | The cluster version. |
+| `enr_signature?` | `string` | The operator enr signature. |
+| `config_signature?` | `string` | The operator configuration signature. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterpayload.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterpayload.md
new file mode 100644
index 0000000000..d776e67c76
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clusterpayload.md
@@ -0,0 +1,16 @@
+# ClusterPayload
+
+Cluster configuration
+
+### Extended by
+
+* [`ClusterDefinition`](clusterdefinition.md)
+
+### Properties
+
+| Property | Type | Description |
+| ------------------ | -------------------------------------------- | ---------------------------------------------------- |
+| `name` | `string` | The cluster name. |
+| `operators` | [`ClusterOperator`](clusteroperator.md)\[] | The cluster nodes operators addresses. |
+| `validators` | [`ClusterValidator`](clustervalidator.md)\[] | The cluster validators information. |
+| `deposit_amounts?` | `string`\[] | The cluster partial deposits in gwei or 32000000000. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clustervalidator.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clustervalidator.md
new file mode 100644
index 0000000000..52a500460f
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/clustervalidator.md
@@ -0,0 +1,8 @@
+Validator withdrawal configuration
+
+## Properties
+
+| Property | Type | Description |
+| :------ | :------ | :------ |
+| `fee_recipient_address` | `string` | The validator fee recipient address. |
+| `withdrawal_address` | `string` | The validator reward address. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/depositdata.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/depositdata.md
new file mode 100644
index 0000000000..cd3e6b5756
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/depositdata.md
@@ -0,0 +1,11 @@
+Required deposit data for validator activation
+
+## Properties
+
+| Property | Type | Description |
+| :------ | :------ | :------ |
+| `pubkey` | `string` | The public key of the distributed validator. |
+| `withdrawal_credentials` | `string` | The 0x01 withdrawal address of the DV. |
+| `amount` | `string` | 32 ether. |
+| `deposit_data_root` | `string` | A checksum for the DepositData fields. |
+| `signature` | `string` | BLS signature of the deposit message. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/interfaces/distributedvalidator.md b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/distributedvalidator.md
new file mode 100644
index 0000000000..7d6d33942b
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/interfaces/distributedvalidator.md
@@ -0,0 +1,13 @@
+# DistributedValidator
+
+Required deposit data for validator activation
+
+### Properties
+
+| Property | Type | Description |
+| ------------------------ | ----------------------------------------------- | ---------------------------------------------------------------------------------- |
+| `distributed_public_key` | `string` | The public key of the distributed validator. |
+| `public_shares` | `string`\[] | The public keys of each node's distributed validator key share. |
+| `deposit_data?` | `Partial` <[`DepositData`](depositdata.md)> | The deposit data for activating the DV. |
+| `partial_deposit_data?` | `Partial` <[`DepositData`](depositdata.md)>\[] | The deposit data with partial amounts or full amount for activating the DV. |
+| `builder_registration?` | [`BuilderRegistration`](builderregistration.md) | pre-generated signed validator builder registration to be sent to builder network. |
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/type-aliases/README.md b/docs/versioned_docs/version-v1.0.0/sdk/type-aliases/README.md
new file mode 100644
index 0000000000..ef07201c1b
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/type-aliases/README.md
@@ -0,0 +1,2 @@
+# type-aliases
+
diff --git a/docs/versioned_docs/version-v1.0.0/sdk/type-aliases/operatorpayload.md b/docs/versioned_docs/version-v1.0.0/sdk/type-aliases/operatorpayload.md
new file mode 100644
index 0000000000..c6db7e6a20
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sdk/type-aliases/operatorpayload.md
@@ -0,0 +1,9 @@
+# OperatorPayload
+
+> **OperatorPayload**: `Partial` <[`ClusterOperator`](../interfaces/clusteroperator.md)> & `Required`<`Pick` <[`ClusterOperator`](../interfaces/clusteroperator.md), `"enr"` | `"version"`>>
+
+A partial view of `ClusterOperator` with `enr` and `version` as required properties.
+
+### Source
+
+types.ts:44
diff --git a/docs/versioned_docs/version-v1.0.0/sec/README.md b/docs/versioned_docs/version-v1.0.0/sec/README.md
new file mode 100644
index 0000000000..aeb3b02cce
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sec/README.md
@@ -0,0 +1,2 @@
+# sec
+
diff --git a/docs/versioned_docs/version-v1.0.0/sec/bug-bounty.md b/docs/versioned_docs/version-v1.0.0/sec/bug-bounty.md
new file mode 100644
index 0000000000..e7207db803
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sec/bug-bounty.md
@@ -0,0 +1,180 @@
+---
+sidebar_position: 2
+description: Bug Bounty Policy
+---
+
+# Obol Bug Bounty Program
+
+## Overview
+
+At Obol Labs, we prioritize the security of our distributed validator software and related services. Our Bug Bounty Program is designed to encourage and reward security researchers for identifying and reporting potential vulnerabilities. This initiative supports our commitment to the security and integrity of our products.
+
+## Participant Eligibility
+
+Participants must meet the following criteria to be eligible for the Bug Bounty Program:
+
+- Not reside in countries where participation in such programs is prohibited.
+- Be at least 14 years of age and possess the legal capacity to participate.
+- Have received consent from your employer, if applicable.
+- Not have been employed or contracted by Obol Labs, nor be an immediate family member of an employee, within the last 12 months.
+
+## Scope of the Program
+
+Eligible submissions must involve software and services developed by Obol, specifically under the domains of:
+
+- Charon, the DV Middleware Client
+- Obol DV Launchpad and Public API
+- Obol Splits Contracts
+- Obol Labs hosted Public Relay Infrastructure
+
+Submissions related to the following are considered out of scope:
+
+- Social engineering
+- Rate Limiting (Non-critical issues)
+- Physical security breaches
+- Non-security related UX/UI issues
+- Third-party application vulnerabilities
+- The [Obol](https://obol.tech) static website or the Obol infrastructure
+- The operational security of node operators running or using Obol software
+
+## Program Rules
+
+- Submitted bugs must not have been previously disclosed publicly.
+- Only first reports of vulnerabilities will be considered for rewards; previously reported or known vulnerabilities are ineligible.
+- The severity of the vulnerability, as assessed by our team, will determine the reward amount. See the "Rewards" section for details.
+- Submissions must include a reproducible proof of concept.
+- The Obol security team reserves the right to determine the eligibility and reward for each submission.
+- Program terms may be updated at Obol's discretion.
+- Valid bugs may be disclosed to partner protocols within the Obol ecosystem to enhance overall security.
+
+## Rewards Structure
+
+Rewards are issued based on the severity and impact of the disclosed vulnerability, determined at the discretion of Obol Labs.
+
+### Critical Vulnerabilities: Up to $100,000
+
+A Critical-level vulnerability is one that has a severe impact on the security of the in-production system from an unauthenticated external attacker and requires immediate attention to fix. It is highly likely to have a material impact on validator private key security and/or to result in a loss of funds.
+
+- High impact, high likelihood
+
+Impacts:
+
+- Attacker that is not a member of the cluster can successfully exfiltrate BLS (not K1) private key material from a threshold number of operators in the cluster.
+- Attacker that is not a member of the cluster can achieve the production of arbitrary BLS signatures from a threshold number of operators in the cluster.
+- Attacker can craft a malicious cluster invite capable of subverting even careful review of all data to steal funds during a deposit.
+- Direct theft of any user funds, whether at-rest or in-motion, other than unclaimed yield
+- Direct loss of funds
+- Permanent freezing of funds (fix requires hard fork)
+- Network not being able to confirm new transactions (Total network shutdown)
+- Protocol insolvency
+
+### High Vulnerabilities: Up to $10,000
+
+For significant security risks that impact the system from a position of low trust and require significant effort to fix.
+
+- High impact, medium likelihood
+- Medium impact, high likelihood
+
+Impacts:
+
+- Attacker that is not a member of the cluster can successfully partition the cluster and keep the cluster offline indefinitely.
+- Attacker that is not a member of the cluster can exfiltrate Charon ENR private keys.
+- Attacker that is not a member of the cluster can destroy funds but cannot steal them.
+- Unintended chain split (Network partition)
+- Temporary freezing of network transactions by delaying one block by 500% or more of the average block time of the preceding 24 hours beyond standard difficulty adjustments
+- RPC API crash affecting projects with greater than or equal to 25% of the market capitalization on top of the respective layer
+- Theft of unclaimed yield
+- Theft of unclaimed royalties
+- Permanent freezing of unclaimed yield
+- Permanent freezing of unclaimed royalties
+- Temporary freezing of funds
+- Retrieve sensitive data/files from a running server:
+ - blockchain keys
+ - database passwords
+ - (this does not include non-sensitive environment variables, open source code, or usernames)
+- Taking state-modifying authenticated actions (with or without blockchain state interaction) on behalf of other users without any interaction by that user, such as:
+ - Changing cluster information
+ - Withdrawals
+ - Making trades
+
+### Medium Vulnerabilities: Up to $2,500
+
+For vulnerabilities with a moderate impact, affecting system availability or integrity.
+
+- High impact, low likelihood
+- Medium impact, medium likelihood
+- Low impact, high likelihood
+
+Impacts:
+
+- Attacker that is a member of a cluster can exfiltrate K1 key material from another member.
+- Attacker that is a member of the cluster can denial of service attack enough peers in the cluster to prevent operation of the validator(s)
+- Attacker that is a member of the cluster can bias the protocol in a manner to control the majority of block proposal opportunities.
+- Attacker can get a DV Launchpad user to inadvertently interact with a smart contract that is not a part of normal operation of the launchpad.
+- Increasing network processing node resource consumption by at least 30% without brute force actions, compared to the preceding 24 hours
+- Shutdown of greater than or equal to 30% of network processing nodes without brute force actions, but does not shut down the network
+- Charon cluster identity private key theft
+- A rogue node operator penetrating and compromising other nodes to disturb the cluster’s lifecycle
+- A compromised Charon public relay node leading to cluster topologies being discovered and disrupted
+- Smart contract unable to operate due to lack of token funds
+- Block stuffing
+- Griefing (e.g. no profit motive for an attacker, but damage to the users or the protocol)
+- Theft of gas
+- Unbounded gas consumption
+- Redirecting users to malicious websites (Open Redirect)
+
+### Low Vulnerabilities: Up to $500
+
+For vulnerabilities with minimal impact, unlikely to significantly affect system operations.
+
+- Low impact, medium likelihood
+- Medium impact, low likelihood
+
+Impacts:
+
+- Attacker can sometimes put a Charon node in a state that causes it to drop one out of every one hundred attestations made by a validator
+- Attacker can display bad data on a non-interactive part of the launchpad.
+- Contract fails to deliver promised returns, but doesn't lose value
+- Shutdown of greater than or equal to 10% but less than 30% of network processing nodes without brute force actions, but does not shut down the network
+- Changing details of other users (including modifying browser local storage) without already-connected wallet interaction and with significant user interaction such as:
+ - Iframing leading to modifying the backend/browser state (must demonstrate impact with PoC)
+- Taking over broken or expired outgoing links such as:
+ - Social media handles, etc.
+- Temporarily preventing a user from accessing the target site, such as:
+ - Locking up the victim from login
+ - Cookie bombing, etc.
+
+Rewards may be issued as cash, merchandise, or other forms of recognition, at Obol's discretion. Only one reward will be granted per unique vulnerability.
+
+## The following activities are prohibited by this bug bounty program
+
+- Any testing on mainnet or public testnet deployed code; all testing should be done on local-forks of either public testnet or mainnet
+- Any testing with pricing oracles or third-party smart contracts
+- Attempting phishing or other social engineering attacks against our employees and/or customers
+- Any testing with third-party systems and applications (e.g. browser extensions) as well as websites (e.g. SSO providers, advertising networks)
+- Any denial of service attacks that are executed against project assets
+- Automated testing of services that generates significant amounts of traffic
+- Public disclosure of an unpatched vulnerability in an embargoed bounty
+
+## Submission process
+
+To report a vulnerability, please contact us at security@obol.tech with:
+
+- A detailed description of the vulnerability and its potential impact.
+- Steps to reproduce the issue.
+- Any relevant proof of concept code, screenshots, or documentation.
+- Your contact information.
+
+Incomplete reports may not be eligible for rewards.
+
+## Disclosure and Confidentiality
+
+Obol Labs will disclose vulnerabilities and the identity of the researcher (with consent) after remediation. Researchers are required to maintain confidentiality until official disclosure by Obol Labs.
+
+## Legal and Ethical Compliance
+
+Participants must adhere to all relevant laws and regulations. Obol Labs will not pursue legal action against researchers reporting vulnerabilities in good faith, but reserves the right to respond to violations of this policy.
+
+## Non-Disclosure Agreement (NDA)
+
+Participants may be required to sign an NDA for access to certain proprietary information during their research.
diff --git a/docs/versioned_docs/version-v1.0.0/sec/contact.md b/docs/versioned_docs/version-v1.0.0/sec/contact.md
new file mode 100644
index 0000000000..e66e1663e2
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sec/contact.md
@@ -0,0 +1,10 @@
+---
+sidebar_position: 3
+description: Security details for the Obol Network
+---
+
+# Contacts
+
+Please email security@obol.tech to report a security incident, vulnerability, or bug, or to inquire about Obol's security.
+
+Also, visit the [obol security repo](https://github.com/ObolNetwork/obol-security) for more details.
diff --git a/docs/versioned_docs/version-v1.0.0/sec/ev-assessment.md b/docs/versioned_docs/version-v1.0.0/sec/ev-assessment.md
new file mode 100644
index 0000000000..3b27ebc681
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sec/ev-assessment.md
@@ -0,0 +1,295 @@
+---
+sidebar_position: 4
+description: Software Development Security Assessment
+---
+
+# ev-assessment
+
+## Software Development at Obol
+
+When hardening a project, its technical security, its team members' operational security, and the security of the software development practices in use by the team are some of the most critical areas to secure. Many hacks and compromises in the space to date have been the result of these attack vectors rather than exploits of the software itself.
+
+With this in mind, in January 2023 the Obol team retained the expertise of Ethereal Ventures' security researcher Alex Wade to interview key stakeholders and produce a report on the team's Software Development Lifecycle.
+
+The page below is the result of that report. Some sensitive information has been redacted, and responses to the recommendations have been added, detailing the actions the Obol team has taken to mitigate the issues highlighted.
+
+## Obol Report
+
+**Prepared by: Alex Wade (Ethereal Ventures)** **Date: Jan 2023**
+
+Over the past month, I worked with Obol to review their software development practices in preparation for their upcoming security audits. My goals were to review and analyze:
+
+* Software development processes
+* Vulnerability disclosure and escalation procedures
+* Key personnel risk
+
+The information in this report was collected through a series of interviews with Obol’s project leads.
+
+### Contents
+
+* Background Info
+* Analysis - Cluster Setup and DKG
+ * Key Risks
+ * Potential Attack Scenarios
+* Recommendations
+ * R1: Users should deploy cluster contracts through a known on-chain entry point
+ * R2: Users should deposit to the beacon chain through a pool contract
+ * R3: Raise the barrier to entry to push an update to the Launchpad
+* Additional Notes
+ * Vulnerability Disclosure
+ * Key Personnel Risk
+
+### Background Info
+
+**Each team lead was asked to describe Obol in terms of its goals, objectives, and key features.**
+
+#### What is Obol?
+
+Obol builds DVT (Distributed Validator Technology) for Ethereum.
+
+#### What is Obol’s goal?
+
+Obol’s goal is to solve a classic distributed systems problem: uptime.
+
+Rather than requiring Ethereum validators to stake on their own, Obol allows groups of operators to stake together. Using Obol, a single validator can be run cooperatively by multiple people across multiple machines.
+
+In theory, this architecture provides validators with some redundancy against common issues: server and power outages, client failures, and more.
+
+#### What are Obol’s objectives?
+
+Obol’s business objective is to provide base-layer infrastructure to support a distributed validator ecosystem. As Obol provides base layer technology, other companies and projects will build on top of Obol.
+
+Obol’s business model is to eventually capture a portion of the revenue generated by validators that use Obol infrastructure.
+
+#### What is Obol’s product?
+
+Obol’s product consists of three main components, each run by its own team: a webapp, a client, and smart contracts.
+
+* [DV Launchpad](../dvl/intro.md): A webapp to create and manage distributed validators.
+* [Charon](../charon/intro.md): A middleware client that enables operators to run distributed validators.
+* [Solidity](../sc/introducing-obol-splits.md): Withdrawal and fee recipient contracts for use with distributed validators.
+
+### Analysis - Cluster Setup and DKG
+
+The Launchpad guides users through the process of creating a cluster, which defines important parameters like the validator’s fee recipient and withdrawal addresses, as well as the identities of the operators in the cluster. In order to ensure their cluster configuration is correct, users need to rely on a few different factors.
+
+**First, users need to trust the Charon client** to perform the DKG correctly, and validate things like:
+
+* Config file is well-formed and is using the expected version
+* Signatures and ENRs from other operators are valid
+* Cluster config hash is correct
+* DKG succeeds in producing valid signatures
+* Deposit data is well-formed and is correctly generated from the cluster config and DKG.
+
+However, Charon’s validation is limited to the digital: signature checks, cluster file syntax, etc. It does NOT help would-be operators determine whether the other operators listed in their cluster definition are the real people with whom they intend to start a DVT cluster. So -
+
+**Second, users need to come to social consensus with fellow operators.** While the cluster is being set up, it’s important that each operator is an active participant. Each member of the group must validate and confirm that:
+
+* the cluster file correctly reflects their address and node identity, and reflects the information they received from fellow operators
+* the cluster parameters are expected – namely, the number of validators and signing threshold
+
+**Finally, users need to perform independent validation.** Each user should perform their own validation of the cluster definition:
+
+* Is my information correct? (address and ENR)
+* Does the information I received from the group match the cluster definition?
+* Is the ETH2 deposit data correct, and does it match the information in the cluster definition?
+* Are the withdrawal and fee recipient addresses correct?
+
+These final steps are potentially the most difficult, and may require significant technical knowledge.
+
+### Key Risks
+
+#### 1. Validation of Contract Deployment and Deposit Data Relies Heavily on Launchpad
+
+From my interviews, it seems that the user deploys both the withdrawal and fee recipient contracts through the Launchpad.
+
+What I’m picturing is that during the first parts of the cluster setup process, the user is prompted to sign one or more transactions deploying the withdrawal and fee recipient contracts to mainnet. The Launchpad apparently uses an npm package to deploy these contracts: `0xsplits/splits-sdk`, which I assume provides either JSON artifacts or a factory address on chain. The Launchpad then places the deployed contracts into the cluster config file, and the process moves on.
+
+If an attacker has published a malicious update to the Launchpad (or compromised an underlying dependency), the contracts deployed by the Launchpad may be malicious. The questions I’d like to pose are:
+
+* How does the group creator know the Launchpad deployed the correct contracts?
+* How does the rest of the group know the creator deployed the contracts through the Launchpad?
+
+My understanding is that this ultimately comes down to the independent verification that each of the group’s members performs during and after the cluster’s setup phase.
+
+At its worst, this verification might consist solely of the cluster creator confirming to the others that, yes, those addresses match the contracts I deployed through the Launchpad.
+
+A more sophisticated user might verify that not only do the addresses match, but the deployed source code looks roughly correct. However, this step is far out of the realm of many would-be validators. To be really certain that the source code is correct would require auditor-level knowledge.
+
+The risk is that:
+
+* the deployed contracts are NOT the correctly-configured 0xsplits waterfall/fee splitter contracts
+* most users are ill-equipped to make this determination themselves
+* we don’t want to trust the Launchpad as the single source of truth
+
+In the worst case, the cluster may end up depositing with malicious withdrawal or fee recipient credentials. If unnoticed, this may net an attacker the entire withdrawal amount, once the cluster exits.
+
+Note that the same (or similar) risks apply to validation of deposit data, which has the potential to be similarly difficult. I’m a little fuzzy on which part of the Obol stack actually generates the deposit data / deposit transaction, so I can’t speak to this as much. However, I think the mitigation for both of these is roughly the same - read on!
+
+**Mitigation:**
+
+It’s certainly a good idea to make it harder to deploy malicious updates to the Launchpad, but this may not be entirely possible. A higher-yield strategy may be to educate and empower users to perform independent validation of the DVT setup process - without relying on information fed to them by Charon and the Launchpad.
+
+I’ve outlined some ideas for this in #R1 and #R2.
+
+#### 2. Social Consensus, aka “Who sends the 32 ETH?”
+
+Depositing to the beacon chain requires a total of 32 ETH. Obol’s product allows multiple operators to act as a single validator together, which means would-be operators need to agree on how to fund the 32 ETH needed to initiate the deposit.
+
+It is my understanding that currently, this process comes down to trust and loose social consensus. Essentially, the group needs to decide who chips in what amount together, and then trust someone to take the 32 ETH and complete the deposit process correctly (without running away with the money).
+
+Granted, the initial launch of Obol will be open only to a small group of people as the kinks in the system get worked out - but in preparation for an eventual public release, the deposit process needs to be much simpler and far less reliant on trust.
+
+Mitigation: See #R2.
+
+### Potential Attack Scenarios
+
+During the interview process, I learned that each of Obol’s core components has its own GitHub repo, and that each repo has roughly the same structure in terms of organization and security policies. For each repository:
+
+* There are two overall GitHub organization administrators, and a number of people have administrative control over individual repositories.
+* In order to merge PRs, the submitter needs:
+ * CI/CD checks to pass
+ * Review from one person (anyone at Obol)
+
+Of course, admin access also means the ability to change these settings - so repo admins could theoretically merge PRs without checks passing and without review/approval, and organization admins can control the full GitHub organization.
+
+The following scenarios describe the impact an attack may have.
+
+**1. Publishing a malicious version of the Launchpad, or compromising an underlying dependency**
+
+* Reward: High
+* Difficulty: Medium-Low
+
+As described in Key Risks, publishing a malicious version of the Launchpad has the potential to net the largest payout for an attacker. By tampering with the cluster’s deposit data or withdrawal/fee recipient contracts, an attacker stands to gain 32 ETH or more per compromised cluster.
+
+During the interviews, I learned that merging PRs to main in the Launchpad repo triggers an action that publishes to the site. Since merges can be performed by any authorized Obol developer, developers are prime targets for social engineering attacks.
+
+Additionally, the use of the `0xsplits/splits-sdk` NPM package to aid in contract deployment may represent a supply chain attack vector. It may be that this applies to other Launchpad dependencies as well.
+
+In any case, with a fairly large surface area and high potential reward, this scenario represents a credible risk to users during the cluster setup and DKG process.
+
+See #R1, #R2, and #R3 for some ideas to address this scenario.
+
+**2. Publishing a malicious version of Charon to new operators**
+
+* Reward: Medium
+* Difficulty: High
+
+During the cluster setup process, Charon is responsible both for validating the cluster configuration produced by the Launchpad, as well as performing a DKG ceremony between a group’s operators.
+
+If new operators use a malicious version of Charon to perform this process, it may be possible to tamper with both of these responsibilities, or even get access to part or all of the underlying validator private key created during DKG.
+
+However, the difficulty of this type of attack seems quite high. An attacker would first need to carry out the same type of social engineering attack described in scenario 1 to publish and tag a new version of Charon. Crucially, users would also need to install the malicious version - unlike the Launchpad, an update here is not pushed directly to users.
+
+As long as Obol is clear and consistent with communication around releases and versioning, it seems unlikely that a user would both install a brand-new, unannounced release, and finish the cluster setup process before being warned about the attack.
+
+**3. Publishing a malicious version of Charon to existing validators**
+
+* Reward: Low
+* Difficulty: High
+
+Once a distributed validator is up and running, much of the danger has passed. As a middleware client, Charon sits between a validator’s consensus and validator clients. As such, it shouldn’t have direct access to a validator’s withdrawal keys nor signing keys.
+
+If existing validators update to a malicious version of Charon, the worst thing an attacker could theoretically do is likely to slash the validator. However, assuming Charon has no access to any private keys, this would be predicated on one or more validator clients connected to Charon also failing to prevent the signing of a slashable message. In practice, a compromised Charon client is more likely to pose liveness risks than safety risks.
+
+This is not likely to be particularly motivating to potential attackers - and paired with the high difficulty described above, this scenario seems unlikely to cause significant issues.
+
+### Recommendations
+
+#### R1: Users should deploy cluster contracts through a known on-chain entry point
+
+During setup, users should only sign one transaction via the Launchpad - to a contract located at an Obol-held ENS (e.g. `launchpad.obol.eth`). This contract should deploy everything needed for the cluster to operate, like the withdrawal and fee recipient contracts. It should also initialize them with the provided reward split configuration (and any other config needed).
+
+Compared to using an NPM library to supply a factory address or JSON artifacts, this approach has the benefit of being both:
+
+* **Harder to compromise:** as long as the user knows launchpad.obol.eth, it’s pretty difficult to trick them into deploying the wrong contracts.
+* **Easier to validate** for non-technical users: the Obol contract can be queried for deployment information via etherscan. For example:
+
+
+
+Note that in order for this to be successful, Obol needs to provide detailed steps for users to perform manual validation of their cluster setups. Users should be able to treat this as a “checklist:”
+
+* Did I send a transaction to `launchpad.obol.eth`?
+* Can I use the ENS name to locate and query the deployment manager contract on etherscan?
+* If I input my address, does etherscan report the configuration I was expecting?
+ * withdrawal address matches
+ * fee recipient address matches
+ * reward split configuration matches
+
+As long as these steps are plastered all over the place (i.e. not just on the Launchpad) and Obol puts in effort to educate users about the process, this approach should allow users to validate cluster configurations themselves - regardless of Launchpad or NPM package compromise.
+
+**Obol’s response**
+
+Roadmapped: add the ability for the OWR factory to claim and transfer its reverse resolution ownership.
+
+#### R2: Users should deposit to the beacon chain through a pool contract
+
+Once cluster setup and DKG is complete, a group of operators should deposit to the beacon chain by way of a pool contract. The pool contract should:
+
+* Accept ETH from any of the group’s operators
+* Stop accepting ETH when the contract’s balance hits (32 ETH \* number of validators)
+* Make it easy to pull the trigger and deposit to the beacon chain once the critical balance has been reached
+* Offer all of the group’s operators a “bail” option at any point before the deposit is triggered
+
+Ideally, this contract is deployed during the setup process described in #R1, as another step toward allowing users to perform independent validation of the process.
+
+Rather than relying on social consensus, this should:
+
+* Allow operators to fund the validator without needing to trust any single party
+* Make it harder to mess up the deposit or send funds to some malicious actor, as the pool contract should know what the beacon deposit contract address is
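+
+As a rough illustration of this idea, the following minimal Solidity sketch pools contributions for a single validator. It is not an Obol contract: the `ClusterDepositPool` name and structure are hypothetical, and only the `IDepositContract` interface mirrors the beacon chain deposit contract.
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+/// Hypothetical sketch of the pool described above, for a single validator.
+/// It is NOT an Obol contract; the name and structure are illustrative only.
+interface IDepositContract {
+    // Mirrors the beacon chain deposit contract interface.
+    function deposit(
+        bytes calldata pubkey,
+        bytes calldata withdrawal_credentials,
+        bytes calldata signature,
+        bytes32 deposit_data_root
+    ) external payable;
+}
+
+contract ClusterDepositPool {
+    uint256 public constant TARGET = 32 ether; // 32 ETH * number of validators (1 here)
+    IDepositContract public immutable depositContract;
+    mapping(address => uint256) public contributed;
+    bool public deposited;
+
+    constructor(IDepositContract _depositContract) {
+        depositContract = _depositContract;
+    }
+
+    /// Accept ETH from any operator until the target balance is reached.
+    receive() external payable {
+        require(!deposited, "already deposited");
+        require(address(this).balance <= TARGET, "pool full");
+        contributed[msg.sender] += msg.value;
+    }
+
+    /// Bail-out: any operator can reclaim their contribution before the deposit fires.
+    function bail() external {
+        require(!deposited, "already deposited");
+        uint256 amount = contributed[msg.sender];
+        contributed[msg.sender] = 0;
+        payable(msg.sender).transfer(amount);
+    }
+
+    /// Once the target is reached, anyone can trigger the beacon chain deposit,
+    /// so no single party ever custodies the pooled 32 ETH.
+    function triggerDeposit(
+        bytes calldata pubkey,
+        bytes calldata withdrawalCredentials,
+        bytes calldata signature,
+        bytes32 depositDataRoot
+    ) external {
+        require(address(this).balance == TARGET, "target not reached");
+        deposited = true;
+        depositContract.deposit{value: TARGET}(pubkey, withdrawalCredentials, signature, depositDataRoot);
+    }
+}
+```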
+
+**Obol’s response**
+
+Roadmapped: give the operators a streamlined, secure way to deposit Ether (ETH) to the beacon chain collectively, satisfying specific conditions:
+
+* Pooling from multiple operators.
+* Ceasing to accept ETH once a critical balance is reached, defined as 32 ETH multiplied by the number of validators.
+* Facilitating an immediate deposit to the beacon chain once the target balance is reached.
+* Providing a 'bail-out' option for operators to withdraw their contribution before the group's deposit to the beacon chain is initiated.
+
+#### R3: Raise the barrier to entry to push an update to the Launchpad
+
+Currently, any repo admin can publish an update to the Launchpad unchecked.
+
+Given the risks and scenarios outlined above, consider amending this process so that the sole compromise of either admin is not sufficient to publish to the Launchpad site. It may be worthwhile to require both admins to approve publishing to the site.
+
+Along with simply adding additional prerequisites to publish an update to the Launchpad, ensure that both admins have enabled some level of multi-factor authentication on their GitHub accounts.
+
+**Obol’s response**
+
+We removed individuals’ ability to merge changes without review, enforced MFA and signed commits, and employed the Bulldozer bot to merge PRs automatically once all checks pass.
+
+### Additional Notes
+
+#### Vulnerability Disclosure
+
+During the interviews, I got some conflicting information when asking about Obol’s vulnerability disclosure process.
+
+Some interviewees directed me towards Obol’s security repo, which details security contacts: [ObolNetwork/obol-security](https://github.com/ObolNetwork/obol-security), while some answered that disclosure should happen primarily through Immunefi. While these may both be part of the correct answer, it seems that Obol’s disclosure process may not be as well-defined as it could be. Here are some notes:
+
+* I wasn’t able to find information about Obol on Immunefi. I also didn’t find any reference to a security contact or disclosure policy in Obol’s docs.
+* When looking into the obol security repo, I noticed broken links in a few of the sections in README.md and SECURITY.md:
+ * Security policy
+ * More Information
+* Some of the text and links in the Bug Bounty Program don’t seem to apply to Obol (see text referring to Vaults and Strategies).
+* The Receiving Disclosures section does not include a public key with which submitters can encrypt vulnerability information.
+
+It’s my understanding that these items are probably lower priority due to Obol’s initial closed launch - but they should be squared away soon!
+
+**Obol’s response**
+
+We addressed all of the concerns in the obol-security repository:
+
+1. The security policy link has been fixed
+2. The Bug Bounty program received an overhaul and clearly states rewards, eligibility, and scope
+3. We list two GPG public keys with which submitters can encrypt vulnerability reports.
+
+We are actively working towards integrating Immunefi in our security pipeline.
+
+#### Key Personnel Risk
+
+A final section on the specifics of key personnel risk faced by Obol has been redacted from the original report. The particular areas of control highlighted were GitHub organization ownership and domain name control.
+
+**Obol’s response**
+
+These risks have been mitigated by adding an extra admin to the GitHub organization and by setting up a second DNS stack in case the primary one fails, along with general operational security improvements.
diff --git a/docs/versioned_docs/version-v1.0.0/sec/overview.md b/docs/versioned_docs/version-v1.0.0/sec/overview.md
new file mode 100644
index 0000000000..30688d65f5
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sec/overview.md
@@ -0,0 +1,36 @@
+---
+sidebar_position: 1
+description: Security Overview
+---
+
+# Overview
+
+This page serves as an overview of the Obol Network from a security point of view.
+
+This page is updated quarterly. The last update was on June 19, 2024.
+
+## Table of Contents
+
+* [Overview](overview.md#overview)
+ * [Table of Contents](overview.md#table-of-contents)
+ * [List of Security Audits and Assessments](overview.md#list-of-security-audits-and-assessments)
+ * [Security focused documents](overview.md#security-focused-documents)
+ * [Bug Bounty](overview.md#bug-bounty)
+
+## List of Security Audits and Assessments
+
+The completed audit reports are linked [here](https://github.com/ObolNetwork/obol-security/tree/main/audits).
+
+* A review of Obol Labs [development processes](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/sec/ev-assessment/README.md) by [Ethereal Ventures](https://www.etherealventures.com/).
+* A [security assessment](https://github.com/ObolNetwork/obol-security/blob/f9d7b0ad0bb8897f74ccb34cd4bd83012ad1d2b5/audits/Sigma_Prime_Obol_Network_Charon_Security_Assessment_Report_v2_1.pdf) of Charon by [Sigma Prime](https://sigmaprime.io/) resulting in version [`v0.16.0`](https://github.com/ObolNetwork/charon/releases/tag/v0.16.0).
+* A second [assessment of Charon](https://obol.tech/charon_quantstamp_assessment.pdf) by [QuantStamp](https://quantstamp.com/) resulting in version [`v0.19.1`](https://github.com/ObolNetwork/charon/releases/tag/v0.19.1).
+* A [solidity audit](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/sec/smart_contract_audit/README.md) of the Obol Splits contracts by [Zach Obront](https://zachobront.com/).
+* A [penetration testing certificate](https://github.com/ObolNetwork/obol-security/blob/main/audits/Sayfer_2024-03_Penetration_Testing_CFD.pdf) of the Obol DV Launchpad by [Sayfer](https://sayfer.io/).
+
+## Security focused documents
+
+* A [threat model](https://github.com/ObolNetwork/obol-docs/blob/main/versioned_docs/version-v1.0.0/sec/threat_model/README.md) for a DV middleware client like Charon.
+
+## Bug Bounty
+
+Information related to disclosing bugs and vulnerabilities to Obol can be found on [the next page](bug-bounty.md).
diff --git a/docs/versioned_docs/version-v1.0.0/sec/smart_contract_audit.md b/docs/versioned_docs/version-v1.0.0/sec/smart_contract_audit.md
new file mode 100644
index 0000000000..5f079f2997
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sec/smart_contract_audit.md
@@ -0,0 +1,477 @@
+---
+sidebar_position: 5
+description: Smart Contract Audit
+---
+
+# Smart Contract Audit
+
+| | |
+| --- | --- |
+|  | Obol Audit Report
Obol Manager Contracts
Prepared by: Zach Obront, Independent Security Researcher
Date: Sept 18 to 22, 2023 |
+
+## About **Obol**
+
+The Obol Network is an ecosystem for trust-minimized staking that enables people to create, test, run & coordinate distributed validators.
+
+The Obol Manager contracts are responsible for distributing validator rewards and withdrawals among the validator and node operators involved in a distributed validator.
+
+## About **zachobront**
+
+Zach Obront is an independent smart contract security researcher. He serves as a Lead Senior Watson at Sherlock, a Security Researcher at Spearbit, and has identified multiple critical severity bugs in the wild, including in a Top 5 Protocol on Immunefi. You can say hi on Twitter at [@zachobront](http://twitter.com/zachobront).
+
+## Summary & Scope
+
+The [ObolNetwork/obol-manager-contracts](https://github.com/ObolNetwork/obol-manager-contracts/) repository was audited at commit [50ce277919723c80b96f6353fa8d1f8facda6e0e](https://github.com/ObolNetwork/obol-manager-contracts/tree/50ce277919723c80b96f6353fa8d1f8facda6e0e).
+
+The following contracts were in scope:
+
+* src/controllers/ImmutableSplitController.sol
+* src/controllers/ImmutableSplitControllerFactory.sol
+* src/lido/LidoSplit.sol
+* src/lido/LidoSplitFactory.sol
+* src/owr/OptimisticWithdrawalReceiver.sol
+* src/owr/OptimisticWithdrawalReceiverFactory.sol
+
+After completion of the fixes, the [2f4f059bfd145f5f05d794948c918d65d222c3a9](https://github.com/ObolNetwork/obol-manager-contracts/tree/2f4f059bfd145f5f05d794948c918d65d222c3a9) commit was reviewed. After this review, the updated Lido fee share system in [PR #96](https://github.com/ObolNetwork/obol-manager-contracts/pull/96/files) (at commit [fd244a05f964617707b0a40ebb11b523bbd683b8](https://github.com/ObolNetwork/obol-splits/pull/96/commits/fd244a05f964617707b0a40ebb11b523bbd683b8)) was reviewed.
+
+## Summary of Findings
+
+| Identifier | Title | Severity | Fixed |
+| :-----------------------------------------------------------------------------------------------------------------------: | -------------------------------------------------------------------------------------- | :-----------: | :---: |
+| [M-01](smart_contract_audit.md#m-01-future-fees-may-be-skirted-by-setting-a-non-eth-reward-token) | Future fees may be skirted by setting a non-ETH reward token | Medium | ✓ |
+| [M-02](smart_contract_audit.md#m-02-splits-with-256-or-more-node-operators-will-not-be-able-to-switch-on-fees) | Splits with 256 or more node operators will not be able to switch on fees | Medium | ✓ |
+| [M-03](smart_contract_audit.md#m-03-in-a-mass-slashing-event-node-operators-are-incentivized-to-get-slashed) | In a mass slashing event, node operators are incentivized to get slashed | Medium | |
+| [L-01](smart_contract_audit.md#l-01-obol-fees-will-be-applied-retroactively-to-all-non-distributed-funds-in-the-splitter) | Obol fees will be applied retroactively to all non-distributed funds in the Splitter | Low | ✓ |
+| [L-02](smart_contract_audit.md#l-02-if-owr-is-used-with-rebase-tokens-and-theres-a-negative-rebase-principal-can-be-lost) | If OWR is used with rebase tokens and there's a negative rebase, principal can be lost | Low | ✓ |
+| [L-03](smart_contract_audit.md#l-03-lidosplit-can-receive-eth-which-will-be-locked-in-contract) | LidoSplit can receive ETH, which will be locked in contract | Low | ✓ |
+| [L-04](smart_contract_audit.md#l-04-upgrade-to-latest-version-of-solady-to-fix-libclone-bug) | Upgrade to latest version of Solady to fix LibClone bug | Low | ✓ |
+| [G-01](smart_contract_audit.md#g-01-steth-and-wsteth-addresses-can-be-saved-on-implementation-to-save-gas) | stETH and wstETH addresses can be saved on implementation to save gas | Gas | ✓ |
+| [G-02](smart_contract_audit.md#g-02-owr-can-be-simplified-and-save-gas-by-not-tracking-distributedfunds) | OWR can be simplified and save gas by not tracking distributedFunds | Gas | ✓ |
+| [I-01](smart_contract_audit.md#i-01-strong-trust-assumptions-between-validators-and-node-operators) | Strong trust assumptions between validators and node operators | Informational | |
+| [I-02](smart_contract_audit.md#i-02-provide-node-operator-checklist-to-validate-setup) | Provide node operator checklist to validate setup | Informational | |
+
+## Detailed Findings
+
+### \[M-01] Future fees may be skirted by setting a non-ETH reward token
+
+Fees are planned to be implemented on the `rewardRecipient` splitter by updating to a new fee structure using the `ImmutableSplitController`.
+
+It is assumed that all rewards will flow through the splitter, because (a) all distributed rewards less than 16 ETH are sent to the `rewardRecipient`, and (b) even if a team waited for rewards to be greater than 16 ETH, rewards sent to the `principalRecipient` are capped at the `amountOfPrincipalStake`.
+
+This creates a fairly strong guarantee that reward funds will flow to the `rewardRecipient`. Even if a user were to set their `amountOfPrincipalStake` high enough that the `principalRecipient` could receive unlimited funds, the Obol team could call `distributeFunds()` when the balance got near 16 ETH to ensure fees were paid.
+
+However, if the user selects a non-ETH token, all ETH will be withdrawable only through the `recoverFunds()` function. If they set up a split with their node operators as their `recoveryAddress`, all funds will be withdrawable via `recoverFunds()` without ever touching the `rewardRecipient` or paying a fee.
+
+#### Recommendation
+
+I would recommend removing the ability to use a non-ETH token from the `OptimisticWithdrawalRecipient`. Alternatively, if it feels like it may be a use case that is needed, it may make sense to always include ETH as a valid token, in addition to any `OWRToken` set.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[M-02] Splits with 256 or more node operators will not be able to switch on fees
+
+0xSplits is used to distribute rewards across node operators. All Splits are deployed with an ImmutableSplitController, which is given permissions to update the split one time to add a fee for Obol at a future date.
+
+The Factory deploys these controllers as Clones with Immutable Args, hard coding the `owner`, `accounts`, `percentAllocations`, and `distributorFee` for the future update. This data is packed as follows:
+
+```solidity
+ function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+ ) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
+ uint256[] memory recipients = new uint[](recipientsSize);
+
+ uint256 i = 0;
+ for (; i < recipientsSize;) {
+ recipients[i] = (uint256(percentAllocations[i]) << ADDRESS_BITS) | uint256(uint160(accounts[i]));
+
+ unchecked {
+ i++;
+ }
+ }
+
+ data = abi.encodePacked(splitMain, distributorFee, owner, uint8(recipientsSize), recipients);
+ }
+```
+
+In the process, `recipientsSize` is unsafely downcast into a `uint8`, which has a maximum value of `255`. As a result, any value of 256 or greater will overflow, and the lower value of `recipients.length % 256` will be passed as `recipientsSize`.
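+
+The truncation itself is easy to reproduce in isolation (a standalone illustration, not part of the audited contracts):
+
+```solidity
+// Standalone illustration of the downcast, not part of the audited contracts:
+// explicit downcasts in Solidity truncate silently, so a recipients list of
+// length 400 is recorded as 400 % 256 = 144.
+function downcastExample() public pure returns (uint8) {
+    uint256 recipientsSize = 400;
+    return uint8(recipientsSize); // 144
+}
+```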
+
+When the Controller is deployed, the full list of `percentAllocations` is passed to the `validSplit` check, which will pass as expected. However, later, when `updateSplit()` is called, the `getNewSplitConfiguration()` function will only return the first `recipientsSize` accounts, ignoring the rest.
+
+```solidity
+ function getNewSplitConfiguration()
+ public
+ pure
+ returns (address[] memory accounts, uint32[] memory percentAllocations)
+ {
+ // fetch the size first
+ // then parse the data gradually
+ uint256 size = _recipientsSize();
+ accounts = new address[](size);
+ percentAllocations = new uint32[](size);
+
+ uint256 i = 0;
+ for (; i < size;) {
+ uint256 recipient = _getRecipient(i);
+ accounts[i] = address(uint160(recipient));
+ percentAllocations[i] = uint32(recipient >> ADDRESS_BITS);
+ unchecked {
+ i++;
+ }
+ }
+ }
+```
+
+When `updateSplit()` is eventually called on `splitsMain` to turn on fees, the `validSplit()` check on that contract will revert because the sum of the percent allocations will no longer sum to `1e6`, and the update will not be possible.
+
+#### Proof of Concept
+
+The following test can be dropped into a file in `src/test` to demonstrate that passing 400 accounts will result in a `recipientSize` of `400 - 256 = 144`:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+import { Test } from "forge-std/Test.sol";
+import { console } from "forge-std/console.sol";
+import { ImmutableSplitControllerFactory } from "src/controllers/ImmutableSplitControllerFactory.sol";
+import { ImmutableSplitController } from "src/controllers/ImmutableSplitController.sol";
+
+interface ISplitsMain {
+ function createSplit(address[] calldata accounts, uint32[] calldata percentAllocations, uint32 distributorFee, address controller) external returns (address);
+}
+
+contract ZachTest is Test {
+ function testZach_RecipientSizeCappedAt256Accounts() public {
+ vm.createSelectFork("https://mainnet.infura.io/v3/fb419f740b7e401bad5bec77d0d285a5");
+
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](400);
+ uint32[] memory bigPercentAllocations = new uint32[](400);
+
+ for (uint i = 0; i < 400; i++) {
+ bigAccounts[i] = address(uint160(i));
+ bigPercentAllocations[i] = 2500;
+ }
+
+ // confirmation that 0xSplits will allow creating a split with this many accounts
+ // dummy acct passed as controller, but doesn't matter for these purposes
+ address split = ISplitsMain(0x2ed6c4B5dA6378c7897AC67Ba9e43102Feb694EE).createSplit(bigAccounts, bigPercentAllocations, 0, address(8888));
+
+ ImmutableSplitController controller = factory.createController(split, owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+
+ // added a public function to controller to read recipient size directly
+ uint savedRecipientSize = controller.ZachTest__recipientSize();
+ assert(savedRecipientSize < 400);
+ console.log(savedRecipientSize); // 144
+ }
+}
+```
+
+#### Recommendation
+
+When packing the data in `_packSplitControllerData()`, check `recipientsSize` before downcasting to a uint8:
+
+```diff
+function _packSplitControllerData(
+ address owner,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+) internal view returns (bytes memory data) {
+ uint256 recipientsSize = accounts.length;
++ if (recipientsSize > 255) revert InvalidSplit__TooManyAccounts(recipientsSize);
+ ...
+}
+```
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[M-03] In a mass slashing event, node operators are incentivized to get slashed
+
+When the `OptimisticWithdrawalRecipient` receives funds from the beacon chain, it uses the following rule to determine the allocation:
+
+> If the amount of funds to be distributed is greater than or equal to 16 ether, it is assumed that it is a withdrawal (to be returned to the principal, with a cap on principal withdrawals of the total amount they deposited).
+
+> Otherwise, it is assumed that the funds are rewards.
+
+This value being as low as 16 ether protects against any predictable attack the node operator could perform. For example, due to the effect of hysteresis in updating effective balances, it does not seem to be possible for node operators to predictably bleed a withdrawal down to be below 16 ether (even if they timed a slashing perfectly).
+
+However, in the event of a mass slashing event, slashing punishments can be much more severe than they otherwise would be. To calculate the size of a slash, we:
+
+* take the total percentage of validator stake slashed in the 18 days preceding and following a user's slash
+* multiply this percentage by 3 (capped at 100%)
+* the full slashing penalty for a given validator equals 1/32 of their stake, plus the resulting percentage above applied to the remaining 31/32 of their stake
+
+In order for such penalties to bring the withdrawal balance below 16 ether (assuming a full 32 ether to start), we would need the percentage taken to be greater than 15 / 31 ≈ 48.4%, which implies that roughly 48.4 / 3 ≈ 16.1% of validators would need to be slashed.
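+
+As a sketch of the arithmetic, writing $p$ for the fraction of total stake slashed in the surrounding 36-day window and assuming a full 32 ETH starting balance:
+
+$$
+\text{penalty} = \underbrace{\tfrac{1}{32} \cdot 32}_{1\ \text{ETH}} + \min(3p,\,1) \cdot \tfrac{31}{32} \cdot 32, \qquad 32 - \text{penalty} < 16 \iff 3p > \tfrac{15}{31} \approx 48.4\% \iff p > \tfrac{5}{31} \approx 16.1\%.
+$$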
+
+Because the measurement is taken from the 18 days before and after the incident, node operators would have the opportunity to see a mass slashing event unfold, and later decide that they would like to be slashed along with it.
+
+In the event that they observed that greater than 16.1% of validators were slashed, Obol node operators would be able to get themselves slashed, be exited with a withdrawal of less than 16 ether, and claim that withdrawal as rewards, effectively stealing from the principal recipient.
+
+#### Recommendations
+
+Find a solution that provides a higher level of guarantee that the funds withdrawn are actually rewards, and not a withdrawal.
+
+#### Review
+
+Acknowledged. We believe this is a black swan event. It would require a major ETH client to be compromised, and would be a betrayal of trust, so likely not EV+ for doxxed operators. Users of this contract with unknown operators should be wary of such a risk.
+
+### \[L-01] Obol fees will be applied retroactively to all non-distributed funds in the Splitter
+
+When Obol decides to turn on fees, a call will be made to `ImmutableSplitController::updateSplit()`, which will take the predefined split parameters (the original user specified split with Obol's fees added in) and call `updateSplit()` to implement the change.
+
+```solidity
+function updateSplit() external payable {
+ if (msg.sender != owner()) revert Unauthorized();
+
+ (address[] memory accounts, uint32[] memory percentAllocations) = getNewSplitConfiguration();
+
+ ISplitMain(splitMain()).updateSplit(split, accounts, percentAllocations, uint32(distributorFee()));
+}
+```
+
+If we look at the code on `SplitsMain`, we can see that this `updateSplit()` function is applied retroactively to all funds that are already in the split, because it updates the parameters without performing a distribution first:
+
+```solidity
+function updateSplit(
+ address split,
+ address[] calldata accounts,
+ uint32[] calldata percentAllocations,
+ uint32 distributorFee
+)
+ external
+ override
+ onlySplitController(split)
+ validSplit(accounts, percentAllocations, distributorFee)
+{
+ _updateSplit(split, accounts, percentAllocations, distributorFee);
+}
+```
+
+This means that any funds that have been sent to the split but have not yet been distributed will be subject to the Obol fee. Since these splitters will be accumulating all execution layer fees, it is possible that some of them may have received large MEV bribes, where this after-the-fact fee could be quite expensive.
+
+#### Recommendation
+
+The most strict solution would be for the `ImmutableSplitController` to store both the old split parameters and the new parameters. The old parameters could first be used to call `distributeETH()` on the split, and then `updateSplit()` could be called with the new parameters.
+
+If storing both sets of values seems too complex, the alternative would be to require that `split.balance <= 1` to update the split. Then the Obol team could simply store the old parameters off chain to call `distributeETH()` on each split to "unlock" it to update the fees.
+
+(Note that for the second solution, the ETH balance should be less than or equal to 1, not 0, because 0xSplits stores empty balances as `1` for gas savings.)
+
+#### Review
+
+Fixed as recommended in [PR 86](https://github.com/ObolNetwork/obol-manager-contracts/pull/86).
+
+### \[L-02] If OWR is used with rebase tokens and there's a negative rebase, principal can be lost
+
+The `OptimisticWithdrawalRecipient` is deployed with a specific token immutably set on the clone. It is presumed that that token will usually be ETH, but it can also be an ERC20 to account for future integrations with tokenized versions of ETH.
+
+In the event that one of these integrations used a rebasing version of ETH (like `stETH`), the architecture would need to be set up as follows:
+
+`OptimisticWithdrawalRecipient => rewards to something like LidoSplit.sol => Split Wallet`
+
+In this case, the OWR would need to be able to handle rebasing tokens.
+
+In the event that rebasing tokens are used, there is the risk that slashing or inactivity leads to a period with a negative rebase. In this case, the following chain of events could happen:
+
+* `distribute(PULL)` is called, setting `fundsPendingWithdrawal == balance`
+* rebasing causes the balance to decrease slightly
+* `distribute(PULL)` is called again, so when `fundsToBeDistributed = balance - fundsPendingWithdrawal` is calculated in an unchecked block, it ends up being near `type(uint256).max`
+* since this is more than `16 ether`, the first `amountOfPrincipalStake - _claimedPrincipalFunds` will be allocated to the principal recipient, and the rest to the reward recipient
+* we check that `endingDistributedFunds <= type(uint128).max`, but unfortunately this check misses the issue, because only `fundsToBeDistributed` underflows, not `endingDistributedFunds`
+* `_claimedPrincipalFunds` is set to `amountOfPrincipalStake`, so all future claims will go to the reward recipient
+* the `pullBalances` for both recipients will be set higher than the balance of the contract, and so will be unusable
+
+In this situation, the only way for the principal to get their funds back would be for the full `amountOfPrincipalStake` to hit the contract at once, and for them to call `withdraw()` before anyone called `distribute(PUSH)`. If anyone was to be able to call `distribute(PUSH)` before them, all principal would be sent to the reward recipient instead.
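+
+The core of the problem is the unchecked subtraction. The following standalone sketch (not the OWR code itself) shows how it wraps around:
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+/// Standalone illustration, not the OWR code: an unchecked subtraction wraps
+/// around when a negative rebase leaves the balance below the amount already
+/// recorded as pending withdrawal.
+contract UncheckedUnderflowSketch {
+    function fundsToBeDistributed(uint256 balance, uint256 fundsPendingWithdrawal)
+        external
+        pure
+        returns (uint256 result)
+    {
+        unchecked {
+            // e.g. balance = 31.9 ether, fundsPendingWithdrawal = 32 ether:
+            // the result is close to type(uint256).max, far above the 16 ether
+            // threshold used to classify a distribution as a principal withdrawal.
+            result = balance - fundsPendingWithdrawal;
+        }
+    }
+}
+```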
+
+#### Recommendation
+
+Similar to #74, I would recommend removing the ability for the `OptimisticWithdrawalRecipient` to accept non-ETH tokens.
+
+Otherwise, I would recommend two changes for redundant safety:
+
+1. Do not allow the OWR to be used with rebasing tokens.
+2. Move the `_fundsToBeDistributed = _endingDistributedFunds - _startingDistributedFunds;` out of the unchecked block. The case where `_endingDistributedFunds` underflows is already handled by a later check, so this one change should be sufficient to prevent any risk of this issue.
+
+#### Review
+
+Fixed in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85) by removing the ability to use non-ETH tokens.
+
+### \[L-03] LidoSplit can receive ETH, which will be locked in contract
+
+Each new `LidoSplit` is deployed as a clone, which comes with a `receive()` function for receiving ETH.
+
+However, the only function on `LidoSplit` is `distribute()`, which converts `stETH` to `wstETH` and transfers it to the `splitWallet`.
+
+While this contract should only be used for Lido to pay out rewards (which will come in `stETH`), it seems possible that users may accidentally use the same contract to receive other validator rewards (in ETH), or that Lido governance may introduce ETH payments in the future, which would cause the funds to be locked.
+
+#### Proof of Concept
+
+The following test can be dropped into `LidoSplit.t.sol` to confirm that the clones can currently receive ETH:
+
+```solidity
+function testZach_CanReceiveEth() public {
+ uint before = address(lidoSplit).balance;
+ payable(address(lidoSplit)).transfer(1 ether);
+ assertEq(address(lidoSplit).balance, before + 1 ether);
+}
+```
+
+#### Recommendation
+
+Introduce an additional function to `LidoSplit.sol` which wraps ETH into stETH before calling `distribute()`, in order to rescue any ETH accidentally sent to the contract.
+
+#### Review
+
+Fixed in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87/files) by adding a `rescueFunds()` function that can send ETH or any ERC20 (except `stETH` or `wstETH`) to the `splitWallet`.
+
+### \[L-04] Upgrade to latest version of Solady to fix LibClone bug
+
+In the recent [Solady audit](https://github.com/Vectorized/solady/blob/main/audits/cantina-solady-report.pdf), an issue was found that affects LibClone.
+
+In short, LibClone assumes that the length of the immutable arguments on the clone will fit in 2 bytes. If it's larger, it overlaps other op codes and can lead to strange behaviors, including causing the deployment to fail or causing the deployment to succeed with no resulting bytecode.
+
+Because the `ImmutableSplitControllerFactory` allows the user to input arrays of any length that will be encoded as immutable arguments on the Clone, we can manipulate the length to accomplish these goals.
+
+Fortunately, failed deployments or empty bytecode (which causes a revert when `init()` is called) are not problems in this case, as the transactions will fail, and it can only happen with unrealistically long arrays that would only be used by malicious users.
+
+However, it is difficult to be sure how else this risk might be exploited by using the overflow to jump to later op codes, and it is recommended to update to a newer version of Solady where the issue has been resolved.
+
+#### Proof of Concept
+
+If we comment out the `init()` call in the `createController()` call, we can see that the following test "successfully" deploys the controller, but the result is that there is no bytecode:
+
+```solidity
+function testZach__CreateControllerSoladyBug() public {
+ ImmutableSplitControllerFactory factory = new ImmutableSplitControllerFactory(address(9999));
+ bytes32 deploymentSalt = keccak256(abi.encodePacked(uint256(1102)));
+ address owner = address(this);
+
+ address[] memory bigAccounts = new address[](28672);
+ uint32[] memory bigPercentAllocations = new uint32[](28672);
+
+ for (uint i = 0; i < 28672; i++) {
+ bigAccounts[i] = address(uint160(i));
+ if (i < 32) bigPercentAllocations[i] = 820;
+ else bigPercentAllocations[i] = 34;
+ }
+
+ ImmutableSplitController controller = factory.createController(address(8888), owner, bigAccounts, bigPercentAllocations, 0, deploymentSalt);
+ assert(address(controller) != address(0));
+ assert(address(controller).code.length == 0);
+}
+```
+
+#### Recommendation
+
+Delete Solady and clone it from the most recent commit, or any commit after the fixes from [PR #548](https://github.com/Vectorized/solady/pull/548/files#diff-27a3ba4730de4b778ecba4697ab7dfb9b4f30f9e3666d1e5665b194fe6c9ae45) were merged.
+
+#### Review
+
+Solady has been updated to v.0.0.123 in [PR 88](https://github.com/ObolNetwork/obol-manager-contracts/pull/88).
+
+### \[G-01] stETH and wstETH addresses can be saved on implementation to save gas
+
+The `LidoSplitFactory` contract holds two immutable values for the addresses of the `stETH` and `wstETH` tokens.
+
+When new clones are deployed, these values are encoded as immutable args. This adds the values to the contract code of the clone, so that each time a call is made, they are passed as calldata along to the implementation, which reads the values from the calldata for use.
+
+Since these values will be consistent across all clones on the same chain, it would be more gas efficient to store them in the implementation directly, which can be done with `immutable` storage values, set in the constructor.
+
+This would save 40 bytes of calldata on each call to the clone, which leads to a savings of approximately 640 gas on each call.
+
+#### Recommendation
+
+1. Add the following to `LidoSplit.sol`:
+
+```solidity
+address immutable public stETH;
+address immutable public wstETH;
+```
+
+2. Add a constructor to `LidoSplit.sol` which sets these immutable values. Solidity treats immutable values as constants and stores them directly in the contract bytecode, so they will be accessible from the clones.
+3. Remove `stETH` and `wstETH` from `LidoSplitFactory.sol` as storage values, constructor arguments, and arguments to `clone()`.
+4. Adjust the `distribute()` function in `LidoSplit.sol` to read the storage values for these two addresses, and remove the helper functions to read the clone's immutable arguments for these two values.
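+
+A minimal sketch of steps 1 and 2 (illustrative only, not the actual `LidoSplit` code):
+
+```solidity
+// SPDX-License-Identifier: MIT
+pragma solidity ^0.8.0;
+
+// Illustrative sketch of steps 1 and 2, not the actual LidoSplit code:
+// immutable values are embedded as constants in the implementation bytecode,
+// so clones delegatecalling into it read the same addresses without the
+// 40 bytes of immutable-args calldata being appended on every call.
+contract LidoSplitSketch {
+    address public immutable stETH;
+    address public immutable wstETH;
+
+    constructor(address _stETH, address _wstETH) {
+        stETH = _stETH;
+        wstETH = _wstETH;
+    }
+}
+```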
+
+#### Review
+
+Fixed as recommended in [PR 87](https://github.com/ObolNetwork/obol-manager-contracts/pull/87).
+
+### \[G-02] OWR can be simplified and save gas by not tracking distributedFunds
+
+Currently, the `OptimisticWithdrawalRecipient` contract tracks four variables:
+
+* distributedFunds: total amount of the token distributed via push or pull
+* fundsPendingWithdrawal: total balance distributed via pull that haven't been claimed yet
+* claimedPrincipalFunds: total amount of funds claimed by the principal recipient
+* pullBalances: individual pull balances that haven't been claimed yet
+
+When `_distributeFunds()` is called, we perform the following math (simplified to only include relevant updates):
+
+```solidity
+endingDistributedFunds = distributedFunds - fundsPendingWithdrawal + currentBalance;
+fundsToBeDistributed = endingDistributedFunds - distributedFunds;
+distributedFunds = endingDistributedFunds;
+```
+
+As we can see, `distributedFunds` is added to the `endingDistributedFunds` variable and then removed when calculating `fundsToBeDistributed`, having no impact on the resulting `fundsToBeDistributed` value.
+
+The `distributedFunds` variable is not read or used anywhere else on the contract.
+
+#### Recommendation
+
+We can simplify the math and save substantial gas (a storage write plus additional operations) by not tracking this value at all.
+
+This would allow us to calculate `fundsToBeDistributed` directly, as follows:
+
+```solidity
+fundsToBeDistributed = currentBalance - fundsPendingWithdrawal;
+```
+
+#### Review
+
+Fixed as recommended in [PR 85](https://github.com/ObolNetwork/obol-manager-contracts/pull/85).
+
+### \[I-01] Strong trust assumptions between validators and node operators
+
+It is assumed that validators and node operators will always act in the best interest of the group, rather than in their selfish best interest.
+
+It is important to make clear to users that there are strong trust assumptions between the various parties involved in the DVT.
+
+Here are a select few examples of attacks that a malicious set of node operators could perform:
+
+1. Since there is currently no mechanism for withdrawals besides the consensus of the node operators, a minority of them sufficient to withhold consensus could blackmail the principal for a payment of up to 16 ether in order to allow them to withdraw. Otherwise, they could turn off their nodes and force the principal to bleed down to a final withdrawn balance of just over 16 ether.
+2. Node operators are all able to propose blocks within the P2P network, which are then propagated out to the rest of the network. Node software is accustomed to signing for blocks built by block builders based on metadata that includes the quantity of fees and the address they’ll be sent to. This is enforced by social consensus, with block builders not wanting to harm validators in order to have their blocks accepted in the future. However, node operators in a DVT are not concerned with the social consensus of the network, and could therefore build blocks that include large MEV payments to their personal address (instead of the DVT’s 0xSplit), add fictitious metadata to the block header, have their fellow node operators accept the block, and take the MEV for themselves.
+3. While the withdrawal address is immutably set on the beacon chain to the OWR, the fee address is added by the nodes to each block. Any majority of node operators sufficient to reach consensus could create a new 0xSplit with only themselves on it, and use that for all execution layer fees. The principal (and other node operators) would not be able to stop them or withdraw their principal, and would be stuck with staked funds paying fees to the malicious node operators.
+
+Note that there are likely many other possible attacks that malicious node operators could perform. This report is intended to demonstrate some examples of the trust level that is needed between validators and node operators, and to emphasize the importance of making these assumptions clear to users.
+
+#### Review
+
+Acknowledged. We believe EIP-7002 will reduce this trust assumption, as it would enable validator exits to be triggered via the execution layer withdrawal address.
+
+### \[I-02] Provide node operator checklist to validate setup
+
+There are a number of ways that the user setting up the DVT could plant backdoors to harm the other users involved in the DVT.
+
+Each of these risks is possible to check before signing off on the setup, but some are rather hidden, so it would be useful for the protocol to provide a list of checks that node operators should do before signing off on the setup parameters (or, even better, provide these checks for them through the front end).
+
+1. Confirm that `SplitsMain.getHash(split)` matches the hash of the parameters that the user is expecting to be used.
+2. Confirm that the controller clone delegates to the correct implementation. If not, it could be pointed to delegate to `SplitMain` and then called to `transferControl()` to a user's own address, allowing them to update the split arbitrarily.
+3. `OptimisticWithdrawalRecipient.getTranches()` should be called to check that `amountOfPrincipalStake` is equal to the amount that they will actually be providing.
+4. The controller's `owner` and future split including Obol fees should be provided to the user. They should be able to check that `ImmutableSplitControllerFactory.predictSplitControllerAddress()`, with those parameters inputted, results in the controller that is actually listed on `SplitsMain.getController(split)`.
+
+#### Review
+
+Acknowledged. We do some of these already (will add the remainder) automatically in the launchpad UI during the cluster confirmation phase by the node operator. We will also add it in markdown to the repo.
diff --git a/docs/versioned_docs/version-v1.0.0/sec/threat_model.md b/docs/versioned_docs/version-v1.0.0/sec/threat_model.md
new file mode 100644
index 0000000000..f739b1dfa3
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/sec/threat_model.md
@@ -0,0 +1,155 @@
+---
+sidebar_position: 6
+description: Threat model for a Distributed Validator
+---
+
+# Charon threat model
+
+This page outlines a threat model for Charon, in the context of it being a Distributed Validator middleware for Ethereum validator clients.
+
+## Actors
+
+- Node owner (NO)
+- Cluster node operators (CNO)
+- Rogue node operator (RNO)
+- Outside attacker (OA)
+
+## General observations
+
+This page describes some considerations the Obol core team made about the security of a distributed validator in the context of its deployment and interaction with outside actors.
+
+The goal of this threat model is to provide transparency, but it is by no means a comprehensive audit or complete security reference. It’s a sharing of the experiences and thoughts we gained during the last few years building distributed validator technologies.
+
+To the Beacon Chain, a distributed validator looks much the same as a regular validator, and so it retains some of the same security considerations. Charon’s threat model, however, is different from a validator client’s threat model because of its general design.
+
+While a validator client owns and operates on a set of validator private keys, the design of Charon allows its node operators to rarely (if ever) see the complete validator private keys, relying instead on modern cryptography to generate partial private key shares.
+
+An Ethereum distributed validator employs advanced signature primitives such that no operator ever handles the full validator private key in any standard lifecycle step: the [BLS digital signature scheme](https://en.wikipedia.org/wiki/BLS_digital_signature) employed by the Ethereum network allows distributed validators to individually sign a blob of data and then aggregate the resulting signatures in a transparent manner, never requiring any of the participating parties to know the full private key to do so.
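+
+As a sketch of the standard threshold BLS construction (not a description of Charon’s exact implementation): each operator $i$ holds a key share $sk_i$, produces a partial signature on a message $m$ on its own, and any threshold subset $S$ of partial signatures can be combined into the full group signature without the full private key ever being assembled:
+
+$$
+\sigma_i = sk_i \cdot H(m), \qquad \sigma = \sum_{i \in S} \lambda_i^{S}\,\sigma_i, \qquad \lambda_i^{S} = \prod_{j \in S,\ j \neq i} \frac{j}{j - i},
+$$
+
+where $H(m)$ is the message hashed to a curve point and $\lambda_i^{S}$ are the Lagrange coefficients at zero; the aggregate $\sigma$ verifies under the group public key exactly like a single-signer BLS signature.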
+
+If the number of available Charon nodes falls below a given threshold, the cluster is not able to continue with its duties.
+
+Given the collaborative nature of a Distributed Validator cluster, every operator must prioritize the liveness and well-being of the cluster. At the time of writing, Charon cannot reward or penalize individual operators within a cluster.
+
+This implies that Charon’s threat model can’t quite be equated to that of a single validator client, since they work on a different - albeit similar - set of security concepts.
+
+## Identity private key
+
+A distributed validator cluster is made up of a number of nodes, often run by a number of independent operators. Each DV cluster has a set of Ethereum validator private keys on behalf of which it validates.
+
+Alongside those, each node (henceforth ‘operator’) holds a secp256k1 identity private key, whose public key is advertised in an ENR (Ethereum Node Record) that identifies the node to the other cluster operators’ nodes.
+
+Exfiltration of this private key could allow an outside attacker to impersonate the node, possibly leading to intra-cluster peering issues, eclipse attack risks, and degraded validator performance.
+
+Charon client communication is handled via BFT consensus, which is able to tolerate a given number of misbehaving nodes up to a certain threshold: once this threshold is reached, the cluster is not able to continue with its lifecycle and loses liveness guarantees (the validator goes offline). If more than two-thirds of nodes in a cluster are malicious, a cluster also loses safety guarantees (enough bad actors could collude to come to consensus on something slashable).
+
+Identity private key theft and the subsequent execution of a rogue cluster node is, in the context of BFT consensus, equivalent to a misbehaving node; hence the cluster can survive and continue with its duties up to the fault tolerance specified by the cluster’s BFT protocol parameters.
+
+The likelihood of this happening is low: an OA with enough knowledge of the topology of the operator’s network must steal `fault tolerance of the cluster + 1` identity private keys and run Charon nodes to subvert the distributed validator BFT consensus to push the validator offline.
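+
+As a rough illustration, assuming the standard BFT bound of $n = 3f + 1$ (the exact parameters depend on the cluster configuration):
+
+$$
+f = \left\lfloor \frac{n-1}{3} \right\rfloor, \qquad n = 4 \Rightarrow f = 1, \qquad n = 7 \Rightarrow f = 2,
+$$
+
+so halting a typical 4-node cluster in this way requires stealing at least $f + 1 = 2$ identity private keys and running rogue nodes with them.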
+
+## Ethereum validator private key access
+
+A distributed validator cluster executes Ethereum validator duties by acting as a middleman between the beacon chain and a validator client.
+
+To do so, the cluster must have knowledge of the Ethereum validator’s private key.
+
+The design and implementation of Charon minimizes the chances of this by splitting the Ethereum validator private keys into parts, which are then assigned to each node operator.
+A [distributed key generation](https://en.wikipedia.org/wiki/Distributed_key_generation) (DKG) process is used in order to evenly and safely create the private key shares without any central party having access to the full private key.
+
+The cryptographic primitives employed in Charon allow a threshold of the node operators’ private key shares to be reconstructed into the whole validator private key if needed.
+
+While the facilities to do this are present in the form of CLI commands, as stated before, Charon never reconstructs the key in normal operations, since the BLS digital signature scheme allows for signature aggregation.
+
+A distributed validator cluster can be started in two ways:
+
+1. An existing Ethereum validator private key is split by the private key holder, and distributed in a trusted manner among the operators.
+2. The operators participate in a distributed key generation (DKG) process, to create private key shares that collectively can be used to sign validation duties as an Ethereum distributed validator. The full private key for the cluster never exists in one place during or after the DKG.
+
+In case 1, one of the node operators, K, has direct access to the Ethereum validator key and is tasked with generating the other operators’ identity keys and key shares.
+
+In this case, the entirety of the sensitive material is only as secure as K’s environment: if K is compromised or malicious, the distributed validator could be slashed.
+
+Case 2 is different because there’s no pre-existing Ethereum validator key in any single operator’s hands: the key material is generated collaboratively using the FROST DKG algorithm.
+
+Assuming a successful DKG process, each operator will only ever handle its own key shares instead of the full Ethereum validator private key.
+
+A set of rogue operators large enough to reconstruct the original Ethereum validator private keys could put a distributed validator at risk of slashing by colluding to produce slashable messages.
+
+We deem this scenario’s likelihood to be low, as it would mean node operators willfully slashing the very stake they are being rewarded to operate.
+
+Still, in the context of an outside attack, purposefully slashing a validator would mean stealing multiple operator key shares, which in turn means violating many cluster operators’ security at almost the same time. This scenario could occur if there were a 0-day vulnerability in a piece of software they all run, or through widespread node misconfiguration.
+
+## Rogue node operator
+
+Nodes are connected either by means of relay nodes or directly to one another.
+
+Each node operator is at risk of being impeded by other nodes or by the relay operator in the execution of their duties.
+
+Nodes need to expose a set of TCP ports to operate, and doing so opens up the opportunity for rogue parties to mount DDoS attacks.
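+
+A minimal hardening sketch, assuming a Linux host with `ufw`; the port number is an example to be checked against whichever port your compose file publishes for Charon’s libp2p traffic, while the validator API, beacon node and monitoring ports should stay on the internal Docker network:
+
+```
+# Set this to the TCP port your docker-compose.yml publishes for Charon's p2p traffic.
+CHARON_P2P_TCP_PORT=3610   # example value; confirm against your own configuration
+
+# Keep SSH reachable, drop everything else inbound, then expose only the Charon p2p port.
+sudo ufw allow ssh
+sudo ufw default deny incoming
+sudo ufw allow "${CHARON_P2P_TCP_PORT}/tcp"
+sudo ufw enable
+```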
+
+Another attack surface for the cluster lies in rogue nodes purposefully filling the various internal state databases with meaningless data or, more generally, submitting bogus information to the other parties to slow down processing or, in the case of a sybil attack, to bring the cluster to a halt.
+
+The likelihood of this scenario is medium, because no active intrusion is required: a rogue node operator does not need to penetrate and compromise other nodes to disturb the cluster’s lifecycle.
+
+## Outside attackers interfering with a cluster
+
+An OA can operate at two levels of sophistication:
+
+1. No knowledge of the cluster topology: the attacker doesn’t know where each cluster node is located, and so can’t force `fault tolerance of the cluster + 1` nodes offline because it can’t find them.
+2. Knowledge of the network topology (or part of it): the OA can mount DDoS attacks or try breaking into nodes’ servers; at that point, the “rogue node operator” scenario applies.
+
+The likelihood of this scenario is low: an OA needs extensive capabilities and sufficient incentive to be able to carry out an attack of this size.
+
+An outside attacker could also find and exploit vulnerabilities in the underlying cryptosystems and cryptography libraries used by Charon and other Ethereum clients. Forging signatures that fool Charon’s cryptographic library or other dependencies may be feasible, but we deem forging signatures, or otherwise finding a vulnerability, in either the SECP256K1+ECDSA or the BLS12-381+BLS cryptosystem to be a low-likelihood risk.
+
+## Malicious beacon nodes
+
+A malicious beacon node (BN) could prevent the distributed validator from operating its validation duties, and could plausibly increase the likelihood of slashing by serving Charon illegitimate information.
+
+If the number of nodes configured with a malicious BN reaches the byzantine threshold of the Charon BFT consensus protocol, the validation process can halt; worse, if most of the nodes are byzantine, the system can reach consensus on a set of data that isn’t valid.
+
+We deem the likelihood of this scenario to be medium, depending on the trust model associated with the BN deployment (cloud, self-hosted, SaaS product): node operators should always host, or at least trust, their own beacon nodes.
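+
+In configuration terms this simply means pointing Charon only at beacon nodes you run yourself; a sketch, assuming the `--beacon-node-endpoints` flag used elsewhere in these docs (expressed as `CHARON_BEACON_NODE_ENDPOINTS` in a `.env` file) accepts a comma-separated list, with example hostnames:
+
+```
+# Comma-separated list of self-hosted beacon nodes, rather than a third-party endpoint
+# outside the operator's control.
+CHARON_BEACON_NODE_ENDPOINTS=http://lighthouse:5052,http://nimbus:5052
+```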
+
+## Malicious Charon relays
+
+A Charon relay is used as a communication bridge between nodes that aren’t directly exposed on the Internet. It also acts as the peer discovery mechanism for a cluster.
+
+Once a peer’s IP address has been discovered via the relay, a direct connection can be attempted. Nodes can either communicate by exchanging data through a relay, or by using the relay as a means to establish a direct TCP connection to one another.
+
+A malicious relay owned by an OA could lead to:
+
+- Network topology discovery, facilitating the “outside attackers interfering with a cluster” scenario
+- Impeding node communication, potentially impacting the BFT consensus protocol liveness (not security) and distributed validator duties
+- DKG process disruption, leading to frustration and potential abandonment by node operators: this could push them towards a standard (non-distributed) Ethereum validator setup, which implies weaker security overall
+
+We note that BFT consensus liveness can only be disrupted if the number of nodes relying on the malicious relay for communication reaches the byzantine node count defined in the consensus parameters.
+
+This risk can be mitigated by configuring nodes with multiple relay URLs from only [trusted entities](../advanced/self-relay.md).
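+
+A sketch of that mitigation, assuming relays are configured the way the example repositories do it (a comma-separated `CHARON_P2P_RELAYS` value, equivalent to the `--p2p-relays` flag); the URLs below are placeholders for relays run by entities your cluster actually trusts:
+
+```
+# More than one relay, each operated by a different trusted party, so that a single
+# malicious or compromised relay cannot isolate the node or map the whole cluster.
+CHARON_P2P_RELAYS=https://relay.trusted-entity-1.example,https://relay.trusted-entity-2.example
+```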
+
+The likelihood of this scenario is medium: Charon nodes are configured with a default set of relay nodes, so if an OA were to compromise those, it would lead to many cluster topologies getting discovered and potentially attacked and disrupted.
+
+## Compromised runtime files
+
+Charon operates with two runtime files:
+
+- A lock file, used to address the operators’ nodes and to define the Ethereum validator public keys and the public key shares associated with them
+- A cluster definition file, used to define the operators’ addresses and identities during the DKG process
+
+The lock file is signed and validated by all the nodes participating in the cluster: assuming good security practices on the node operator side, and no bugs in Charon or its dependencies’ implementations, this scenario is unlikely.
+
+If one or more node operators are using less-than-ideal security practices, an OA could alter the Charon CLI flags to include the `--no-verify` flag, which disables lock file signature and hash verification (intended only for development purposes).
+
+By doing that, the OA can edit the lock file as it sees fit, leading to the “rogue node operator” scenario. An OA or rogue node operator might also manage to socially engineer other operators into running a malicious lock file with verification disabled.
+
+The likelihood of this scenario is low: an OA would need to socially engineer every node operator into both using a different set of files and running their node with `--no-verify`.
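+
+A quick way for an operator to audit their own node for this is simply to search the runtime configuration for the flag; the file names below are the ones used in the example repositories and may differ in your setup:
+
+```
+# No output means `--no-verify` is not set anywhere in the runtime configuration.
+grep -R --line-number "no-verify" .env docker-compose.yml docker-compose.override.yml 2>/dev/null
+```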
+
+## Conclusions
+
+Distributed Validator Technology (DVT) helps maintain a high-assurance environment for Ethereum validators by leveraging modern cryptography to ensure no single point of failure is easily found in the system.
+
+As with any computing system, careful security consideration is required to keep the environment safe.
+
+From the point of view of an Ethereum validator entity, running their services with a DV client can help greatly with availability, minimizing slashing risks, and maximizing participation in the network.
+
+On the other hand, one must take into consideration the risks involved with dishonest cluster operators, as well as rogue third-party beacon nodes or relay providers.
+
+In the end, we believe the benefits of DVT greatly outweigh the potential threats described in this overview.
diff --git a/docs/versioned_docs/version-v1.0.0/start/README.md b/docs/versioned_docs/version-v1.0.0/start/README.md
new file mode 100644
index 0000000000..9952b96485
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/start/README.md
@@ -0,0 +1,2 @@
+# start
+
diff --git a/docs/versioned_docs/version-v1.0.0/start/activate-dv.md b/docs/versioned_docs/version-v1.0.0/start/activate-dv.md
new file mode 100644
index 0000000000..af34d35fed
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/start/activate-dv.md
@@ -0,0 +1,37 @@
+---
+sidebar_position: 5
+description: Activate the Distributed Validator using the deposit contract
+---
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+
+# Activate a DV
+
+If you have successfully created a distributed validator and you are ready to activate it, congratulations! 🎉
+
+Once you have connected all of your Charon clients together and synced all of your Ethereum nodes, such that the monitoring indicates that they are all healthy and ready to operate, **ONE operator** may proceed to deposit and activate the validator(s).
+
+The `deposit-data.json` file to be used for the deposit is located in each operator's `.charon` folder. The copies on every node should be identical, and any of them can be uploaded.
+
+:::warning
+If you are being given a `deposit-data.json` file that you didn't generate yourself, please take extreme care to ensure this operator has not given you a malicious `deposit-data.json` file that is not the one you expect. Cross reference the files from multiple operators if there is any doubt. Activating the wrong validator or an invalid deposit could result in complete theft or loss of funds.
+:::
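+
+One low-effort way to cross-reference, assuming each operator has shell access to their node, is to compare a checksum of the file out-of-band before anyone deposits (adjust the path if your file is named differently):
+
+```
+# Each operator runs this and shares the resulting hash; all hashes should match.
+sha256sum .charon/deposit-data.json
+```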
+
+Use any of the following tools to deposit. Please use the third-party tools at your own risk and always double check the staking contract address.
+
+* Obol Distributed Validator Launchpad
+* ethereum.org Staking Launchpad
+* From a SAFE Multisig:
+(Repeat these steps for every validator to deposit in your cluster)
+  * From the SAFE UI, click on `New Transaction` then `Transaction Builder` to create a new custom transaction
+  * Enter the beacon chain deposit contract address for mainnet - you can find it here
+  * Fill in the transaction information:
+    * Set the amount to `32` in ETH
+    * Use your `deposit-data.json` to fill in the required data: `pubkey`, `withdrawal_credentials`, `signature`, `deposit_data_root`. Make sure to prefix each input with `0x` so it is formatted as bytes (a `jq` sketch for extracting these fields follows this list)
+  * Click on `Add transaction`
+  * Click on `Create Batch`
+  * Click on `Send Batch`; you can click on `Simulate` to check whether the transaction will execute successfully
+  * Get the minimum threshold of signatures from the other addresses and execute the custom transaction
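+
+If you prefer to extract those four fields from the command line, here is a small sketch using `jq`, assuming the standard deposit-data format (a JSON array whose hex values carry no `0x` prefix):
+
+```
+# Print the fields for the first validator in the file, 0x-prefixed,
+# ready to paste into the SAFE Transaction Builder.
+jq -r '.[0] | "0x\(.pubkey)", "0x\(.withdrawal_credentials)", "0x\(.signature)", "0x\(.deposit_data_root)"' deposit-data.json
+```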
+
+The activation process can take a minimum of 16 hours, with the maximum time to activation being dictated by the length of the activation queue, which can be weeks.
diff --git a/docs/versioned_docs/version-v1.0.0/start/quickstart-exit.md b/docs/versioned_docs/version-v1.0.0/start/quickstart-exit.md
new file mode 100644
index 0000000000..7412d8fe3a
--- /dev/null
+++ b/docs/versioned_docs/version-v1.0.0/start/quickstart-exit.md
@@ -0,0 +1,63 @@
+---
+sidebar_position: 7
+description: Exit a validator
+---
+
+# quickstart-exit
+
+import Tabs from '@theme/Tabs';
+import TabItem from '@theme/TabItem';
+import CodeBlock from '@theme/CodeBlock';
+
+## Exit a DV
+
+Users looking to exit staking entirely and withdraw their full balance must sign and broadcast a "voluntary exit" message with their validator keys, which starts the process of exiting from staking. This is done with your validator client and submitted to your beacon node, and does not require gas. In the case of a DV, each Charon node needs to broadcast a partial exit to the other nodes of the cluster. Once a threshold of partial exits has been received by any node, the full voluntary exit will be sent to the beacon chain.
+
+This process will take 27 hours or longer depending on the current length of the exit queue.
+
+:::info
+
+* A threshold of operators needs to run the exit command for the exit to succeed.
+* If a Charon client restarts after the exit command is run but before the threshold is reached, it will lose the partial exits it has received from the other nodes. If all Charon clients restart and thus all partial exits are lost before the required threshold of exit messages is received, operators will have to rebroadcast their partial exit messages.
+
+:::
+
+### Run the `voluntary-exit` command on your validator client
+
+Run the appropriate command on your validator client to broadcast an exit message from your validator client to its upstream Charon client.
+
+It needs to be the validator client that is connected to your Charon client taking part in the DV, as you are only signing a partial exit message, with a partial private key share, which your Charon client will combine with the other partial exit messages from the other operators.
+
+:::info
+
+* All operators need to use the same `EXIT_EPOCH` for the exit to be successful. Assuming you want to exit as soon as possible, the default epochs included in the below commands should be sufficient for the respective network.
+* Partial exits can be broadcast by any validator client, as long as the total reaches the threshold for the cluster.
+
+:::
+
+Teku (Holesky):
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-teku-1 /opt/teku/bin/teku voluntary-exit \
+ --beacon-node-api-endpoint="http://charon:3600/" \
+ --confirmation-enabled=false \
+ --validator-keys="/opt/charon/validator_keys:/opt/charon/validator_keys" \
+ --epoch=256`}
+
+
+```
+
+Nimbus (Holesky):
+
+The following executes an interactive command inside the Nimbus VC container. It copies all files and directories from the Keystore path `/home/user/data/charon` to the newly created `/home/user/data/wd` directory, then runs `nimbus_beacon_node deposits exit` for all validators against the Charon endpoint.
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-nimbus-1 /bin/bash -c '\
+
+ mkdir /home/user/data/wd
+ cp -r /home/user/data/charon/ /home/user/data/wd
+
+ /home/user/nimbus_beacon_node deposits exit --all --epoch=256 --rest-url=http://charon:3600/ --data-dir=/home/user/data/wd/'`}
+
+```
+
+Lodestar (Holesky):
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 node /usr/app/packages/cli/bin/lodestar validator voluntary-exit \
+ --beaconNodes="http://charon:3600" \
+ --dataDir=/opt/data \
+ --exitEpoch=256 \
+ --network=holesky \
+ --yes`}
+
+```
+
+Lighthouse (Holesky):
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lighthouse-1 /bin/bash -c '\
+ for file in /opt/charon/keys/*; do \
+   filename=$(basename $file); \
+   if [[ $filename == *".json"* ]]; then \
+     keystore=${filename%.*}; \
+     lighthouse account validator exit \
+       --beacon-node http://charon:3600 \
+       --keystore /opt/charon/keys/$keystore.json \
+       --network holesky \
+       --password-file /opt/charon/keys/$keystore.txt \
+       --no-confirmation \
+       --no-wait; \
+   fi; \
+ done;'`}
+
+```
+
+Lodestar (Goerli):
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 node /usr/app/packages/cli/bin/lodestar validator voluntary-exit \
+ --beaconNodes="http://charon:3600" \
+ --dataDir=/opt/data \
+ --exitEpoch=162304 \
+ --network=goerli \
+ --yes`}
+
+```
+
+Lighthouse (Goerli):
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lighthouse-1 /bin/bash -c '\
+ for file in /opt/charon/keys/*; do \
+   filename=$(basename $file); \
+   if [[ $filename == *".json"* ]]; then \
+     keystore=${filename%.*}; \
+     lighthouse account validator exit \
+       --beacon-node http://charon:3600 \
+       --keystore /opt/charon/keys/$keystore.json \
+       --network goerli \
+       --password-file /opt/charon/keys/$keystore.txt \
+       --no-confirmation \
+       --no-wait; \
+   fi; \
+ done;'`}
+
+```
+
+Lodestar (Mainnet):
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lodestar-1 node /usr/app/packages/cli/bin/lodestar validator voluntary-exit \
+ --beaconNodes="http://charon:3600" \
+ --dataDir=/opt/data \
+ --exitEpoch=194048 \
+ --network=mainnet \
+ --yes`}
+
+```
+
+Lighthouse (Mainnet):
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-lighthouse-1 /bin/bash -c '\
+ for file in /opt/charon/keys/*; do \
+   filename=$(basename $file); \
+   if [[ $filename == *".json"* ]]; then \
+     keystore=${filename%.*}; \
+     lighthouse account validator exit \
+       --beacon-node http://charon:3600 \
+       --keystore /opt/charon/keys/$keystore.json \
+       --network mainnet \
+       --password-file /opt/charon/keys/$keystore.txt \
+       --no-confirmation \
+       --no-wait; \
+   fi; \
+ done;'`}
+
+```
+
+Full exit using the Charon CLI: first, obtain the list of the cluster's active validators:
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-charon-1 /bin/sh -c 'charon exit active-validator-list \
+ --beacon-node-endpoints="http://lighthouse:5052"'`}
+
+```
+
+A signed partial exit for a validator can then be submitted with:
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-charon-1 /bin/sh -c 'charon exit sign \
+ --beacon-node-endpoints="http://lighthouse:5052" \
+ --validator-public-key="" \
+ --publish-timeout="5m"'`}
+
+```
+
+Once a sufficient number of signed partial exits from node operators in the cluster has accumulated, a full (complete) exit is created; the threshold is the same as the one set during cluster creation. Once the full exit message exists, any operator in the cluster can broadcast it to the beacon chain with:
+
+```
+
+ {String.raw`docker exec -it charon-distributed-validator-node-charon-1 /bin/sh -c 'charon exit broadcast \
+ --beacon-node-endpoints="http://lighthouse:5052" \
+ --validator-public-key="" \
+ --publish-timeout="5m"'`}
+
+```
+
+ cd charon-distributed-validator-node
+
+
+
+
+ cd charon-distributed-validator-cluster
+
+
+