Commit d97c4b9

Update split-storage-archival.md
1 parent 84fa824 commit d97c4b9

File tree

1 file changed

+5
-53
lines changed


docs/archival/split-storage-archival.md

Lines changed: 5 additions & 53 deletions
@@ -19,14 +19,14 @@ With the 1.35.0 release we are rolling out split storage, a new option for stora
In split storage mode, the NEAR client only works with a smaller hot database to produce blocks, which results in increased performance. The cold database is not accessed for reads during normal block production and is only read when historical data is specifically requested. We therefore recommend keeping the cold database on cheaper, slower drives such as HDDs and optimizing for speed only on the hot database, which is about 10 times smaller.

Split storage is disabled by default. It can be enabled with a config change followed by a migration process that requires manual steps. There are several options for the migration:
- * Use Pagoda-provided S3 snapshots of nodes that have split-storage configured.
+ * Use snapshots (e.g. a FASTNEAR snapshot) of nodes that have split-storage configured.
* Do a manual migration using S3 snapshots of the existing RPC single database
* Do a manual migration using your own buddy RPC node
| Migration option | Pros | Cons |
| --------------------------------------------------- | ------------------------------- | ------------------------------------------------------ |
- | Pagoda-provided S3 snapshots of split-storage nodes | Fastest. Little to no downtime. | Requires trust in the migration performed by Pagoda nodes |
- | Manual migration + S3 RPC snapshots | No need for an extra node. Cold storage is initialized in a trustless way. | Requires trust in Pagoda RPC snapshots. The node will be out of sync at the end of the migration and will need to block sync several epochs. Migration takes days and you cannot restart your node during that time. |
+ | Snapshots of split-storage nodes | Fastest. Little to no downtime. | Requires trust in the snapshot provider |
+ | Manual migration + S3 RPC snapshots | No need for an extra node. Cold storage is initialized in a trustless way. | Requires trust in the RPC snapshot provider. The node will be out of sync at the end of the migration and will need to block sync several epochs. Migration takes days and you cannot restart your node during that time. |
| Manual migration + your own node | Trustless. Little to no downtime. | Requires an extra RPC node with bigger storage. Migration takes days and you cannot restart your node during that time. |
## Important config fields {#config}
@@ -66,57 +66,9 @@ Example:
}
```
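For reference, the split-storage related fields end up looking roughly like this after migration. This is a minimal sketch showing only the relevant keys; a real `config.json` contains many more options under `store` and `cold_store`:

```json
{
  "save_trie_changes": true,
  "store": {
    "path": "hot-data"
  },
  "cold_store": {
    "path": "cold-data"
  },
  "split_storage": {
    "enable_split_storage_view_client": true
  }
}
```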

- ## Using Pagoda-provided S3/CloudFront split-storage snapshots {#S3 migration}
+ ## Using Snapshots {#S3 migration}

- Prerequisite:
- 
- Recommended download client: [`rclone`](https://rclone.org).
- This tool is present in many Linux distributions, and there is also a version for Windows.
- Its main merit is multithreaded transfers.
- You can [read about it here](https://rclone.org).
- ** The rclone version needs to be v1.66.0 or higher.
- 
- First, install rclone:
- ```
- $ sudo -v ; curl https://rclone.org/install.sh | sudo bash
- ```
- Next, prepare the config so you don't need to specify all the parameters interactively:
- ```
- mkdir -p ~/.config/rclone
- touch ~/.config/rclone/rclone.conf
- ```
- 
- Then paste exactly the following config into `rclone.conf`:
- ```
- [near_cf]
- type = s3
- provider = AWS
- download_url = https://dcf58hz8pnro2.cloudfront.net/
- acl = public-read
- server_side_encryption = AES256
- region = ca-central-1
- ```
- 
- 1. Find the latest snapshot
- ```bash
- chain=testnet/mainnet
- rclone copy --no-check-certificate near_cf://near-protocol-public/backups/${chain:?}/archive/latest_split_storage ./
- latest=$(cat latest_split_storage)
- ```
- 2. Download the cold and hot databases
- ```bash
- NEAR_HOME=/home/ubuntu/.near
- rclone copy --no-check-certificate --progress --transfers=6 --checkers=6 \
-   near_cf://near-protocol-public/backups/${chain:?}/archive/${latest:?} $NEAR_HOME
- ```
- This will download the cold database to `$NEAR_HOME/cold-data` and the hot database to `$NEAR_HOME/hot-data`.
- 3. Make changes to your config.json
- ```bash
- cat <<< $(jq '.save_trie_changes = true | .cold_store = .store | .cold_store.path = "cold-data" | .store.path = "hot-data" | .split_storage.enable_split_storage_view_client = true' $NEAR_HOME/config.json) > $NEAR_HOME/config.json
- ```
- 4. Restart the node
+ FASTNEAR now provides archival node snapshots. Please refer to their documentation [here](https://docs.fastnear.com/docs/snapshots/mainnet#archival-mainnet-snapshot).
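Archival snapshots are multi-terabyte downloads, so it is worth confirming free disk space before starting. A minimal sketch (the `NEAR_HOME` default and the `df`/`awk` check are assumptions, not part of the snapshot documentation):

```shell
# Check available space where the hot and cold databases will live.
# NEAR_HOME defaults to ~/.near here; adjust for your setup.
NEAR_HOME=${NEAR_HOME:-$HOME/.near}
mkdir -p "$NEAR_HOME"
# POSIX df -Pk: column 4 is available space in KiB.
avail_kb=$(df -Pk "$NEAR_HOME" | awk 'NR==2 {print $4}')
echo "available: $((avail_kb / 1024 / 1024)) GiB in $NEAR_HOME"
```

Compare the reported figure against the snapshot size listed by the provider before kicking off the download.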

### If your node is out of sync after restarting from the latest snapshot {#syncing node}
Downloading the cold database can take a long time (days). During this time your node can fall very far behind the chain.
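As a rough back-of-the-envelope check, you can estimate the remaining block-sync time from how far behind the node is. The heights and the one-block-per-second rate below are illustrative assumptions only:

```shell
# Estimate catch-up time after a long download.
head_height=105000000   # chain head height (example value)
node_height=104000000   # your node's height (example value)
blocks_behind=$((head_height - node_height))
# Mainnet produces roughly one block per second.
eta_hours=$((blocks_behind / 3600))
echo "~${blocks_behind} blocks behind, roughly ${eta_hours} hours of block sync"
```

If the estimate spans several epochs, expect the node to block sync for a while after restarting.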
