
NetworkDatabase impl with new ssv_types system #67

Merged · 59 commits · Jan 9, 2025
1f58607
new types
Zacholme7 Dec 5, 2024
017d287
From for operator conversion from db, operator db functionality & tests
Zacholme7 Dec 5, 2024
07e49b4
all tables added, operator tests fixed, share operations added, clust…
Zacholme7 Dec 5, 2024
c14a2f4
move metadata to cluster, insertion test passing, fix cascade deletio…
Zacholme7 Dec 5, 2024
594bf36
proper error, migrate table to sql file
Zacholme7 Dec 6, 2024
1ac1b16
more testing utils, cluster deletion cascade test passing
Zacholme7 Dec 6, 2024
5bddc0b
potential memory stores
Zacholme7 Dec 6, 2024
ed67bd5
top level SQL statment defs with prepare cached
Zacholme7 Dec 6, 2024
d346977
simplify member insertion
Zacholme7 Dec 6, 2024
01fee71
flesh out some helpers
Zacholme7 Dec 6, 2024
c6da601
Merge branch 'unstable' into clean-newtypes-database
Zacholme7 Dec 6, 2024
debf324
Merge branch 'unstable' into clean-newtypes-database
Zacholme7 Dec 9, 2024
a1ca54e
migrate from rsa to openssl for rsa keys
Zacholme7 Dec 9, 2024
54a19da
migrate and fix tests
Zacholme7 Dec 9, 2024
18af780
state store rebuild mvp
Zacholme7 Dec 11, 2024
c857e1c
Merge branch 'unstable' into clean-newtypes-database
Zacholme7 Dec 11, 2024
1652528
restructure test
Zacholme7 Dec 11, 2024
e99b7ef
refactor entire test utilities, setup generalized testing framework
Zacholme7 Dec 11, 2024
05650ab
removed unused code
Zacholme7 Dec 11, 2024
6a39998
validator tests
Zacholme7 Dec 11, 2024
2c84a52
clippy fix
Zacholme7 Dec 11, 2024
896117e
merge types
Zacholme7 Dec 12, 2024
897a318
database with pubkey
Zacholme7 Dec 12, 2024
daed0c1
validator metadata insertion
Zacholme7 Dec 12, 2024
60438e2
more tests and general functionality
Zacholme7 Dec 12, 2024
32c179a
fix and test block processing
Zacholme7 Dec 12, 2024
05a4c24
additional tests & bugfix on validator generation
Zacholme7 Dec 13, 2024
f8999b6
break up assertions, basic comments
Zacholme7 Dec 13, 2024
9e4f289
migrate to immutable api with fine grained state locking
Zacholme7 Dec 13, 2024
f358ce5
fmt and clippy
Zacholme7 Dec 13, 2024
7a855a0
load in block number even if we have not found id
Zacholme7 Dec 16, 2024
9fd6c5e
merge
Zacholme7 Dec 17, 2024
95731fa
cargo sort
Zacholme7 Dec 17, 2024
de5be8d
mvp multi index map
Zacholme7 Dec 19, 2024
69652e7
type rework
Zacholme7 Dec 19, 2024
487a200
multi index map integration, type rewrite integration, start on test fix
Zacholme7 Dec 19, 2024
90ae6c5
integrate all tests
Zacholme7 Dec 21, 2024
c5d1ff1
fix up testing
Zacholme7 Dec 21, 2024
35a79ec
clusterId to bytes 32
Zacholme7 Dec 21, 2024
aabbcdf
lints
Zacholme7 Dec 21, 2024
a9ac7a5
clippy
Zacholme7 Dec 21, 2024
3ff12ed
make multistate pub
Zacholme7 Dec 21, 2024
c61becd
Merge branch 'unstable' into clean-newtypes-database
Zacholme7 Dec 23, 2024
5635838
re-export and save all metadata and clusters
Zacholme7 Dec 23, 2024
d2648d1
clean getters for multi state
Zacholme7 Dec 23, 2024
e69fb3a
rebuild all clusters and share-metadata information upon restart
Zacholme7 Dec 23, 2024
431d4eb
remove print
Zacholme7 Dec 23, 2024
43b2e71
fix state reconstruction
Zacholme7 Dec 24, 2024
5b653dd
remove print
Zacholme7 Dec 24, 2024
a147516
error msg fix
Zacholme7 Jan 2, 2025
4e77bd0
Merge branch 'unstable' into clean-newtypes-database
Zacholme7 Jan 3, 2025
ebd79ab
merge & update
Zacholme7 Jan 3, 2025
c8eb685
spelling and formatting
Zacholme7 Jan 3, 2025
10f014f
nonce logic
Zacholme7 Jan 6, 2025
1f246e6
nonce insertion fix and tests
Zacholme7 Jan 6, 2025
661c148
spelling
Zacholme7 Jan 7, 2025
5b874cf
Merge branch 'unstable' into clean-newtypes-database
Zacholme7 Jan 8, 2025
e20b00f
Merge branch 'clean-newtypes-database' of github.com:Zacholme7/anchor…
Zacholme7 Jan 8, 2025
5a5c7a1
initial README draft
Zacholme7 Jan 8, 2025
376 changes: 208 additions & 168 deletions Cargo.lock

Large diffs are not rendered by default.

5 changes: 5 additions & 0 deletions Cargo.toml
@@ -6,6 +6,8 @@ members = [
"anchor/common/qbft",
"anchor/common/ssv_types",
"anchor/common/version",
"anchor/database",
"anchor/http_api",
"anchor/http_metrics",
"anchor/network",
@@ -21,6 +23,7 @@ client = { path = "anchor/client" }
qbft = { path = "anchor/common/qbft" }
http_api = { path = "anchor/http_api" }
http_metrics = { path = "anchor/http_metrics" }
database = { path = "anchor/database" }
network = { path = "anchor/network" }
version = { path = "anchor/common/version" }
processor = { path = "anchor/processor" }
@@ -57,7 +60,9 @@ tokio = { version = "1.39.2", features = [
tracing = "0.1.40"
tracing-subscriber = { version = "0.3.18", features = ["fmt", "env-filter"] }
base64 = "0.22.1"
rusqlite = "0.28.0"
openssl = "0.10.68"
dashmap = "6.1.0"

[profile.maxperf]
inherits = "release"
1 change: 1 addition & 0 deletions anchor/common/ssv_types/Cargo.toml
@@ -8,4 +8,5 @@ authors = ["Sigma Prime <[email protected]>"]
base64 = { workspace = true }
derive_more = { workspace = true }
openssl = { workspace = true }
rusqlite = { workspace = true }
types = { workspace = true }
36 changes: 19 additions & 17 deletions anchor/common/ssv_types/src/cluster.rs
@@ -1,36 +1,40 @@
use crate::OperatorId;
use crate::Share;
use derive_more::{Deref, From};
use std::collections::HashSet;
use types::{Address, Graffiti, PublicKey};

/// Unique identifier for a cluster
#[derive(Clone, Copy, Debug, Default, Eq, PartialEq, Hash, From, Deref)]
pub struct ClusterId(pub u64);
pub struct ClusterId(pub [u8; 32]);

/// A Cluster is a group of Operators that are acting on behalf of a Validator
/// A Cluster is a group of Operators that are acting on behalf of one or more Validators
///
/// Each cluster is owned by a unique EOA and only that Address may perform operations on the
/// Cluster.
#[derive(Debug, Clone)]
pub struct Cluster {
/// Unique identifier for a Cluster
pub cluster_id: ClusterId,
/// All of the members of this Cluster
pub cluster_members: Vec<ClusterMember>,
/// The owner of the cluster and all of the validators
pub owner: Address,
/// The Eth1 fee address for all validators in the cluster
pub fee_recipient: Address,
/// The number of faulty operators in the Cluster
pub faulty: u64,
/// If the Cluster is liquidated or active
pub liquidated: bool,
/// Metadata about the validator this committee represents
pub validator_metadata: ValidatorMetadata,
/// Operators in this cluster
pub cluster_members: HashSet<OperatorId>,

Reviewer: Should this be a Committee?

Zacholme7 (Member, Author): I think committee is too broad a term here. A committee just refers to a general set of operators, while here the cluster_members are meant to be the direct members of the cluster. It's trying to enforce that relationship. But there is also an argument that committee is applicable.

}

/// A member of a Cluster. This is just an Operator that holds onto a share of the Validator key
/// A member of a Cluster.
/// This is an Operator that holds a piece of the keyshare for each validator in the cluster
#[derive(Debug, Clone)]
pub struct ClusterMember {
/// Unique identifier for the Operator this member represents
pub operator_id: OperatorId,
/// Unique identifier for the Cluster this member is a part of
pub cluster_id: ClusterId,
/// The Share this member is responsible for
pub share: Share,
}

/// Index of the validator in the validator registry.
@@ -40,14 +44,12 @@ pub struct ValidatorIndex(pub usize);
/// General Metadata about a Validator
#[derive(Debug, Clone)]
pub struct ValidatorMetadata {
/// Index of the validator
pub validator_index: ValidatorIndex,
/// Public key of the validator
pub validator_pubkey: PublicKey,
/// Eth1 fee address
pub fee_recipient: Address,
pub public_key: PublicKey,
/// The cluster that is responsible for this validator
pub cluster_id: ClusterId,
/// Index of the validator
pub index: ValidatorIndex,
/// Graffiti
pub graffiti: Graffiti,
/// The owner of the validator
pub owner: Address,
}
1 change: 1 addition & 0 deletions anchor/common/ssv_types/src/lib.rs
@@ -4,4 +4,5 @@ pub use share::Share;
mod cluster;
mod operator;
mod share;
mod sql_conversions;
mod util;
7 changes: 7 additions & 0 deletions anchor/common/ssv_types/src/share.rs
@@ -1,8 +1,15 @@
use crate::{ClusterId, OperatorId};
use types::PublicKey;

/// One of N shares of a split validator key.
#[derive(Debug, Clone)]
pub struct Share {
/// Public Key of the validator
pub validator_pubkey: PublicKey,
/// Operator this share belongs to
pub operator_id: OperatorId,
/// Cluster the operator who owns this share belongs to
pub cluster_id: ClusterId,
/// The public key of this Share
pub share_pubkey: PublicKey,
/// The encrypted private key of the share
160 changes: 160 additions & 0 deletions anchor/common/ssv_types/src/sql_conversions.rs
@@ -0,0 +1,160 @@
use crate::{Cluster, ClusterId, ClusterMember};
use crate::{Operator, OperatorId};
use crate::{Share, ValidatorIndex, ValidatorMetadata};
use base64::prelude::*;
use openssl::rsa::Rsa;
use rusqlite::{types::Type, Error as SqlError, Row};
use std::io::{Error, ErrorKind};
use std::str::FromStr;
use types::{Address, Graffiti, PublicKey, GRAFFITI_BYTES_LEN};

// Helper for converting to a rusqlite Error
fn from_sql_error<E: std::error::Error + Send + Sync + 'static>(
col: usize,
t: Type,
e: E,
) -> SqlError {
SqlError::FromSqlConversionFailure(col, t, Box::new(e))
}

// Conversion from SQL row to an Operator
impl TryFrom<&Row<'_>> for Operator {
type Error = rusqlite::Error;
fn try_from(row: &Row) -> Result<Self, Self::Error> {
// Get the OperatorId from column 0
let id: OperatorId = OperatorId(row.get(0)?);

// Get the public key from column 1
let pem_string = row.get::<_, String>(1)?;
let decoded_pem = BASE64_STANDARD
.decode(pem_string)
.map_err(|e| from_sql_error(1, Type::Text, e))?;
let rsa_pubkey =
Rsa::public_key_from_pem(&decoded_pem).map_err(|e| from_sql_error(1, Type::Text, e))?;

// Get the owner from column 2
let owner_str = row.get::<_, String>(2)?;
let owner = Address::from_str(&owner_str).map_err(|e| from_sql_error(2, Type::Text, e))?;

Ok(Operator {
id,
rsa_pubkey,
owner,
})
}
}

// Conversion from SQL row and cluster members into a Cluster
impl TryFrom<(&Row<'_>, Vec<ClusterMember>)> for Cluster {
type Error = rusqlite::Error;

fn try_from(
(row, cluster_members): (&Row<'_>, Vec<ClusterMember>),
) -> Result<Self, Self::Error> {
// Get ClusterId from column 0
let cluster_id = ClusterId(row.get(0)?);

// Get the owner from column 1
let owner_str = row.get::<_, String>(1)?;
let owner = Address::from_str(&owner_str).map_err(|e| from_sql_error(1, Type::Text, e))?;

// Get the fee_recipient from column 2
let fee_recipient_str = row.get::<_, String>(2)?;
let fee_recipient =
Address::from_str(&fee_recipient_str).map_err(|e| from_sql_error(2, Type::Text, e))?;

// Get faulty count from column 3
let faulty: u64 = row.get(3)?;

// Get liquidated status from column 4
let liquidated: bool = row.get(4)?;

Ok(Cluster {
cluster_id,
owner,
fee_recipient,
faulty,
liquidated,
cluster_members: cluster_members
.into_iter()
.map(|member| member.operator_id)
.collect(),
})
}
}

// Conversion from SQL row to a ClusterMember
impl TryFrom<&Row<'_>> for ClusterMember {
type Error = rusqlite::Error;

fn try_from(row: &Row) -> Result<Self, Self::Error> {
// Get ClusterId from column 0
let cluster_id = ClusterId(row.get(0)?);

// Get OperatorId from column 1
let operator_id = OperatorId(row.get(1)?);

Ok(ClusterMember {
operator_id,
cluster_id,
})
}
}

// Conversion from SQL row to ValidatorMetadata
impl TryFrom<&Row<'_>> for ValidatorMetadata {
type Error = SqlError;
fn try_from(row: &Row) -> Result<Self, Self::Error> {
// Get public key from column 0
let validator_pubkey_str = row.get::<_, String>(0)?;
let public_key = PublicKey::from_str(&validator_pubkey_str)
.map_err(|e| from_sql_error(0, Type::Text, Error::new(ErrorKind::InvalidInput, e)))?;

// Get ClusterId from column 1
let cluster_id: ClusterId = ClusterId(row.get(1)?);

// Get ValidatorIndex from column 2
let index: ValidatorIndex = ValidatorIndex(row.get(2)?);

// Get Graffiti from column 3
let graffiti = Graffiti(row.get::<_, [u8; GRAFFITI_BYTES_LEN]>(3)?);

Ok(ValidatorMetadata {
public_key,
cluster_id,
index,
graffiti,
})
}
}

// Conversion from SQL row into a Share
impl TryFrom<&Row<'_>> for Share {
type Error = rusqlite::Error;
fn try_from(row: &Row) -> Result<Self, Self::Error> {
// Get Share PublicKey from column 0
let share_pubkey_str = row.get::<_, String>(0)?;
let share_pubkey = PublicKey::from_str(&share_pubkey_str)
.map_err(|e| from_sql_error(0, Type::Text, Error::new(ErrorKind::InvalidInput, e)))?;

// Get the encrypted private key from column 1
let encrypted_private_key: [u8; 256] = row.get(1)?;

// Get the OperatorId from column 2 and ClusterId from column 3
let operator_id = OperatorId(row.get(2)?);
let cluster_id = ClusterId(row.get(3)?);

// Get the Validator PublicKey from column 4
let validator_pubkey_str = row.get::<_, String>(4)?;
let validator_pubkey = PublicKey::from_str(&validator_pubkey_str)
.map_err(|e| from_sql_error(4, Type::Text, Error::new(ErrorKind::InvalidInput, e)))?;

Ok(Share {
validator_pubkey,
operator_id,
cluster_id,
share_pubkey,
encrypted_private_key,
})
}
}
20 changes: 20 additions & 0 deletions anchor/database/Cargo.toml
@@ -0,0 +1,20 @@
[package]
name = "database"
version = "0.1.0"
edition = { workspace = true }
authors = ["Sigma Prime <[email protected]>"]

[dependencies]
base64 = { workspace = true }
dashmap = { workspace = true }
openssl = { workspace = true }
parking_lot = { workspace = true }
r2d2 = "0.8.10"
r2d2_sqlite = "0.21.0"
rusqlite = { workspace = true }
ssv_types = { workspace = true }
types = { workspace = true }

[dev-dependencies]
rand = "0.8.5"
tempfile = "3.14.0"
64 changes: 64 additions & 0 deletions anchor/database/README.md
@@ -0,0 +1,64 @@
# Anchor Database

The Anchor Database provides a robust persistent and in-memory caching layer for the Anchor project, specifically designed to handle SSV Network data efficiently. This crate manages both persistent storage of blockchain event data and high-performance in-memory access patterns.

## Table of Contents

1. [Overview](#overview)
2. [Core Features](#core-features)
3. [Architecture](#architecture)
4. [Data Models](#data-models)

## Overview

The Anchor Database serves as the backbone for storing and accessing SSV Network event data. When an Anchor node starts up, it needs to process and store blockchain event logs to maintain state.

## Core Features
* **Persistent Storage**: SQLite-based store with automatic schema management
* **In-Memory Caching**: Efficient caching of frequently accessed data
* **Multi-Index Access**: Flexible data access patterns through multiple keys
* **Automatic State Recovery**: Rebuilds in-memory state from persistent storage on startup
* **Thread Safety**: Concurrent access support through DashMap implementations


## Architecture
The database architecture consists of two key layers:

### Storage Layer

At the foundation lies a SQLite database that provides persistent storage. This layer encompasses:
* **Database Connection Management**: A connection pool that maintains and reuses SQLite connections efficiently, preventing resource exhaustion while ensuring consistent access
* **Schema and Transaction Management**: Automatic table creation and transaction support for data integrity


### Cache Layer
The in-memory cache layer combines high-performance caching with sophisticated indexing through a unified system. It is broken up into Single State and Multi State.

* **Single State**: Single state handles straightforward, one-to-one relationships where data only needs one access pattern. This is ideal for data that is frequently accessed but has simple relationships.
* **Multi State**: Multi State handles complex relationships where the same data needs to be accessed through different keys. This is implemented through a series of MultiIndexMaps, each supporting three different access patterns for the same data. The type system enforces correct usage through the UniqueTag and NonUniqueTag markers, preventing incorrect access patterns at compile time. Each MultiIndexMap in the Multi State provides three ways to access its data:
1) A primary key that uniquely identifies each piece of data
2) A secondary key that can either uniquely identify data or map to multiple items
3) A tertiary key that can also be unique or map to multiple items

## Data Models
The database handles several core data types:

**Operator**
* Represents a network operator
* Identified by OperatorId
* Contains RSA public key and owner address

**Cluster**
* Represents a group of Operators managing validators
* Contains cluster membership information
* Tracks operational status and fault counts

**Validator**
* Contains validator metadata
* Links to cluster membership
* Stores configuration data

**Share**
* Represents cryptographic shares for validators
* Links operators to validators
* Contains encrypted key data