# NetworkDatabase impl with new ssv_types (#67)

Status: Merged
## Commits (59)
All commits authored by Zacholme7.

- `1f58607` new types
- `017d287` From for operator conversion from db, operator db functionality & tests
- `07e49b4` all tables added, operator tests fixed, share operations added, clust…
- `c14a2f4` move metadata to cluster, insertion test passing, fix cascade deletio…
- `594bf36` proper error, migrate table to sql file
- `1ac1b16` more testing utils, cluster deletion cascade test passing
- `5bddc0b` potential memory stores
- `ed67bd5` top level SQL statment defs with prepare cached
- `d346977` simplify member insertion
- `01fee71` flesh out some helpers
- `c6da601` Merge branch 'unstable' into clean-newtypes-database
- `debf324` Merge branch 'unstable' into clean-newtypes-database
- `a1ca54e` migrate from rsa to openssl for rsa keys
- `54a19da` migrate and fix tests
- `18af780` state store rebuild mvp
- `c857e1c` Merge branch 'unstable' into clean-newtypes-database
- `1652528` restructure test
- `e99b7ef` refactor entire test utilities, setup generalized testing framework
- `05650ab` removed unused code
- `6a39998` validator tests
- `2c84a52` clippy fix
- `896117e` merge types
- `897a318` database with pubkey
- `daed0c1` validator metadata insertion
- `60438e2` more tests and general functionality
- `32c179a` fix and test block processing
- `05a4c24` additional tests & bugfix on validator generation
- `f8999b6` break up assertions, basic comments
- `9e4f289` migrate to immutable api with fine grained state locking
- `f358ce5` fmt and clippy
- `7a855a0` load in block number even if we have not found id
- `9fd6c5e` merge
- `95731fa` cargo sort
- `de5be8d` mvp multi index map
- `69652e7` type rework
- `487a200` multi index map integration, type rewrite integration, start on test fix
- `90ae6c5` integrate all tests
- `c5d1ff1` fix up testing
- `35a79ec` clusterId to bytes 32
- `aabbcdf` lints
- `a9ac7a5` clippy
- `3ff12ed` make multistate pub
- `c61becd` Merge branch 'unstable' into clean-newtypes-database
- `5635838` re-export and save all metadata and clusters
- `d2648d1` clean getters for multi state
- `e69fb3a` rebuild all clusters and share-metadata information upon restart
- `431d4eb` remove print
- `43b2e71` fix state reconstruction
- `5b653dd` remove print
- `a147516` error msg fix
- `4e77bd0` Merge branch 'unstable' into clean-newtypes-database
- `ebd79ab` merge & update
- `c8eb685` spelling and formatting
- `10f014f` nonce logic
- `1f246e6` nonce insertion fix and tests
- `661c148` spelling
- `5b874cf` Merge branch 'unstable' into clean-newtypes-database
- `e20b00f` Merge branch 'clean-newtypes-database' of github.com:Zacholme7/anchor…
- `5a5c7a1` initial README draft
**Cargo.toml hunk** (`@@ -8,4 +8,5 @@ authors = ["Sigma Prime <[email protected]>"]`, dependencies after the change):

```toml
base64 = { workspace = true }
derive_more = { workspace = true }
openssl = { workspace = true }
rusqlite = { workspace = true }
types = { workspace = true }
```
**Module declarations hunk** (`@@ -4,4 +4,5 @@ pub use share::Share;`; file name not shown in this view):

```rust
mod cluster;
mod operator;
mod share;
mod sql_conversions;
mod util;
```
**`sql_conversions.rs`** (new file):

```rust
use crate::{Cluster, ClusterId, ClusterMember};
use crate::{Operator, OperatorId};
use crate::{Share, ValidatorIndex, ValidatorMetadata};
use base64::prelude::*;
use openssl::rsa::Rsa;
use rusqlite::{types::Type, Error as SqlError, Row};
use std::io::{Error, ErrorKind};
use std::str::FromStr;
use types::{Address, Graffiti, PublicKey, GRAFFITI_BYTES_LEN};

// Helper for converting to a rusqlite Error
fn from_sql_error<E: std::error::Error + Send + Sync + 'static>(
    col: usize,
    t: Type,
    e: E,
) -> SqlError {
    SqlError::FromSqlConversionFailure(col, t, Box::new(e))
}

// Conversion from a SQL row to an Operator
impl TryFrom<&Row<'_>> for Operator {
    type Error = rusqlite::Error;
    fn try_from(row: &Row) -> Result<Self, Self::Error> {
        // Get the OperatorId from column 0
        let id: OperatorId = OperatorId(row.get(0)?);

        // Get the public key from column 1
        let pem_string = row.get::<_, String>(1)?;
        let decoded_pem = BASE64_STANDARD
            .decode(pem_string)
            .map_err(|e| from_sql_error(1, Type::Text, e))?;
        let rsa_pubkey =
            Rsa::public_key_from_pem(&decoded_pem).map_err(|e| from_sql_error(1, Type::Text, e))?;

        // Get the owner from column 2
        let owner_str = row.get::<_, String>(2)?;
        let owner = Address::from_str(&owner_str).map_err(|e| from_sql_error(2, Type::Text, e))?;

        Ok(Operator {
            id,
            rsa_pubkey,
            owner,
        })
    }
}

// Conversion from a SQL row and cluster members into a Cluster
impl TryFrom<(&Row<'_>, Vec<ClusterMember>)> for Cluster {
    type Error = rusqlite::Error;

    fn try_from(
        (row, cluster_members): (&Row<'_>, Vec<ClusterMember>),
    ) -> Result<Self, Self::Error> {
        // Get ClusterId from column 0
        let cluster_id = ClusterId(row.get(0)?);

        // Get the owner from column 1
        let owner_str = row.get::<_, String>(1)?;
        let owner = Address::from_str(&owner_str).map_err(|e| from_sql_error(1, Type::Text, e))?;

        // Get the fee_recipient from column 2
        let fee_recipient_str = row.get::<_, String>(2)?;
        let fee_recipient =
            Address::from_str(&fee_recipient_str).map_err(|e| from_sql_error(2, Type::Text, e))?;

        // Get faulty count from column 3
        let faulty: u64 = row.get(3)?;

        // Get liquidated status from column 4
        let liquidated: bool = row.get(4)?;

        Ok(Cluster {
            cluster_id,
            owner,
            fee_recipient,
            faulty,
            liquidated,
            cluster_members: cluster_members
                .into_iter()
                .map(|member| member.operator_id)
                .collect(),
        })
    }
}

// Conversion from a SQL row to a ClusterMember
impl TryFrom<&Row<'_>> for ClusterMember {
    type Error = rusqlite::Error;

    fn try_from(row: &Row) -> Result<Self, Self::Error> {
        // Get ClusterId from column 0
        let cluster_id = ClusterId(row.get(0)?);

        // Get OperatorId from column 1
        let operator_id = OperatorId(row.get(1)?);

        Ok(ClusterMember {
            operator_id,
            cluster_id,
        })
    }
}

// Conversion from a SQL row to ValidatorMetadata
impl TryFrom<&Row<'_>> for ValidatorMetadata {
    type Error = SqlError;
    fn try_from(row: &Row) -> Result<Self, Self::Error> {
        // Get public key from column 0
        let validator_pubkey_str = row.get::<_, String>(0)?;
        let public_key = PublicKey::from_str(&validator_pubkey_str)
            .map_err(|e| from_sql_error(0, Type::Text, Error::new(ErrorKind::InvalidInput, e)))?;

        // Get ClusterId from column 1
        let cluster_id: ClusterId = ClusterId(row.get(1)?);

        // Get ValidatorIndex from column 2
        let index: ValidatorIndex = ValidatorIndex(row.get(2)?);

        // Get Graffiti from column 3
        let graffiti = Graffiti(row.get::<_, [u8; GRAFFITI_BYTES_LEN]>(3)?);

        Ok(ValidatorMetadata {
            public_key,
            cluster_id,
            index,
            graffiti,
        })
    }
}

// Conversion from a SQL row into a Share
impl TryFrom<&Row<'_>> for Share {
    type Error = rusqlite::Error;
    fn try_from(row: &Row) -> Result<Self, Self::Error> {
        // Get Share PublicKey from column 0
        let share_pubkey_str = row.get::<_, String>(0)?;
        let share_pubkey = PublicKey::from_str(&share_pubkey_str)
            .map_err(|e| from_sql_error(0, Type::Text, Error::new(ErrorKind::InvalidInput, e)))?;

        // Get the encrypted private key from column 1
        let encrypted_private_key: [u8; 256] = row.get(1)?;

        // Get the OperatorId from column 2 and ClusterId from column 3
        let operator_id = OperatorId(row.get(2)?);
        let cluster_id = ClusterId(row.get(3)?);

        // Get the Validator PublicKey from column 4
        let validator_pubkey_str = row.get::<_, String>(4)?;
        let validator_pubkey = PublicKey::from_str(&validator_pubkey_str)
            .map_err(|e| from_sql_error(4, Type::Text, Error::new(ErrorKind::InvalidInput, e)))?;

        Ok(Share {
            validator_pubkey,
            operator_id,
            cluster_id,
            share_pubkey,
            encrypted_private_key,
        })
    }
}
```
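These `TryFrom` impls all lean on one idea: when a column fails to convert, record *which* column and SQL type failed inside the error. The pattern can be exercised without a live database; the std-only sketch below is illustrative, with `MockRow` and a simplified one-variant `SqlError` standing in for rusqlite's `Row` and `Error::FromSqlConversionFailure` (all names here are hypothetical):

```rust
use std::num::ParseIntError;

// Hypothetical stand-in for a database row: column index -> raw text.
struct MockRow(Vec<String>);

// Simplified analogue of rusqlite's error: the failing column index
// travels with the underlying conversion error.
#[derive(Debug)]
enum SqlError {
    FromSqlConversionFailure(usize, ParseIntError),
}

// Analogue of the `from_sql_error` helper above.
fn from_sql_error(col: usize, e: ParseIntError) -> SqlError {
    SqlError::FromSqlConversionFailure(col, e)
}

#[derive(Debug)]
struct OperatorId(u64);

#[derive(Debug)]
struct Operator {
    id: OperatorId,
    owner: String,
}

impl TryFrom<&MockRow> for Operator {
    type Error = SqlError;
    fn try_from(row: &MockRow) -> Result<Self, Self::Error> {
        // Column 0: operator id; a parse failure reports column 0.
        let id = OperatorId(row.0[0].parse().map_err(|e| from_sql_error(0, e))?);
        // Column 1: owner address as text.
        let owner = row.0[1].clone();
        Ok(Operator { id, owner })
    }
}
```

Carrying the column index in the error means a malformed row can be traced to the exact column during debugging, rather than surfacing as a bare parse failure.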
**Cargo.toml** (new file, `database` crate):

```toml
[package]
name = "database"
version = "0.1.0"
edition = { workspace = true }
authors = ["Sigma Prime <[email protected]>"]

[dependencies]
base64 = { workspace = true }
dashmap = { workspace = true }
openssl = { workspace = true }
parking_lot = { workspace = true }
r2d2 = "0.8.10"
r2d2_sqlite = "0.21.0"
rusqlite = { workspace = true }
ssv_types = { workspace = true }
types = { workspace = true }

[dev-dependencies]
rand = "0.8.5"
tempfile = "3.14.0"
```
**`README.md`** (new file):

# Anchor Database

The Anchor Database provides a robust persistent and in-memory caching layer for the Anchor project, specifically designed to handle SSV Network data efficiently. This crate manages both persistent storage of blockchain event data and high-performance in-memory access patterns.

## Table of Contents

1. [Overview](#overview)
2. [Core Features](#core-features)
3. [Architecture](#architecture)
4. [Data Models](#data-models)

## Overview

The Anchor Database serves as the backbone for storing and accessing SSV Network event data. When an Anchor node starts up, it needs to process and store blockchain event logs to maintain state.

## Core Features

* **Persistent Storage**: SQLite-based store with automatic schema management
* **In-Memory Caching**: Efficient caching of frequently accessed data
* **Multi-Index Access**: Flexible data access patterns through multiple different keys
* **Automatic State Recovery**: Rebuilds in-memory state from persistent storage on startup
* **Thread Safety**: Concurrent access support through DashMap implementations
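The state-recovery feature can be sketched in miniature: on startup, every persisted row is read back and re-inserted into the in-memory maps, so reads after a restart hit memory instead of SQLite. The std-only sketch below is illustrative; `StoredOperator` and `rebuild_state` are hypothetical names, and the real crate rebuilds its caches from SQLite rows rather than a `Vec`:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for a row persisted in SQLite.
#[derive(Clone)]
struct StoredOperator {
    id: u64,
    owner: String,
}

// Rebuild an in-memory index from persisted rows, keyed by id.
fn rebuild_state(rows: Vec<StoredOperator>) -> HashMap<u64, StoredOperator> {
    rows.into_iter().map(|op| (op.id, op)).collect()
}
```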
## Architecture

The database architecture consists of two key layers.

### Storage Layer

At the foundation lies a SQLite database that provides persistent storage. This layer encompasses:

* **Database Connection Management**: A connection pool that maintains and reuses SQLite connections efficiently, preventing resource exhaustion while ensuring consistent access
* **Schema and Transaction Management**: Automatic table creation and transaction support for data integrity
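The pooling idea above (the crate itself uses `r2d2`/`r2d2_sqlite`) can be sketched with only the standard library. Everything here is a toy illustration, not the crate's API: `Conn` is a hypothetical stand-in for a SQLite connection, and exhaustion returns `None` rather than opening ever more handles:

```rust
use std::sync::Mutex;

// Hypothetical stand-in for a real SQLite connection handle.
#[derive(Debug, PartialEq)]
struct Conn(u32);

struct Pool {
    // Idle connections, guarded for concurrent checkout/checkin.
    idle: Mutex<Vec<Conn>>,
}

impl Pool {
    // Open a fixed number of connections up front instead of one per request.
    fn new(size: u32) -> Self {
        Pool {
            idle: Mutex::new((0..size).map(Conn).collect()),
        }
    }

    // Check a connection out; `None` means the pool is exhausted,
    // which bounds resource usage.
    fn get(&self) -> Option<Conn> {
        self.idle.lock().unwrap().pop()
    }

    // Return a connection so it can be reused.
    fn put(&self, conn: Conn) {
        self.idle.lock().unwrap().push(conn);
    }
}
```

A real pool (as in `r2d2`) additionally validates connections and blocks or times out instead of returning `None`, but the reuse-instead-of-reopen principle is the same.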
### Cache Layer

The in-memory cache layer combines high-performance caching with sophisticated indexing through a unified system. It is broken up into Single-State and Multi-State.

* **Single State**: Handles straightforward, one-to-one relationships where data only needs one access pattern. This is ideal for data that is frequently accessed but has simple relationships.
* **Multi State**: Handles complex relationships where the same data needs to be accessed through different keys. This is implemented through a series of MultiIndexMaps, each supporting three different access patterns for the same data. The type system enforces correct usage through the UniqueTag and NonUniqueTag markers, preventing incorrect access patterns at compile time. Each MultiIndexMap in the Multi State provides three ways to access its data:
  1. A primary key that uniquely identifies each piece of data
  2. A secondary key that can either uniquely identify data or map to multiple items
  3. A tertiary key that can also be unique or map to multiple items
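The multi-index idea can be sketched with two plain `HashMap`s: one owned store under the unique primary key, plus a secondary index mapping a non-unique key to many primary keys. This is illustrative only (the crate's MultiIndexMaps also carry a tertiary key and compile-time uniqueness tags); the names `Validator`, `by_pubkey`, and `by_cluster` are hypothetical:

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
struct Validator {
    pubkey: String,  // primary key: unique per validator
    cluster_id: u64, // secondary key: many validators share a cluster
}

#[derive(Default)]
struct MultiIndexMap {
    by_pubkey: HashMap<String, Validator>, // unique access pattern
    by_cluster: HashMap<u64, Vec<String>>, // non-unique access pattern
}

impl MultiIndexMap {
    // Insert once, index twice: the secondary index stores only primary keys.
    fn insert(&mut self, v: Validator) {
        self.by_cluster
            .entry(v.cluster_id)
            .or_default()
            .push(v.pubkey.clone());
        self.by_pubkey.insert(v.pubkey.clone(), v);
    }

    // Unique lookup: at most one validator per pubkey.
    fn get_by_pubkey(&self, pk: &str) -> Option<&Validator> {
        self.by_pubkey.get(pk)
    }

    // Non-unique lookup: resolve primary keys back through the owned store.
    fn get_by_cluster(&self, id: u64) -> Vec<&Validator> {
        self.by_cluster.get(&id).map_or(Vec::new(), |pks| {
            pks.iter().filter_map(|pk| self.by_pubkey.get(pk)).collect()
        })
    }
}
```

Keeping the owned values in a single map and storing only keys in the other indexes avoids duplicating data while still giving each access pattern its own lookup path.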
## Data Models

The database handles several core data types:

**Operator**
* Represents a network operator
* Identified by OperatorId
* Contains RSA public key and owner address

**Cluster**
* Represents a group of Operators managing validators
* Contains cluster membership information
* Tracks operational status and fault counts

**Validator**
* Contains validator metadata
* Links to cluster membership
* Stores configuration data

**Share**
* Represents cryptographic shares for validators
* Links operators to validators
* Contains encrypted key data
## Review discussion

**Comment:** Should this be a Committee?

**Reply:** I think committee is too broad of a term here. A committee just refers to a general set of operators, while here the `cluster_members` are meant to be the direct members of the cluster. It's trying to enforce the relationship. But there is also an argument that committee is also applicable.