English | 中文
Supernode is a multi-chain, one-stop service platform. Through its API, developers can quickly connect to on-chain services such as NFT, DeFi, transfers, and queries, without deploying nodes or needing specialized knowledge. It includes the following core services:
- blockchain: accesses public chains directly, selects the optimal node, and provides HTTP and WS protocols for interacting with them
- collect: the executor of tasks; filters, verifies, and parses the data fetched from the chain
- task: generates block tasks according to the latest height of each public chain
- taskapi: receives user-defined tasks
- store: receives monitoring addresses submitted by users, receives user subscriptions, actively pushes matching transactions, and persists data to storage
- Get the transactions you care about, as well as the raw transactions of the public chain
- Configure multiple nodes per public chain, with intelligent selection of the best node
- Provide HTTP, WS, and other protocols to suit more scenarios
- Components can be used independently or in combination
- Functions are configurable and can be combined freely according to business needs
- Support for various public chains
- Data is traceable and replayable
- Exception monitoring and task-retry capabilities
- Visualization of the task execution process
- Submit custom tasks
- Set transaction filter conditions
- User subscriptions
- Active push
- Subscriptions for various business scenarios: asset transactions, token transfers, staking, activation, etc.
- Historical data backup
blockchain | chaincode | desc |
---|---|---|
ethereum | 200 | ETH |
ethereum.Goerli | 20001 | ETH |
tron | 205 | TRX |
tron.shasta | 2051 | TRX |
polygon-pos | 201 | POLYGON |
arbitrum | 42161 | ARB |
optimism | 42162 | OP |
base | 8453 | BASE |
avalanche-C | 43114 | AVAX |
bitcoin | 300 | BTC |
filecoin | 301 | FIL |
ripple | 310 | XRP |
bsc | 202 | BNB |
bsc.testnet | 2021 | BNB |
tips:
chaincode can be set by clients
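In API requests, a chain is identified by its chaincode, passed via the blockChain field; a minimal illustrative excerpt (see the full store API examples later in this document):

{
    "blockChain": 200,
    "address": "0x28c6c06298d514db089934071355e5743bf21d61"
}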
- CPU with 4+ cores
- 16 GB+ RAM
- 200 GB of free storage; a high-performance SSD with at least 512 GB of free space is recommended
- Go version >= 1.20
- git: install the latest version of git if it is not already installed
- cURL: install the latest version of cURL if it is not already installed
- Docker: install the latest version of Docker if it is not already installed
- docker-compose: install the latest version of docker-compose if it is not already installed
mkdir supernode && cd supernode
git clone https://github.com/sunjiangjun/supernode.git
cd supernode
- ./build/scripts : database initialization scripts; mainly modify the database name (database), default: ether2
- ./build/config : configuration files required for system startup; the following files and fields often need to be modified
- blockchain_config.json, collect_config.json : mainly modify the node addresses of each public chain
- store_config.json : the DbName field
- task_config.json : the BlockMin, BlockMax, and other fields
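A minimal illustrative excerpt of the fields called out above (field names are from the notes; the values are placeholders and the real files contain more fields):

store_config.json:
{
    "DbName": "ether2"
}

task_config.json:
{
    "BlockMin": 18117000,
    "BlockMax": 18118000
}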
- Initialize the environment for supernode operation
docker-compose -f docker-compose-single-base.yml up -d
For quick deployment, execute the following command:
docker-compose -f docker-compose-single-base-app.yml up -d
With shortcut-mode deployment, step 5 can be skipped, but it is usually only suitable for simple business requirements.
- Other commands that may be used
#view
docker-compose -f docker-compose-single-base.yml ps
#del
docker-compose -f docker-compose-single-base.yml down
docker-compose -f docker-compose-single-base.yml down -v
#rebuild
docker-compose -f docker-compose-single-base-app.yml build supernode
notes:
- ClickHouse defaults: account: test, password: test
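To verify the credentials, you can connect with clickhouse-client inside the ClickHouse container (the container name below is a placeholder; look it up with docker ps):

docker exec -it <clickhouse-container> clickhouse-client --user test --password test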
To broaden supernode's range of applications, we adopt a component-based design and therefore provide the following three ways to run and deploy supernode.
- Download the configuration files
- Download the installation package
- Add a hosts entry
192.168.2.9 easykafka
Here, 192.168.2.9 is the Kafka server's IP; replace it with your own server's IP. easykafka is the Kafka service name and must be consistent with the environment script; in most cases it does not need to change.
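On Linux, for example, the entry can be appended like this (replace the IP with your own Kafka server's IP first):

echo "192.168.2.9 easykafka" | sudo tee -a /etc/hosts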
- Run supernode
./supernode -collect ./config/collect_config.json -task ./config/task_config.json -blockchain ./config/blockchain_config.json -taskapi ./config/taskapi_config.json -store ./config/store_config.json
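To keep the process alive after the shell session ends, one common, supernode-agnostic pattern is nohup (the output file name is illustrative):

nohup ./supernode -collect ./config/collect_config.json -task ./config/task_config.json -blockchain ./config/blockchain_config.json -taskapi ./config/taskapi_config.json -store ./config/store_config.json > supernode.out 2>&1 &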
- Build image
#create image
docker build -f Dockerfile -t supernode:1.0 .
#view image
docker images |grep supernode
- Run supernode
docker run --name supernode -p 9001:9001 -p 9002:9002 -p 9003:9003 --network easynode_easynode_net -v /root/easy_node/supernode/config/:/app/config/ -v /root/app/log/:/app/log/ -v /root/app/data:/app/data/ -d supernode:1.0
notes:
- network easynode_net : must be consistent with docker-compose-single-base.yml
- -v file mounts : the container-side paths are fixed; change the host-side paths to absolute paths available on your machine
- The directory structure of ./config is as follows; see the documentation for the specific contents of each configuration file. The file names must not be changed.
./config
./blockchain_config.json
./collect_config.json
./store_config.json
./task_config.json
./taskapi_config.json
- If step 4 executes docker-compose-single-base-app.yml, you can skip step 5
- Networks setting
Determine the network name created in step 4 using the following command, and set the networks.default.name field in docker-compose-cluster-supernode.yml accordingly
docker network ls|grep easynode_net
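For example, if the command above prints easynode_easynode_net, the corresponding excerpt of docker-compose-cluster-supernode.yml would look like this (illustrative):

networks:
  default:
    name: easynode_easynode_net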
- Manage services
Add or remove the related services according to the needs of your specific scenario.
- Run the cluster
docker-compose -f docker-compose-cluster-supernode.yml up -d
For the usage and function of each service, see the details.
- Check whether the operating environment is started
docker-compose -f docker-compose-single-base.yml ps
- Check Kafka data
# review Kafka
docker exec -it 25032fc8414e kafka-topics.sh --list --bootstrap-server easykafka:9092
#review Kafka data
docker exec -it 25032fc8414e kafka-console-consumer.sh --group g1 --topic ether_tx --bootstrap-server easykafka:9092
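The container ID (25032fc8414e above) is machine-specific; look up your own Kafka container first, assuming its name contains the easykafka service name:

docker ps | grep easykafka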
- Check supernode
#review app container
docker ps |grep supernode
# review app log
docker logs -n 10 24a81a2a8e89
notes:
- Port 9001 of supernode: Instructions
- Port 9002 of supernode: Instructions
- Port 9003 of supernode: Instructions
- Add monitoring address
# create token
curl --location --request POST 'http://localhost:9003/api/store/monitor/token' \
--header 'Content-Type: application/json' \
--data-raw '{
"email": "[email protected]"
}'
#submit monitoring address
curl --location --request POST 'http://localhost:9003/api/store/monitor/address' \
--header 'Content-Type: application/json' \
--data-raw '{
"blockChain": 200,//Not required, if not passed the default 0, it means cross-chain monitoring
"address": "0x28c6c06298d514db089934071355e5743bf21d61",
"token": "5fe5f231-7051-4caf-9b52-108db92edbb4"
}'
#submit subscription rules
curl --location --request POST 'localhost:9003/api/store/filter/new' \
--header 'User-Agent: apifox/1.0.0 (https://www.apifox.cn)' \
--header 'Content-Type: application/json' \
--data-raw '[
{
"token": "afba013c-0072-4592-b8cd-304fa456f76e",
"blockChain": 205,
"txCode": "1",
"params": ""
}
]'
- Submit a subscription and receive the pushed results
url: ws://localhost:9003/api/store/ws/{token}
receive:
{
"code": 1, //message type
"blockChain": 200, //chain code
"data": {
"id":1698395758827420000,
"chainCode":200,
"blockHash":"0xbe36cdcfce377f7415bd91be3be10555fc705cd9c48ac077b3de9a1c298c4a36",
"blockNumber":"18117360",
"txs":[
{
"contractAddress":"0xd9ec62e6927082ad28b73fb5d4b5e9d571e00768",
"from":"0x0000000000000000000000000000000000000000",
"method":"Transfer",
"to":"0x2c2ab61d2506308c0017f26c36e81e5b22942d57",
"value":"1315",
"tokenType":721,
"index":9
}
],
"fee":"0.00182692522485181",
"from":"0x2c2ab61d2506308c0017f26c36e81e5b22942d57",
"hash":"0x2b7b684d469c365e0f8d9e2bf94bee672878aff4604b7715a48a7f37432f1a21",
"status":1,
"to":"0xd9ec62e6927082ad28b73fb5d4b5e9d571e00768",
"txTime":"1684390019000"
}
}
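One way to open the subscription socket is with a generic WebSocket client such as wscat (a hypothetical session, reusing the token from the earlier example):

# install wscat if needed: npm install -g wscat
wscat -c "ws://localhost:9003/api/store/ws/5fe5f231-7051-4caf-9b52-108db92edbb4"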
tips:
The base code is forked from https://github.com/sunjiangjun/supernode, which was also developed by me.