From aec46d8c3c0bc5f7de18d89e317bc5769f24845b Mon Sep 17 00:00:00 2001 From: ChengenH Date: Fri, 24 May 2024 11:07:32 +0800 Subject: [PATCH] docs: remove repetitive words --- applications/CoinFabrik_On_Ink_Integration_Tests_2.md | 2 +- applications/CoinFabrik_On_Ink_Integration_Tests_3.md | 2 +- applications/Dante_Network.md | 4 ++-- ...NFT_Bridge_Protocol_for_NFT_Migration_and_Data_Exchange.md | 4 ++-- applications/RainbowDAO Protocol ink Phase 1.md | 2 +- applications/Subsembly-GRANDPA.md | 2 +- applications/Syncra.md | 2 +- applications/Tellor.md | 2 +- applications/blockchainia.md | 2 +- applications/chainviz.md | 2 +- applications/cryptex.md | 2 +- applications/dot-login.md | 2 +- applications/dotnix.md | 2 +- applications/fair_squares.md | 2 +- applications/helixstreet.md | 2 +- applications/pallet-drand-client.md | 2 +- applications/pallet_supersig.md | 2 +- applications/perun_channels.md | 2 +- applications/polk-auction.md | 2 +- applications/polkamusic.md | 2 +- applications/project_bodhi.md | 2 +- applications/project_silentdata.md | 2 +- applications/skynet-substrate-integration.md | 2 +- docs/RFPs/appi.md | 2 +- 24 files changed, 26 insertions(+), 26 deletions(-) diff --git a/applications/CoinFabrik_On_Ink_Integration_Tests_2.md b/applications/CoinFabrik_On_Ink_Integration_Tests_2.md index 684228a91ea..f9f7a4aa52b 100644 --- a/applications/CoinFabrik_On_Ink_Integration_Tests_2.md +++ b/applications/CoinFabrik_On_Ink_Integration_Tests_2.md @@ -157,7 +157,7 @@ After finishing the Milestone 1: Execution and Further Analysis, we will submit | Number | Deliverable | Specification | | ----- | ----------- | ------------- | | 0a. | License | MIT -| 0b. | Documentation | We will write a comprehensive report that compares the functionalities of integration tests and E2E (End-to-End) tests. This report will focus on the the functions to be implemented in this milestone, corresponding to issues 3-invoke_contract_delegate(), 4-invoke_contract(), 6-set_code_hash(), 8-caller_is_origin(), 9-code_hash(), 10-own_code_hash().

Our report will also document the implementation of any missing functionalities, or correct implementation differences, for the 13 functions with issues 12 through 24. For this group, we will document any additional work that was required in order to ensure consistency between integration and e2e tests.

If applicable, we will suggest additional tests outside of the scope of this milestone. Particularly, for functions declared outside of the env_access.rs file, but that could be related to integration or e2e testing. +| 0b. | Documentation | We will write a comprehensive report that compares the functionalities of integration tests and E2E (End-to-End) tests. This report will focus on the functions to be implemented in this milestone, corresponding to issues 3-invoke_contract_delegate(), 4-invoke_contract(), 6-set_code_hash(), 8-caller_is_origin(), 9-code_hash(), 10-own_code_hash().

Our report will also document the implementation of any missing functionalities, or correct implementation differences, for the 13 functions with issues 12 through 24. For this group, we will document any additional work that was required in order to ensure consistency between integration and e2e tests.

If applicable, we will suggest additional tests outside of the scope of this milestone. Particularly, for functions declared outside of the env_access.rs file, but that could be related to integration or e2e testing. | 0c. | Testing and Testing Guide | The newly developed functionalities will be documented and tested following existing [contribution guidelines](https://github.com/paritytech/ink/blob/master/CONTRIBUTING.md). A testing guide will be included. | 0d. | Docker | Does not apply at this stage. | 0e. | Article | We will publish an updated report summary in our blog at https://blog.coinfabrik.com/. diff --git a/applications/CoinFabrik_On_Ink_Integration_Tests_3.md b/applications/CoinFabrik_On_Ink_Integration_Tests_3.md index f2c23693b53..a80c246d15e 100644 --- a/applications/CoinFabrik_On_Ink_Integration_Tests_3.md +++ b/applications/CoinFabrik_On_Ink_Integration_Tests_3.md @@ -135,7 +135,7 @@ To address this issue, we will submit an initial report to the ink! development | Number | Deliverable | Specification | | ----- | ----------- | ------------- | | 0a. | License | MIT -| 0b. | Documentation | We will write a comprehensive report that compares the functionalities of integration tests and E2E (End-to-End) tests. This report will focus on the the functions to be implemented in this milestone, corresponding to issues `3-invoke_contract_delegate()`, `4-invoke_contract()`, `6-set_code_hash()`, `8-caller_is_origin()`, `9-code_hash()`, `10-own_code_hash()`, and `17-balance()`.

In the first week of this milestone, we will contact the ink! development team to provide an initial report on `14-weight_to_fee()`, documenting our efforts to identify the source of its implementation issues and seeking collaboration to assess the feasibility of resolving them. We will document any progress and implementations related to `14-weight_to_fee()` in our final milestone report.

We will document any additional work that was required in order to ensure consistency between integration and e2e tests.

If applicable, we will suggest additional tests outside of the scope of this milestone. Particularly, for functions declared outside of the `env_access.rs` file, but that could be related to integration or e2e testing. +| 0b. | Documentation | We will write a comprehensive report that compares the functionalities of integration tests and E2E (End-to-End) tests. This report will focus on the functions to be implemented in this milestone, corresponding to issues `3-invoke_contract_delegate()`, `4-invoke_contract()`, `6-set_code_hash()`, `8-caller_is_origin()`, `9-code_hash()`, `10-own_code_hash()`, and `17-balance()`.

In the first week of this milestone, we will contact the ink! development team to provide an initial report on `14-weight_to_fee()`, documenting our efforts to identify the source of its implementation issues and seeking collaboration to assess the feasibility of resolving them. We will document any progress and implementations related to `14-weight_to_fee()` in our final milestone report.

We will document any additional work that was required in order to ensure consistency between integration and e2e tests.

If applicable, we will suggest additional tests outside of the scope of this milestone. Particularly, for functions declared outside of the `env_access.rs` file, but that could be related to integration or e2e testing. | 0c. | Testing and Testing Guide | The newly developed functionalities will be documented and tested following existing [contribution guidelines](https://github.com/paritytech/ink/blob/master/CONTRIBUTING.md). A testing guide will be included. | 0d. | Docker | Does not apply at this stage. | 0e. | Article | We will publish an updated report summary in our blog at https://blog.coinfabrik.com/. diff --git a/applications/Dante_Network.md b/applications/Dante_Network.md index 069e835dd7f..1c4678adeb1 100644 --- a/applications/Dante_Network.md +++ b/applications/Dante_Network.md @@ -173,7 +173,7 @@ We’ve published a tasting version of the dev SDK for multi-chain DApp develope | Number | Deliverable | Specification | | -----: | ----------- | ------------- | | 0a. | License | GPLv3 | -| 0b. | Documentation | We will provide both **inline documentation** of the code and a basic **tutorial** that explains how a user can use the the SDK of Dante smart contract developed in ink! to build their own Omnichain DApps. At this stage, the tutorial will cover how to make message communications and contract invocations between Polkadot’s smart contract parachains and other chains(like Ethereum).| +| 0b. | Documentation | We will provide both **inline documentation** of the code and a basic **tutorial** that explains how a user can use the SDK of Dante smart contract developed in ink! to build their own Omnichain DApps. At this stage, the tutorial will cover how to make message communications and contract invocations between Polkadot’s smart contract parachains and other chains(like Ethereum).| | 0c. | Testing Guide | Core functions will be fully covered by unit tests to ensure functionality and robustness. In the guide, we will describe how to run these tests. | | 0d. | Article | We will publish an **article** that explains what was done as part of the grant. And we will publish a series of articles that explains how Dante Protocol Stack works from a high-level perspective. The content of the articles will be consistent with the functions at this stage. | 1. | (ink!)smart contracts: Service expression layer | Development and testing of Service expression layer on some of Polkadot’s smart contract parachains (Astar/Edgeware); Demos for communication and interoperation between one of Polkadot’s smart contract platforms and Ethereum, Near, Avalanche.| @@ -192,7 +192,7 @@ We’ve published a tasting version of the dev SDK for multi-chain DApp develope | Number | Deliverable | Specification | | -----: | ----------- | ------------- | | 0a. | License | GPLv3 | -| 0b. | Documentation | We will provide both **inline documentation** of the code and a basic **tutorial** that explains how a user can use the the SDK of Dante smart contract developed in ink! to build their own Omnichain DApps. At this stage, the tutorial will cover how to use SQoS to balance security and scalability when making multi-chain operations. | +| 0b. | Documentation | We will provide both **inline documentation** of the code and a basic **tutorial** that explains how a user can use the SDK of Dante smart contract developed in ink! to build their own Omnichain DApps. At this stage, the tutorial will cover how to use SQoS to balance security and scalability when making multi-chain operations. | | 0c. 
| Testing Guide | Core functions will be fully covered by unit tests to ensure functionality and robustness. In the guide, we will describe how to run these tests. | | 0d. | Docker | We will provide a Dockerfile(s) that can be used to test all the functionality delivered with this milestone. | | 0e. | Article | We will publish an **article** that explains what was done as part of the grant. And we will publish a series of articles that explains how Dante Protocol Stack works from a high-level perspective. The content of the articles will be consistent with the functions at this stage. diff --git a/applications/NFT_Bridge_Protocol_for_NFT_Migration_and_Data_Exchange.md b/applications/NFT_Bridge_Protocol_for_NFT_Migration_and_Data_Exchange.md index 5108a34a24d..b2c3c45c820 100644 --- a/applications/NFT_Bridge_Protocol_for_NFT_Migration_and_Data_Exchange.md +++ b/applications/NFT_Bridge_Protocol_for_NFT_Migration_and_Data_Exchange.md @@ -136,7 +136,7 @@ This process is NOT a fully decentralized trustless process itself in order to a - If after a time-limit, either of those acknowledgement are missing, the migration is reverted : the original token can be withdrawn freely by the sender, and the migrated token is burned. - Checked migrations need to be possible for either EVM => EVM, *=> EVM or EVM =>* migrations. - Checked migrations need to allow any third party to "check" the migration and publish a standardized signed message that the migration did indeed happen. -- NB : This only cover the migration of NFTs to a new universe, not the redemption of the the NFT back to it's origin universe. +- NB : This only cover the migration of NFTs to a new universe, not the redemption of the NFT back to it's origin universe. - Licensed under the [Unlicense](https://unlicense.org) *The main purpose of this migration process is for NFT publishers to allow their users to effortlessly migrate their tokens with the least amount of efforts required. NFT publishers could offer users to do the whole migration with a single gas spending approve() from an NFT owner and the rest trough meta-transactions by the publisher. The publisher would then sign the migration as properly done after having minted and transferred the token on the destination blockchain. By essence, most NFTs are not trustless assets as their publishers own real world IP rights to them, and it is hence acceptable to use said publishers as relayers. This is standardizing a process that would otherwise require the publisher to update their original NFT smart contracts or NFT owners to burn their original NFT token in order to get a new one minted on the destination universe.* @@ -155,7 +155,7 @@ We will write up the ‘Trustless Migration’ process which is designed to be u - Snowfork is already building a substrate module allowing specifically for Ethereum Smart contract reading. If a Substrate-built parachain implement those reading capacities, then implementation of this process should be straightforward. - In the case of EVM => EVM ERC-721 migration without trustless reading, Chainbridge already exist. However, their contracts requires administrator input for new contract registration as well as lacking features that are NFT specific, such as preventing minting technically correct but legally counterfeit tokens. -- NB : This only cover the migration of NFTs to a new universe, not the redemption of the the NFT back to it's origin universe. 
+- NB : This only cover the migration of NFTs to a new universe, not the redemption of the NFT back to it's origin universe. - Licensed under the [Unlicense](https://unlicense.org) ### Milestone 4 — Standard and Documentation for Cross-universe Migration diff --git a/applications/RainbowDAO Protocol ink Phase 1.md b/applications/RainbowDAO Protocol ink Phase 1.md index b0561630cec..ac2f898079b 100644 --- a/applications/RainbowDAO Protocol ink Phase 1.md +++ b/applications/RainbowDAO Protocol ink Phase 1.md @@ -284,7 +284,7 @@ The following are the details of the : RainbowDao protocol: - ##### 3.DAO User Management System - It’s about the the member management of the entire RainbowDAO protocol. This management pattern can be employed in all independent DAOs, they can choose one or more parts of the system and they can combine these parts for their own use. Currently, there’re merely some simple functions there. We will develop more sophisticated ones based on the reality of RainbowDAO protocol. + It’s about the member management of the entire RainbowDAO protocol. This management pattern can be employed in all independent DAOs, they can choose one or more parts of the system and they can combine these parts for their own use. Currently, there’re merely some simple functions there. We will develop more sophisticated ones based on the reality of RainbowDAO protocol. - ##### 4.DCV Management System diff --git a/applications/Subsembly-GRANDPA.md b/applications/Subsembly-GRANDPA.md index e7c2bd26ae3..5e089252ba2 100644 --- a/applications/Subsembly-GRANDPA.md +++ b/applications/Subsembly-GRANDPA.md @@ -112,7 +112,7 @@ The team already spend time on implementing GRANDPA, by updating the Subsembly C ## Development Roadmap :nut_and_bolt: -Described below is a practical approach to the implementation of the GRANDPA module along with the the other auxiliary modules that are required. +Described below is a practical approach to the implementation of the GRANDPA module along with the other auxiliary modules that are required. 1. Session 1. Implement session period configuration (`n` number of blocks) diff --git a/applications/Syncra.md b/applications/Syncra.md index 3764804ae1f..551191357bd 100644 --- a/applications/Syncra.md +++ b/applications/Syncra.md @@ -79,7 +79,7 @@ Known drawbacks are the security concerns, related with storing private keys on ### Data Model -Syncra uses IPFS as well as MongoDB for storing additional data about DAOs, proposals, and user stats. The purpose is to minimise the data footprint on the blockchain itself, as storing data onchain is costly, and not very performant. Only the critical data is stored inside of the the DAO Smart Contract’s. +Syncra uses IPFS as well as MongoDB for storing additional data about DAOs, proposals, and user stats. The purpose is to minimise the data footprint on the blockchain itself, as storing data onchain is costly, and not very performant. Only the critical data is stored inside of the DAO Smart Contract’s. DAOs, Proposals titles, and descriptions are stored on the IPFS, and then corresponding IPFS hashes are set on the DAO contract's storage. In this way, users can be sure that the data about the given DAO or Proposal won’t be modified, nor fade-away if the server ever goes down. The same applies to storing images, as we use web3 storage for image upload. 
diff --git a/applications/Tellor.md b/applications/Tellor.md index b21734eb7db..abd618ca734 100644 --- a/applications/Tellor.md +++ b/applications/Tellor.md @@ -202,7 +202,7 @@ Details: A new Substrate pallet will be required which includes the core oracle | **0c.** | Testing and Testing Guide | Core functions will be fully covered by comprehensive unit tests to ensure functionality and robustness. In the guide, we will describe how to run these tests. | | **0d.** | Docker | We will provide a Dockerfile(s) that can be used to test all the functionality delivered with this milestone. | | 1 | Substrate Oracle pallet design and integration | We will provide the Substrate oracle pallet | -| 2 | Tests and a guide for testing functionallity of the the pallet with integration of a mock project on selected parachains| We will provide tests and a guide to test cross functionality of the system for interactions between the EVM chain and consumer chain and oracle pallet (meaning test the functinallity between milestone 1 and 2 delivarable 1 - solidity contracts, pallet, XCM)| +| 2 | Tests and a guide for testing functionallity of the pallet with integration of a mock project on selected parachains| We will provide tests and a guide to test cross functionality of the system for interactions between the EVM chain and consumer chain and oracle pallet (meaning test the functinallity between milestone 1 and 2 delivarable 1 - solidity contracts, pallet, XCM)| | 3 | Documentation/ Usage Examples| We will provide documenatation and usage examples for the system. | diff --git a/applications/blockchainia.md b/applications/blockchainia.md index adfb60ee1b6..3836d5875be 100644 --- a/applications/blockchainia.md +++ b/applications/blockchainia.md @@ -134,7 +134,7 @@ We have begun development on a server authoritative online multiplayer game engi | **0b.** | Documentation | We will provide both **inline documentation** of the code and a basic **tutorial** that explains how a user can (for example) spin up one of our Substrate nodes and send test transactions, which will show how the new functionality works. This project also includes creating documentation to will allow our community to test and involve themselves in our multiplayer gaming infrastructure. We hope to receive and respond to feedback on this community documentation in Milestone 2.| | **0c.** | Testing and Testing Guide | Core functions will be fully covered by comprehensive unit tests to ensure functionality and robustness. Unit tests will be written for relevant game server functionality. Unit tests will be written for the DeathToll pallet. In each guide (in each project's Github repo), we will describe how to run these tests. Manual integration testing will be through in-game functionality (i.e, Given an enemy is eliminated on server, the proper state functions are executed on chain, and the leader board updates through the in-game chain browser) | | **0d.** | Docker | We will provide Dockerfiles that can be used to run and test all the functionality delivered with this milestone. This will include a deployable "Full Node" that encases both a game server and substrate node. These will interact with our game client and substrate parachain respectively. A successful MVP will allow a game client to register events on the game server (Server-Authoritative model), which is then written to the parachain ledger. Upon completion of the initial game "level", an "accolade" NFT will be deposited via ink! 
smart contract to the wallet that completed the required in-game tasks. | -| 1. | Substrate module: Game Engine Events | The Game Events Substrate module will contain functionality for asynchronously administering the the match between the game server and blockchain node. It is likely that we will break this into multiple pallets, one for core gaming functionality, and one specific to the needs of the DeathToll game-type. The initial pallet features include functionality to create, administer, and write the outcome of a match to the chain, updating the wallets of all players with their earned experience (which is key to the economy of Blockchainia). This module will work on top of Ajuna's service layer to aggregate and marshall game state information relevant to tracked leader board statistics and write to chain. These events will include in game events like player eliminations, deaths, attacks attempted/missed, wins, and other in-game achievements in our continuously expanding list of features. Eventually, this list of configurable features will allow a server owner to run game types similar to those seen in other first-person shooters, like death match, hostage rescue, and capture the flag. The configurable nature of our servers will allow our community to self-explore and find a region of our community that suits their personal play style and temperament. +| 1. | Substrate module: Game Engine Events | The Game Events Substrate module will contain functionality for asynchronously administering the match between the game server and blockchain node. It is likely that we will break this into multiple pallets, one for core gaming functionality, and one specific to the needs of the DeathToll game-type. The initial pallet features include functionality to create, administer, and write the outcome of a match to the chain, updating the wallets of all players with their earned experience (which is key to the economy of Blockchainia). This module will work on top of Ajuna's service layer to aggregate and marshall game state information relevant to tracked leader board statistics and write to chain. These events will include in game events like player eliminations, deaths, attacks attempted/missed, wins, and other in-game achievements in our continuously expanding list of features. Eventually, this list of configurable features will allow a server owner to run game types similar to those seen in other first-person shooters, like death match, hostage rescue, and capture the flag. The configurable nature of our servers will allow our community to self-explore and find a region of our community that suits their personal play style and temperament. | 2. | Unity Game Engine and Configurable Server/Client | We will embed our substrate based chain interactions into our game engine. While the game client and server communicate to drive game play, the game server will also publish certain events to the game chain via Ajuna's Service layer. Each of these processes will initially be deployable via docker container. The game server architecture will operate from a community driven DAO implemented with existing society and membership pallets available in the Substrate store. The value in the deliverable for the game/server lies in its online multiplayer architecture. We will use a server authoritative model embedded with our web3 backend via the substrate modules created in deliverable 1. We will layer these on top and alongside services provided by Ajuna to interact with our game chain. 
To start a match, the server must request that a "match" be created on chain via ink! smart contract after collecting a nominal fee from connected players. The server will also request the information necessary to spawn environment enemies in the game map from owned NFTs on chain at random. Lastly, players will spawn and game play will begin. In game events, such as when a player eliminates another player or environment enemy, will be written to chain. These streams of game commands which make up the packets sent by the client to the server are used to compensate for latency with methods like prediction and interpolation. Our engine will be similar to that used in Quake (id Software, Microsoft) and Half-Life (Valve, Sierra Studios), using various methods to compensate for latency and ensure a pleasant user experience, the key difference being that interactions leading to specific events will be logged to the chain. With this advancement in technology, Blockchainia will redefine how streamers and creators interact with their followers, and re-imagine how games can interact with a player. Gamings greatest moments will be immortalized on chain, allowing players the chance to engrave their accomplishments on web3's Stanley Cup, our eternal sliding time window leader board. Milestone 1 begins with a simple MVP to onboard users and test our engine before expanding on our functionality and appealing to a broader market. We will receive community feedback on how to fairly and equitably handle situations including server crashes and moderation of hacking and toxicity. This will lead to a code of conduct in deliverable 2 as we expand our features to include a map and item builder, as well as a strong in-game economy. We will also expand on a configurable set of features that affords future web3 game developers to configure our engine for their own games and expand on its features. | 3. | Unity Assets | Besides the NFT playable characters released in our games collections, all assets created by our team will be released free to use by other developers who would like to use our ecosystem to create their own games and mods. These will include wall and floor textures, doors, environment enemies, and other sprites used throughout our games. diff --git a/applications/chainviz.md b/applications/chainviz.md index 2cee28506e6..53fcb7fea7b 100644 --- a/applications/chainviz.md +++ b/applications/chainviz.md @@ -29,7 +29,7 @@ Application in its current alpha version provides the following features: **This application is to fund the building of the first major version of Chainviz**, with the following features/visualizations: -- Complete rebuild of the the existing functionality with improved UI/UX and WebGL models and animations +- Complete rebuild of the existing functionality with improved UI/UX and WebGL models and animations - Additional support for Polkadot - New visualizations - Parachains diff --git a/applications/cryptex.md b/applications/cryptex.md index 1e1da2fe8e4..d6e8f6638d0 100644 --- a/applications/cryptex.md +++ b/applications/cryptex.md @@ -64,7 +64,7 @@ Our implementation makes use of the [SessionManager](https://paritytech.github.i Before a new session starts and after the initial session, a semi trusted node, the *coordinator*, is responsible for facilitating the threshold secret sharing. It does this by generating a secret polynomial and secret shares, encrypting them, and distributing them to upcoming session validators prior to the session start. 
To make things even simpler, the coordinator will also derive public keys for each of the authorities based on the secret shares, and publish the public keys along with the encrypted secrets. - The coordinator also chooses system parameters for the IBC, a generator $P \in \mathbb{G}_1$, then calculates $P_{pub} = sP$, where $s$ is the secret created the the TSS scheme. Then publish $(P, P_{pub}, (k_1, Q_1), ..., (k_n, Q_n))$ on-chain, where $k_1, ..., k_n$ and $Q_i$ are the public keys. are the encrypted secrets. This will happen at the end of a session during the `new-session` function. + The coordinator also chooses system parameters for the IBC, a generator $P \in \mathbb{G}_1$, then calculates $P_{pub} = sP$, where $s$ is the secret created the TSS scheme. Then publish $(P, P_{pub}, (k_1, Q_1), ..., (k_n, Q_n))$ on-chain, where $k_1, ..., k_n$ and $Q_i$ are the public keys. are the encrypted secrets. This will happen at the end of a session during the `new-session` function. **Keygen and Identity** Each slot in any given epoch has a unique role associated with it, which is calculated from the slot schedule. For any given address, epoch, and slot number, we calculate a unique role by hashing the address, epoch, and slot number. Later on when encrypting, we will use this value to verify signatures. That is, the public key is $\hat{Q}_i = H(ID_i = (A_i || e_k || sl_r))$. diff --git a/applications/dot-login.md b/applications/dot-login.md index 81d9144328b..4f9f814b258 100644 --- a/applications/dot-login.md +++ b/applications/dot-login.md @@ -204,7 +204,7 @@ The planned milestones include: | 0e. | Article | Article that covers the implementation of the two modules, how to use them, how this development is significant for the ecosystem and mainstream adoption as well as our long-term vision for this project. | | 1. | Transaction Signature Verification Mechanism | Develop a mechanism in `zkEphemeralKeys` to verify the signatures of transactions against the registered ephemeral keys. A `SIGNATURE_VERIFIED` (or similar) event will be emitted upon successful verification. | | 2. | Implement `execute_transfer` Extrinsic | Develop the `execute_transfer` extrinsic within the `zkEphemeralKeys`` pallet. It will accept all necessary parameters for a transfer, including an ephemeral key signature. | | 3. | `zkEphemeralKeys`-internal Transfer Functionality | Develop an internal function within the `zkEphemeralKeys` pallet to handle the actual token transfer. This function will replicate the essential checks and logic of the balances pallet’s transfer mechanism and has to be updated, if the the balances pallet changes. While this dependency is not perfect, we think that's the best trade-off, because the alternative would be to change the balances pallet which is something we'd like to avoid. We might propose a change on the balances pallet at a later stage, to make this more flexible. Note that this deliverable will also include the handling and emitting of events to broadcast the success or failure of the transfer. | +| 3. | `zkEphemeralKeys`-internal Transfer Functionality | Develop an internal function within the `zkEphemeralKeys` pallet to handle the actual token transfer. This function will replicate the essential checks and logic of the balances pallet’s transfer mechanism and has to be updated, if the balances pallet changes. While this dependency is not perfect, we think that's the best trade-off, because the alternative would be to change the balances pallet which is something we'd like to avoid. 
We might propose a change on the balances pallet at a later stage, to make this more flexible. Note that this deliverable will also include the handling and emitting of events to broadcast the success or failure of the transfer. | #### Milestone 3 - Wallet (Extension) diff --git a/applications/dotnix.md b/applications/dotnix.md index 6903620c6d0..c602db93068 100644 --- a/applications/dotnix.md +++ b/applications/dotnix.md @@ -170,7 +170,7 @@ We have started designing the architecture and the interfaces between the differ | **0e.** | Article | We will publish an article explaining why we believe this project is a step in the right direction and what benefits we hope to bring to the polkadot and general staking services. | | 1. | Package Polkadot binary | We will create a polkadot `nix flake` that packages the Polkadot validator. We will likely use [`polkadot.nix`](https://github.com/andresilva/polkadot.nix) since the work has already been done there and the License is MIT. We plan to contribute back to `polkadot.nix`. any changes we might need. | | 2. | NixOS Validator Module | We will create a NixOS module (similar to ansible playbook(s)`) that allows compatible CPU architecture (`linux-x86_64`, `linux-aarch64) NixOS system virtual machine to run the packaged `polkadot` binary as `systemd unit(s)`. | -| 3. | Secret Management | The previous `systemd unit` will be granted access to the validator secrets with the combined usage of [`systemd-vault`](https://github.com/numtide/systemd-vaultd) and Hashicorp's [`vault`](https://www.vaultproject.io/). `systemd units` have the the option to use the `LoadCredential=` to provide access to secrets from `vault` to the `systemd unit`.
Additonal Details:
- configure vault using [integrated storage](https://developer.hashicorp.com/vault/docs/internals/integrated-storage)
- configure `vault-agent` to integrate with `systemd-vault` i.e. by writing secrets to `/run/systemd-vaultd/secrets/$service_name.service.json`
- configure `systemd-vaultd`
- configure the validator to load credentials from systemd-vault
- restart the validator whenever secrets change | +| 3. | Secret Management | The previous `systemd unit` will be granted access to the validator secrets with the combined usage of [`systemd-vault`](https://github.com/numtide/systemd-vaultd) and Hashicorp's [`vault`](https://www.vaultproject.io/). `systemd units` have the option to use the `LoadCredential=` to provide access to secrets from `vault` to the `systemd unit`.
Additonal Details:
- configure vault using [integrated storage](https://developer.hashicorp.com/vault/docs/internals/integrated-storage)
- configure `vault-agent` to integrate with `systemd-vault` i.e. by writing secrets to `/run/systemd-vaultd/secrets/$service_name.service.json`
- configure `systemd-vaultd`
- configure the validator to load credentials from systemd-vault
- restart the validator whenever secrets change | | 4. | Tests for secret maangement | Services using the secret management will not start unless all required secrets have been provided (services will wait for the secrets). Removing required secrets will stop affected services (put them into a waiting state). Re-adding secrets will start waiting services. Modifying secrets will restart affected services | | 5. | Basic Security Hardening | Implement dynamic unprivileged user, restrict filesystem access, and other suggestions that `systemd-analyze security` command suggests. | | | | | diff --git a/applications/fair_squares.md b/applications/fair_squares.md index 78e3ab5b4a0..de703c9f74a 100644 --- a/applications/fair_squares.md +++ b/applications/fair_squares.md @@ -478,7 +478,7 @@ style B fill:#f9f,stroke:#333,stroke-width:4px | 0b. | Documentation | We will provide both **inline documentation** of the code and a basic **tutorial** that walks through how finalization works, how this role can be acquired and why it was created. The same applies to the representative. Both have a different function and can be called upon after specific situations. We will also walk through | 0c. | Testing Guide | Core functions will be fully covered by unit tests to ensure functionality and robustness. Also there will be integration tests covering the pallets and modules of milestone 1,2,3,4 . In the guide, we will describe how to run these tests. | | 0d. | Docker | We will provide a Dockerfile(s) that can be used to test all the functionality delivered with this milestone. | -| 0e. | Article | We will publish an **article** that explains the the usage of the functionality in additon to the previous milestones. The article will emphasize why finalization of the asset acquirement is required, why a representative is needed and what it's role is. How other previous stakeholders interact with the new functions and roles.| +| 0e. | Article | We will publish an **article** that explains the usage of the functionality in additon to the previous milestones. The article will emphasize why finalization of the asset acquirement is required, why a representative is needed and what it's role is. How other previous stakeholders interact with the new functions and roles.| | 1. | **pallet-finalizer** | Before the house title can be transfered to the fractional new owners when the sale of an asset is successful there needs to be checks done by the appointed notary. This is the authority, also in the current finalization of the title transfers. Notaries make sure the new buyers are aware of what they are buing and the notary makes sure no one else can write the asset on their name. In FS's case this swap is done by the blockchain, but the notary would give the green light. The finalization will be it's own pallet and functionality will be expanded in the future. The roles will be set in **pallet-roles**, which gives the notary and the land registry users rights to let the exchange pass. The transfer titles need to be proofs, the proof for now will be simplified random hashes, but only the notary role should be allowed to and sigantures by the notary roles | | 2. | Module: **representative** | When the sale of an asset is finalized, the new fractionalized owners are to be assigned a representative. The representative of the owners finds a tenant from the pools of tenants registered on-chain. The representatitive has to find the match based on region, total inhabitants and costs. 
The tenant will have to provide all this information. that will represent the house owners and find a tenant. | | 3. | pallet: **asset-management** | With the sale being finalized the new asset-owners can vote in a representative, vote over improvements, lay-down a representative if it doesn't perform or represent the best interest of the owners. This module is created in the **pallet-roles** and **pallet-voting** | diff --git a/applications/helixstreet.md b/applications/helixstreet.md index fc485941d79..1b4cdd26add 100644 --- a/applications/helixstreet.md +++ b/applications/helixstreet.md @@ -21,7 +21,7 @@ The project is in this stage just a pallet. ### Ecosystem Fit -The project extends the use of Polkadot ( or Kusama ) to a whole new use case. helixstreet would be an application specific parathread. It solves the problem of ownership of genomic data and the the human desire to conduct genealogical research without a central authority. +The project extends the use of Polkadot ( or Kusama ) to a whole new use case. helixstreet would be an application specific parathread. It solves the problem of ownership of genomic data and the human desire to conduct genealogical research without a central authority. ## Team :busts_in_silhouette: diff --git a/applications/pallet-drand-client.md b/applications/pallet-drand-client.md index 8f33c631b14..2fa89978107 100644 --- a/applications/pallet-drand-client.md +++ b/applications/pallet-drand-client.md @@ -31,7 +31,7 @@ In this project, we want to enable any Substrate project to consume publically, Randomness needs to be retrieved from an HTTP API via a provider, which is itself either a member of the drand network, or a broadcaster of the randomness. In either case, the pallet doesn't trust the provider blindly - instead, it can cryptographically verify the correctness of the randomness retrieved, by verifying it against the drand [chain randomness information](https://drand.love/developer/http-api/#chain-hash-info) contained in the Runtime. This chain intormation contains the network's well-known threshold public key, fixed interval for randomness generation, genesis time, and an initial random hash. -This chain randomness information and an optional [round](https://drand.love/developer/http-api/#chain-hash-public-round) 'checkpoint' can be set in the chain's `GenesisConfig`, allowing the network to immediately start using the randomness from the first block. If appropriate, the pallet can also contain a `UpdateOrigin` `Config` parameter, allowing the the beacon source to be modified by a trusted authority (eg. Council, Sudo, whitelisted account, etc) without a runtime update. Each round will be obtained via HTTP API calls made via an off-chain worker, and each round will be verified for cryptographic accuracy and timeliness before being consumable by the runtime. +This chain randomness information and an optional [round](https://drand.love/developer/http-api/#chain-hash-public-round) 'checkpoint' can be set in the chain's `GenesisConfig`, allowing the network to immediately start using the randomness from the first block. If appropriate, the pallet can also contain a `UpdateOrigin` `Config` parameter, allowing the beacon source to be modified by a trusted authority (eg. Council, Sudo, whitelisted account, etc) without a runtime update. Each round will be obtained via HTTP API calls made via an off-chain worker, and each round will be verified for cryptographic accuracy and timeliness before being consumable by the runtime. 
### Ecosystem Fit diff --git a/applications/pallet_supersig.md b/applications/pallet_supersig.md index 15e5d5c0d5a..f0d29d442cd 100644 --- a/applications/pallet_supersig.md +++ b/applications/pallet_supersig.md @@ -14,7 +14,7 @@ "A Supersig is a Multisig with superpowers" -A new pallet that improves on the the very-well-used multisig, but making it fit for functioning more like a larger fund, sub-treasury, DAO, or as we like to call it a DOrg. +A new pallet that improves on the very-well-used multisig, but making it fit for functioning more like a larger fund, sub-treasury, DAO, or as we like to call it a DOrg. This is Decentration's first grant proposal to Web3. We view this simple, suitable and potentially pervasivaely used pallet as a great opportunity to develop an ongoing relationship with Web3 Foundation, given that we have shared and aligned interests. diff --git a/applications/perun_channels.md b/applications/perun_channels.md index a307f645c50..21bab832d12 100644 --- a/applications/perun_channels.md +++ b/applications/perun_channels.md @@ -213,4 +213,4 @@ We plan to provide the corresponding off-chain functionality written Go in the c After Dieter Fishbein joined the Web3 Foundation, he reached out to Sebastian Stammler in June 2020 regarding grants, finally resulting in this application. **Other project funding.** -The project is partially supported by the the German Ministry of Education and Science (BMBF) through a Startup Secure grant. +The project is partially supported by the German Ministry of Education and Science (BMBF) through a Startup Secure grant. diff --git a/applications/polk-auction.md b/applications/polk-auction.md index 1890341637b..66ce13b07f5 100644 --- a/applications/polk-auction.md +++ b/applications/polk-auction.md @@ -50,7 +50,7 @@ The crowdloan page display the ongoing crowdlending campaigns as a list. The use Current parachains page : ![](https://i.imgur.com/0bdQ0xH.jpg) -The parachains page display the running parachains of the selected chain as a list. This page gathers the on-chain details of the selected parachain with some extra information, such as a link the the official website, a link to the github repository of the blockchain, etc. +The parachains page display the running parachains of the selected chain as a list. This page gathers the on-chain details of the selected parachain with some extra information, such as a link the official website, a link to the github repository of the blockchain, etc. Details about the UI (that I cannot render on paper) : diff --git a/applications/polkamusic.md b/applications/polkamusic.md index 40428992d23..3089d8efc97 100644 --- a/applications/polkamusic.md +++ b/applications/polkamusic.md @@ -269,7 +269,7 @@ Advanced Mode -> [Link](https://github.com/polkamusic/PolkaMusic/raw/master/RMP% | 0a. | License | Apache 2.0 / MIT / Unlicense | | 0b. | Documentation | Documents explaining the structure of Royalty Splitter Pallet and the streaming platform | | 0c. | Testing Guide | We will provide a guide to test by streaming a song on the front end and have the royalty processed through the Royalty Splitter Pallet, and verify the result on the block explorer. | -| 1. | Royalty Splitter Pallet | `royaltySplitter(to:src_ipfs,amount:u256,tokenId:u256)` For every request, Royalty Splitter Pallet will retrieve the SRC data, and split the incoming currency to its constituent owners based on the the ownership weights. | +| 1. 
| Royalty Splitter Pallet | `royaltySplitter(to:src_ipfs,amount:u256,tokenId:u256)` For every request, Royalty Splitter Pallet will retrieve the SRC data, and split the incoming currency to its constituent owners based on the ownership weights. | | 2. | Front-end | For every stream on the prototype frontend hosted on polkamusic.io, tokens are dispatched from the reward pool (a contract with $POLM tokens) to the Royalty Splitter, which will pay the artists as per the payment details in the SRC. | | 3. | Quorum Pallet Specification | A document outlining our mechanism to weed out the bad actors by introducing democratic trust scoring on submitted content | diff --git a/applications/project_bodhi.md b/applications/project_bodhi.md index 7e5520962f9..f4aeec2c222 100644 --- a/applications/project_bodhi.md +++ b/applications/project_bodhi.md @@ -205,7 +205,7 @@ Goal - Integrate with one existing Ethereum project ## Future Plans -Our vision is to provide a composable and innovative stack for EVM on Substrate. We've seen the power of composibility in DeFi on Ethereum, and it's not limited to one domain. Meanwhile we also want to break-free from Ethereum constraints, and offer innovative economic models, fight scams, and improve usability. We're determined to make this next level unified experience happen on Substrate, through the the Project Bodhi stack. We are going to eat our own dog food to use it for Acala. And we believe it will be useful for most domain-specific parachains/parathreads who have custom runtime and also want to leverage smart contracts. +Our vision is to provide a composable and innovative stack for EVM on Substrate. We've seen the power of composibility in DeFi on Ethereum, and it's not limited to one domain. Meanwhile we also want to break-free from Ethereum constraints, and offer innovative economic models, fight scams, and improve usability. We're determined to make this next level unified experience happen on Substrate, through the Project Bodhi stack. We are going to eat our own dog food to use it for Acala. And we believe it will be useful for most domain-specific parachains/parathreads who have custom runtime and also want to leverage smart contracts. Future development diff --git a/applications/project_silentdata.md b/applications/project_silentdata.md index 12b1b84df23..3de90a6e5e8 100644 --- a/applications/project_silentdata.md +++ b/applications/project_silentdata.md @@ -55,7 +55,7 @@ Silent Data includes a public facing web application and API that facilitate com The enclave decrypts the credentials and uses them to retrieve data from trusted data sources over HTTPS (initially Instagram). The enclave will then perform the preconfigured calculations and checks on the data in order to verify that the input query is true or false. The wallet signature and decrypted credentials are also used to prove that the owner of the wallet had access to those credentials and is most likely the owner of the data. -The enclave can optionally associate and attest to non-private data relating to the check (e.g. Instagram account name) by including it in the proof certificate data as key-value pairs. The proof certificate includes some standard information such as an identifier of the check performed, the wallet address of the user, a timestamp, and the identifier of the proof certificate on Silent Data, along with any extra non-private data. 
The enclave will then sign this certificate using an algorithm compatible with the target blockchain and send it to the the dApp smart contract for persistence and verification. +The enclave can optionally associate and attest to non-private data relating to the check (e.g. Instagram account name) by including it in the proof certificate data as key-value pairs. The proof certificate includes some standard information such as an identifier of the check performed, the wallet address of the user, a timestamp, and the identifier of the proof certificate on Silent Data, along with any extra non-private data. The enclave will then sign this certificate using an algorithm compatible with the target blockchain and send it to the dApp smart contract for persistence and verification. Substrate dApp smart contracts will initiate the Silent Data proof of Instagram account ownership check. The smart contracts will receive the proof certificate from the Silent Data enclave, and verify the attestations, proving that the owner of the wallet is the owner of an Instagram account, but proving they have direct access to login to the web2 account. diff --git a/applications/skynet-substrate-integration.md b/applications/skynet-substrate-integration.md index a1c3baeedb2..d38e26eb48b 100644 --- a/applications/skynet-substrate-integration.md +++ b/applications/skynet-substrate-integration.md @@ -144,7 +144,7 @@ The Skynet Labs team (recently renamed from Nebulous) was responsible for the de ## Development Status :open_book: -Preliminary research has been undertaken into the Polkadot ecosystem generally and substrate development specifically for the purposes of writing this proposal, along with coordinating with the Web3Foundation and Parity team member to make sure the the implementation plans and technical details were thorough and sensible. +Preliminary research has been undertaken into the Polkadot ecosystem generally and substrate development specifically for the purposes of writing this proposal, along with coordinating with the Web3Foundation and Parity team member to make sure the implementation plans and technical details were thorough and sensible. ## Development Roadmap :nut_and_bolt: diff --git a/docs/RFPs/appi.md b/docs/RFPs/appi.md index 8e7c5eadf1e..c51250a6e41 100644 --- a/docs/RFPs/appi.md +++ b/docs/RFPs/appi.md @@ -24,7 +24,7 @@ Depends on Treasury Recurring Payouts: https://github.com/paritytech/substrate/i ## Overview -The payout, approved by a Council motion for a specific pool of nodes would go to the the [payout script](#payout-script) (identified as an address), then the pool would distribute the funds based on the [database](#database). +The payout, approved by a Council motion for a specific pool of nodes would go to the [payout script](#payout-script) (identified as an address), then the pool would distribute the funds based on the [database](#database). ## Deliverables :nut_and_bolt: