diff --git a/docs/architecture/gas/README.md b/docs/architecture/gas/README.md index bca63eaa46f..e03a625d2cb 100644 --- a/docs/architecture/gas/README.md +++ b/docs/architecture/gas/README.md @@ -148,7 +148,7 @@ shard. In terms of gas costs, each account is conceptually its own shard. This makes dynamic resharding possible without user-observable impact. When the send step is performed, the minimum required gas to start execution of -that action is known. Thus, if the receipt has not enough gas, it can be aborted +that action is known. Thus, if the receipt does not have enough gas, it can be aborted instead of forwarding it. Here we have to introduce the concept of used gas. `gas_used` is different from `gas_burnt`. The former includes the gas that needs @@ -181,7 +181,7 @@ smart contract) are charged only at the receiver. Thus, they have only one value to define them, in contrast to action costs. The most fundamental dynamic gas cost is `wasm_regular_op_cost`. It is -multiplied with the exact number of WASM operations executed. You can read about +multiplied by the exact number of WASM operations executed. You can read about [Gas Instrumentation](https://nomicon.io/RuntimeSpec/Preparation#gas-instrumentation) if you are curious how we count WASM ops. diff --git a/docs/architecture/how/README.md b/docs/architecture/how/README.md index 6b50bacf8b7..0e2586625e0 100644 --- a/docs/architecture/how/README.md +++ b/docs/architecture/how/README.md @@ -36,7 +36,7 @@ There are several important actors in neard: * `ClientActor` - Client actor is the “core” of neard. It contains all the main logic including consensus, block and chunk processing, state transition, garbage - collection, etc. Client actor is single threaded. + collection, etc. Client actor is single-threaded. * `ViewClientActor` - View client actor can be thought of as a read-only interface to **client**. 
It only accesses data stored in a node’s storage and does not mutate @@ -80,7 +80,7 @@ do today): received to reconstruct the chunks. For each chunk, 1/3 of all the parts (100) is sufficient to reconstruct a chunk. If new blocks arrive while waiting - for chunk parts, they will be put into a `OrphanPool`, waiting to be processed. + for chunk parts, they will be put into an `OrphanPool`, waiting to be processed. If a chunk part request is not responded to within `chunk_request_retry_period`, which is set to 400ms by default, then a request for the same chunk part would be sent again. diff --git a/docs/architecture/how/resharding.md b/docs/architecture/how/resharding.md index 26c6c3b655f..1a279a10dc8 100644 --- a/docs/architecture/how/resharding.md +++ b/docs/architecture/how/resharding.md @@ -89,10 +89,10 @@ The state sync of the parent shard, the resharing and the catchup of the childre ### Flow -The resharding will be initiated by having it included in a dedicated protocol version together with neard . Here is the expected flow of events: +The resharding will be initiated by having it included in a dedicated protocol version together with neard. Here is the expected flow of events: * A new neard release is published and protocol version upgrade date is set to D, roughly a week from the release. -* All node operatores upgrade their binaries to the newly released version within the given timeframe, ideally as soon as possible but no later than D. +* All node operators upgrade their binaries to the newly released version within the given timeframe, ideally as soon as possible but no later than D. * The protocol version upgrade voting takes place at D in an epoch E and nodes vote in favour of switching to the new protocol version in epoch E+2. * The resharding begins at the beginning of epoch E+1. * The network switches to the new shard layout in the first block of epoch E+2. @@ -147,8 +147,8 @@ Here is an example of what that may look like in a grafana dashboard. 
Please kee The resharding process can be quite resource intensive and affect the regular operation of a node. In order to mitigate that as well as limit any need for increasing hardware specifications of the nodes throttling was added. Throttling slows down resharding to not have it impact other node operations. Throttling can be configured by adjusting the resharding_config in the node config file. -* batch_size - controls the size of batches in which resharding moves data around. Setting a smaller batch size will slow down the resharding process and make it less resource consuming. -* batch_delay - controls the delay between processing of batches. Setting a smaller batch delay will speed up the resharding process and make it more resource consuming. +* batch_size - controls the size of batches in which resharding moves data around. Setting a smaller batch size will slow down the resharding process and make it less resource-consuming. +* batch_delay - controls the delay between processing of batches. Setting a smaller batch delay will speed up the resharding process and make it more resource-consuming. The remaining fields in the ReshardingConfig are only intended for testing purposes and should remain set to their default values. @@ -173,4 +173,4 @@ The dynamic resharding would mean that the network itself can automatically dete ### Support different changes to shard layout -The current implementation only supports splitting a shard. In the future we can consider adding support of other operations such as merging two shards or moving an existing boundary account. +The current implementation only supports splitting a shard. In the future we can consider adding support for other operations such as merging two shards or moving an existing boundary account. 
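As an illustrative sketch only: the `batch_size` and `batch_delay` throttling knobs described in the resharding changes above live under `resharding_config` in the node's config file. The values below are hypothetical, not recommended defaults, and the exact field names and units should be checked against the `ReshardingConfig` definition in the release being run:

```json
{
  "resharding_config": {
    "batch_size": 500000,
    "batch_delay": {
      "secs": 0,
      "nanos": 100000000
    }
  }
}
```

Per the description in the doc, a smaller `batch_size` or larger `batch_delay` makes resharding slower but lighter on the node, while the opposite settings finish resharding sooner at the cost of more resource usage.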
diff --git a/docs/architecture/how/tx_receipts.md b/docs/architecture/how/tx_receipts.md index 347206fc75c..584a5debe1b 100644 --- a/docs/architecture/how/tx_receipts.md +++ b/docs/architecture/how/tx_receipts.md @@ -8,7 +8,7 @@ In this article, we’ll cover what happens next: How it is changed into a receipt and executed, potentially creating even more receipts in the process. -First, let’s look at the ‘high level view’: +First, let’s look at the ‘high-level view’: ![image](https://user-images.githubusercontent.com/1711539/198282472-3883dcc1-77ca-452c-b21e-0a7af1435ede.png) @@ -24,7 +24,7 @@ contract) - they are created by the block/chunk producers. ## Life of a Transaction -If we ‘zoom-in', the chunk producer's work looks like this: +If we ‘zoom in’, the chunk producer's work looks like this: ![image](https://user-images.githubusercontent.com/1711539/198282518-cdeb375e-8f1c-4634-842c-6490020ad9c0.png) diff --git a/docs/architecture/network.md b/docs/architecture/network.md index d01db3d67ff..a9fc35fcdb5 100644 --- a/docs/architecture/network.md +++ b/docs/architecture/network.md @@ -440,7 +440,7 @@ This section describes different protocols of sending messages currently used in ## 10.1 Messages between Actors. -`Near` is build on `Actix`'s `actor` +`Near` is built on `Actix`'s `actor` [framework](https://actix.rs/docs/actix/actor). Usually each actor runs on its own dedicated thread. Some, like `PeerActor` have one thread per each instance. Only messages implementing `actix::Message`, can be sent @@ -484,7 +484,7 @@ Then it will use the `routing_table`, to find the route to the target peer (add When Peer receives this message (as `PeerMessage::Routed`), it will pass it to PeerManager (as `RoutedMessageFrom`), which would then check if the message is -for the current `PeerActor`. (if yes, it would pass it for the client) and if +for the current `PeerActor` (if yes, it would pass it to the client) and if not - it would pass it along the network.
All these messages are handled by `receive_client_message` in Peer. diff --git a/docs/architecture/storage/flat_storage.md b/docs/architecture/storage/flat_storage.md index c3c1fa37351..2ba165c5745 100644 --- a/docs/architecture/storage/flat_storage.md +++ b/docs/architecture/storage/flat_storage.md @@ -146,7 +146,7 @@ all these seemingly-simple but actually-really-hard problems. ## Flat state for writes How to use flat storage for writes is not fully designed, yet, but we have some -rough ideas how to do it. But we don't know the performance we should expect. +rough ideas on how to do it. But we don't know the performance we should expect. Algorithmically, it can only get worse but the speedup on RocksDB we found with the read-only flat storage is promising. But one has to wonder if there are not also simpler ways to achieve better data locality in RocksDB. @@ -181,7 +181,7 @@ shard for some set of blocks. It is shared by multiple threads, so it is guarded * View client (not fully decided) Requires ChainAccessForFlatStorage on creation because it needs to know the tree of blocks after -the flat storage head, to support get queries correctly. +the flat storage head, to support `get` queries correctly. ## FlatStorageManager
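The reason the chunk view needs the tree of blocks after the flat storage head can be sketched as follows. This is a hypothetical, heavily simplified model, not the real neard types: values on disk are only current as of the flat head, so a `get` for a newer block must first consult the per-block deltas accumulated since the head, newest first, before falling back to the head's values.

```rust
use std::collections::HashMap;

// Simplified sketch (hypothetical names, not actual neard code).
// `head_values` holds the state as of the flat storage head;
// `deltas` holds the changes of each block after the head, oldest first.
// In a delta, `Some(v)` means the key was set to `v`, `None` means deleted.
struct FlatStorageSketch {
    head_values: HashMap<Vec<u8>, Vec<u8>>,
    deltas: Vec<HashMap<Vec<u8>, Option<Vec<u8>>>>,
}

impl FlatStorageSketch {
    // Resolve `key` as of the newest block: walk deltas newest-first so the
    // most recent change wins, then fall back to the flat head values.
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        for delta in self.deltas.iter().rev() {
            if let Some(change) = delta.get(key) {
                return change.clone();
            }
        }
        self.head_values.get(key).cloned()
    }
}
```

In the real system the set of deltas to apply depends on which chain of blocks leads from the flat head to the block being queried, which is exactly why the view requires knowledge of the block tree on creation.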