diff --git a/docs/0g-chain.md b/docs/0g-chain.md
index aeeb1cd..088f1d6 100644
--- a/docs/0g-chain.md
+++ b/docs/0g-chain.md
@@ -7,9 +7,9 @@ sidebar_position: 5
# 0G Chain: The Fastest Modular AI Chain
---
-0G Chain is a highly scalable, AI-optimized L1 blockchain designed to meet the needs of data-heavy applications. Built with a modular architecture, it allows for the independent optimization of key components like consensus, execution, and storage—making it ideal for AI-driven workflows. 0G is fully **EVM-compatible**, so decentralized applications (dApps) already deployed on other L1 or L2 chains (such as Ethereum or rollups) can easily leverage 0G's products without needing to migrate entirely.
+0G is the largest AI Layer 1 ecosystem, powered by the fastest deAIOS (decentralized AI Operating System). With $325M raised, 0G is the ultimate home for DeAI. Built with a modular architecture, it allows for the independent optimization of key components like consensus, execution, and storage—making it ideal for AI-driven workflows. 0G is fully **EVM-compatible**, so decentralized applications (dApps) already deployed on other L1 or L2 chains (such as Ethereum or rollups) can easily leverage 0G's products without needing to migrate entirely.
-0G Chain supports a [data availability network](./da/0g-da.md), [distributed storage network](0g-storage.md), and [AI compute network](0g-compute.md). All of these networks integrate with 0G Chain's highly scalable consensus network, built to handle massive data volumes suitable for AI.
+0G Chain supports a [data availability network](./da/0g-da.md), [distributed storage network](0g-storage.md), and [AI compute network](0g-compute.md). These networks integrate with 0G Chain's highly scalable consensus network, which is built to handle massive data volumes suitable for AI.
As the demand for network capacity increases, new consensus networks can be added to enable horizontal scalability, thereby boosting the overall bandwidth and performance of the system. By **decoupling data publication from data storage**, 0G optimizes both throughput and scalability, surpassing the limitations seen in existing data availability (DA) solutions.
diff --git a/docs/0g-compute.md b/docs/0g-compute.md
index a78046d..97b6ac9 100644
--- a/docs/0g-compute.md
+++ b/docs/0g-compute.md
@@ -11,17 +11,17 @@ In today's world, AI models are transforming industries, driving innovation, and
## What is 0G Compute Network?
-The 0G Compute Network is a decentralized framework that provides AI computing capabilities to our community. It forms a crucial part of dAIOS and, together with the storage network, offers comprehensive end-to-end support for dApp development and services.
+The 0G Compute Network is a decentralized framework that provides AI computing capabilities to our community. It forms a crucial part of deAIOS and, together with the storage network, offers comprehensive end-to-end support for dApp development and services.
-The first iteration focuses specifically on decentralized settlement for inference, connecting buyers (who want to use AI models) with sellers (who run these models on their GPUs) in a trustless, transparent manner. Sellers, known as service providers, are able to set the price for each model they support and be rewarded real-time for their contributions. It’s a fully decentralized marketplace that eliminates the need for intermediaries, redefining how AI services are accessed and delivered and making them cheaper, more efficient, and accessible to anyone, anywhere.
+The first iteration focuses specifically on decentralized settlement for inference, connecting buyers (who want to use AI models) with sellers (who run these models on their GPUs) in a trustless, transparent manner. Sellers, known as service providers, can set the price for each model they support and receive real-time rewards for their contributions. It's a fully decentralized marketplace that eliminates the need for intermediaries, redefining how AI services are accessed and delivered by making them cheaper, more efficient, and accessible to anyone, anywhere.
## How does it work?
-The 0G Compute Network contract facilitates secure interactions between users (AI buyers) and service providers (GPU owners running AI models), ensuring smooth data retrieval, fee collection, and service execution. Here’s how it works:
+The 0G Compute Network contract facilitates secure interactions between users (AI buyers) and service providers (GPU owners running AI models), ensuring smooth data retrieval, fee collection, and service execution. Here's how it works:
1. **Service Provider Registration:** Service providers first register the type of AI service they offer (e.g., model inference) and set pricing for each type within the smart contract.
2. **User Pre-deposits Fees:** When a user wants to access a service, they pre-deposit a fee into the smart contract associated with the selected service provider. This ensures that funds are available to compensate the service provider.
-3. **Request and Response System:** Users send requests for AI inference, and the service provider decides whether to respond based on the sufficiency of the user’s remaining balance. Both the user and the provider sign each request and response, ensuring trustless verification of transactions.
+3. **Request and Response System:** Users send requests for AI inference, and the service provider decides whether to respond based on the sufficiency of the user's remaining balance. Both the user and the provider sign each request and response, ensuring trustless verification of transactions.
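The three steps above can be sketched as a small Python simulation. This is an illustrative sketch only: the names (`ComputeContract`, `gpu-node-1`, the `sign` helper) are hypothetical, and real settlement happens in an on-chain smart contract with cryptographic signatures rather than bare hashes.

```python
import hashlib
from dataclasses import dataclass, field

def sign(key: str, payload: str) -> str:
    # Stand-in for a real cryptographic signature over a request/response.
    return hashlib.sha256((key + payload).encode()).hexdigest()

@dataclass
class ComputeContract:
    prices: dict = field(default_factory=dict)    # (provider, service) -> fee per request
    balances: dict = field(default_factory=dict)  # (user, provider) -> pre-deposited funds

    def register_service(self, provider, service, price):  # step 1
        self.prices[(provider, service)] = price

    def deposit(self, user, provider, amount):             # step 2
        self.balances[(user, provider)] = self.balances.get((user, provider), 0) + amount

    def settle(self, user, provider, service, user_sig, provider_sig):  # step 3
        fee = self.prices[(provider, service)]
        key = (user, provider)
        # Serve only if the remaining balance covers the fee and both
        # parties have signed the request/response pair.
        if self.balances.get(key, 0) < fee or not (user_sig and provider_sig):
            return False
        self.balances[key] -= fee
        return True

contract = ComputeContract()
contract.register_service("gpu-node-1", "llama-inference", price=3)
contract.deposit("alice", "gpu-node-1", amount=10)
ok = contract.settle(
    "alice", "gpu-node-1", "llama-inference",
    user_sig=sign("alice-key", "prompt:hello"),
    provider_sig=sign("gpu-node-1-key", "completion:world"),
)
```

After settlement, the provider's fee has been deducted from the user's pre-deposit, so the provider can be paid without either party trusting the other.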
Here are some of the key features of the system:
diff --git a/docs/0g-storage.md b/docs/0g-storage.md
index e1880d1..0033539 100644
--- a/docs/0g-storage.md
+++ b/docs/0g-storage.md
@@ -9,16 +9,16 @@ sidebar_position: 3
## Intro to Storage Systems
-Storage systems play a critical role in managing, organizing, and ensuring the accessibility of data. In traditional systems, data is often stored centrally, creating risks around availability, censorship, and data loss due to single points of failure. Decentralized storage, on the other hand, addresses these issues by distributing data across a network of nodes, enhancing security, resilience, and scalability. This decentralization is essential, especially in an era where massive data sets are generated and consumed by AI, Web3 applications, and large-scale businesses.
+Storage systems play a critical role in managing, organizing, and ensuring the accessibility of data. In traditional systems, data is often stored centrally, creating risks around availability, censorship, and data loss due to single points of failure. Decentralized storage addresses these issues by distributing data across a network of nodes, enhancing security, resilience, and scalability. This decentralization is essential, especially in an era where massive datasets are generated and consumed by AI, Web3 applications, and large-scale businesses.
## 0G's Storage System
-0G Storage is a distributed data storage system designed with on-chain elements to incentivize storage nodes to store data on behalf of a user. Anyone can run a storage node and receive rewards for maintaining one. For information on how to do so, check out our guide [here](./run-a-node/storage-node.md).
+0G Storage is a distributed data storage system designed with on-chain elements to incentivize storage nodes to store data on behalf of users. Anyone can run a storage node and receive rewards for maintaining one. For information on how to do so, check out our guide [here](./run-a-node/storage-node.md).
-0G's system itself has two parts:
+0G's system consists of two parts:
-1. **The Data Publishing Lane:** Ensures data availability by allowing quick queries and verification through the 0G Consensus network. This ensures that the data stored can be easily accessed and validated by users or applications when needed.
-2. **The Data Storage Lane:** Manages large data transfers and storage, utilizing an erasure-coding mechanism. This splits data into smaller, redundant fragments distributed across different nodes, guaranteeing data recovery in case of node failure or downtime.
+1. **The Data Publishing Lane:** Ensures data availability by allowing quick queries and verification through the 0G Consensus network. This ensures that stored data can be easily accessed and validated by users or applications when needed.
+2. **The Data Storage Lane:** Manages large data transfers and storage using an erasure-coding mechanism. This splits data into smaller, redundant fragments distributed across different nodes, guaranteeing data recovery even if some nodes fail or experience downtime.
For any party wishing to store data with 0G, the data must first be provided alongside payment using 0G's token, which is fully embedded into 0G's main chain. To store this data, it is first *erasure-coded,* meaning that the data being stored is fragmented into redundant smaller pieces distributed across multiple storage locations. Redundancy is essential as it ensures the data can always be recovered, even if some storage locations fail or become unavailable.
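The redundancy idea behind erasure coding can be illustrated with a minimal single-parity sketch. This is a toy: production systems use proper erasure codes (e.g. Reed–Solomon) that tolerate multiple simultaneous failures, whereas one XOR parity fragment only recovers a single lost chunk.

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(data: bytes, k: int):
    # Pad and split the data into k equal fragments, plus one XOR parity
    # fragment that lets us rebuild any single lost fragment.
    size = -(-len(data) // k)  # ceiling division
    padded = data.ljust(size * k, b"\0")
    chunks = [padded[i * size:(i + 1) * size] for i in range(k)]
    return chunks, reduce(xor, chunks)

def recover(chunks, parity):
    # XOR of the parity with all surviving fragments restores the missing one.
    missing = chunks.index(None)
    survivors = [c for c in chunks if c is not None]
    chunks[missing] = reduce(xor, survivors, parity)
    return chunks

chunks, parity = encode(b"0G stores AI-scale data", k=4)
chunks[2] = None  # simulate a failed storage node
restored = recover(chunks, parity)
```

Because each fragment lives on a different node, no single node failure makes the original data unrecoverable.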
diff --git a/docs/build-with-0g/compute-network/overview.md b/docs/build-with-0g/compute-network/overview.md
index 7e7633b..33c2a57 100644
--- a/docs/build-with-0g/compute-network/overview.md
+++ b/docs/build-with-0g/compute-network/overview.md
@@ -12,11 +12,11 @@ We integrate various stages of the AI process to make these services both verifi
## Components
-**Contract:** This component determines the legitimacy of settlement proofs, manages accounts, and handles service information. To do this, it stores variables during the service process, including account information, service details (such as name and URL), and consensus logic.
+**Contract:** This component determines the legitimacy of settlement proofs, manages accounts, and handles service information. It stores variables during the service process, including account information, service details (such as name and URL), and consensus logic.
**Provider:** The owners of AI models and hardware who offer their services for a fee.
-**User:** Individuals or organizations who use the services listed by Service Providers. They may also use AI services directly or build applications on top of our API.
+**User:** Individuals or organizations who use the services listed by Service Providers. They may use AI services directly or build applications on top of our API.
## Process Overview
diff --git a/docs/build-with-0g/explorer.md b/docs/build-with-0g/explorer.md
index a9f2c09..0ce6f2f 100644
--- a/docs/build-with-0g/explorer.md
+++ b/docs/build-with-0g/explorer.md
@@ -5,22 +5,20 @@ title: 0G Explorers
# 0G Explorers: Visualize the Blockchain Ecosystem
---
-Our explorers are designed to give you a user-friendly window into the 0G ecosystem. By offering powerful visualization tools, they eliminate the need for complex CLI commands, making it easier for anyone to understand what's happening on-chain. A well-designed UI enhances your experience, allowing you to focus on insights without having to know specific commands.
+Our explorers provide a user-friendly window into the 0G ecosystem. By offering powerful visualization tools, they eliminate the need for complex CLI commands, making it easier for anyone to understand what's happening on-chain. A well-designed UI enhances your experience, allowing you to focus on insights rather than memorizing specific commands.
-Explore the 0G Ecosystem with our powerful scanning tools. These explorers provide detailed insights into storage and chain activities, helping you understand and analyze the network effectively.
+These explorers provide detailed insights into storage and chain activities, helping you understand and analyze the network effectively.
-## [Storage Scan](https://storagescan-newton.0g.ai/)
+### [Storage Scan](https://storagescan-newton.0g.ai/)
-Storage Scan is your go-to tool for exploring storage-related activities within the network.
-
-Use Storage Scan to:
+Storage Scan is your go-to tool for exploring storage-related activities within the network. Use Storage Scan to:
- Track data storage and retrieval activities
- Analyze storage capacity and utilization across the network
+- View detailed transaction histories and storage metrics
-## [Chain Scan](https://chainscan-newton.0g.ai)
+### [Chain Scan](https://chainscan-newton.0g.ai)
Chain Scan provides a comprehensive view of 0G chain activity and transactions.
-
Use Chain Scan to:
- View real-time blockchain transactions
- Explore blocks, addresses, and smart contracts
@@ -28,9 +26,7 @@ Use Chain Scan to:
## [Nodes Guru](https://testnet.0g.explorers.guru/)
-Nodes Guru provides key information and monitoring tools for validators and node operators to track the health and performance of the network.
-
-Use Nodes Guru to:
+Nodes Guru provides key information and monitoring tools for validators and node operators to track the health and performance of the network. Use Nodes Guru to:
- View Validator performance metrics like uptime, block proposals, and missed blocks
- Obtain staking and rewards info
- Participate in governance proposals and vote on network upgrades or changes
diff --git a/docs/build-with-0g/faucet.md b/docs/build-with-0g/faucet.md
index f791c1d..7d3761c 100644
--- a/docs/build-with-0g/faucet.md
+++ b/docs/build-with-0g/faucet.md
@@ -14,6 +14,7 @@ The 0G Faucet provides free testnet tokens, essential for interacting with the 0
## How to Get Testnet Tokens
- **Use the Faucet Website:** The easiest way to request tokens is through our [faucet website](https://faucet.0g.ai). Each user can receive up to 1 A0GI token per day, which is sufficient for most testing needs.
+- **Request via thirdweb:** Each user can receive up to 0.01 A0GI token per day from the [thirdweb faucet](https://thirdweb.com/0g-newton-testnet?utm_source=0g&utm_medium=docs).
- **Request via Discord:** If you require more than 1 A0GI token per day, please reach out in our vibrant [Discord community](https://discord.com/invite/0glabs) to request additional tokens.
:::important
diff --git a/docs/build-with-0g/rollups-and-appchains/arbitrum-nitro-on-0g-da.md b/docs/build-with-0g/rollups-and-appchains/arbitrum-nitro-on-0g-da.md
index ad698b5..84c4638 100644
--- a/docs/build-with-0g/rollups-and-appchains/arbitrum-nitro-on-0g-da.md
+++ b/docs/build-with-0g/rollups-and-appchains/arbitrum-nitro-on-0g-da.md
@@ -28,4 +28,4 @@ Find the [repository for this integration](https://github.com/0glabs/nitro) at h
- [0G Arbitrum Nitro Rollup Kit](https://github.com/0glabs/nitro)
-WARNING:This is a beta integration and we are working on resolving open issues.
+> **WARNING:** This is a beta integration and we are working on resolving open issues.
diff --git a/docs/build-with-0g/rollups-and-appchains/op-stack-on-0g-da.md b/docs/build-with-0g/rollups-and-appchains/op-stack-on-0g-da.md
index 2624d60..dc1c15f 100644
--- a/docs/build-with-0g/rollups-and-appchains/op-stack-on-0g-da.md
+++ b/docs/build-with-0g/rollups-and-appchains/op-stack-on-0g-da.md
@@ -68,8 +68,7 @@ The Optimism codebase has been extended to integrate with the 0G DA `da-server`.
-0G DA DA-server accepts the following flags for 0G DA storage using
-[0G DA Open API](https://docs.0g.ai/0g-doc/docs/0g-da/rpc-api/api-1)
+The 0G DA `da-server` accepts the following flags for 0G DA storage using the 0G DA Open API:
````
--zg.server (default: "localhost:51001")
diff --git a/docs/build-with-0g/storage-cli.md b/docs/build-with-0g/storage-cli.md
index 5466610..c60d4fd 100644
--- a/docs/build-with-0g/storage-cli.md
+++ b/docs/build-with-0g/storage-cli.md
@@ -120,7 +120,7 @@ The command-line help listing is reproduced below for your convenience. The same
**Important Considerations**
* **Contract Addresses:** You need the accurate contract addresses for the 0G log contract on the specific blockchain you are using. You can find these on the 0G Storage explorer or in the official documentation.
-* **File Root Hash:** To download a file, you must have its root hash. This is provided when you upload a file or can be found by looking up your transaction on the 0G Storage explorer ([https://storagescan-newton.0g.ai/](https://storagescan-newton.0g.ai/)).
+* **File Root Hash:** To download a file, you must have its root hash. This is provided when you upload a file or can be found by looking up your transaction on the [0G Storage explorer](https://storagescan-newton.0g.ai/).
* **Storage Node RPC Endpoint:** You can use the team-deployed storage node or run your own node for more control and the potential to earn rewards.
diff --git a/docs/da/0g-da-deep-dive.md b/docs/da/0g-da-deep-dive.md
index bc01ab7..b40b91f 100644
--- a/docs/da/0g-da-deep-dive.md
+++ b/docs/da/0g-da-deep-dive.md
@@ -5,9 +5,9 @@ sidebar_position: 2
---
# 0G DA Technical Deep Dive
-The Data Availability (DA) module allows users to submit a piece of data, referred to as a _**DA blob**_. This data is redundantly encoded by the client's proxy and divided into several slices, which are then sent to DA nodes. _**DA nodes**_ gain eligibility to verify the correctness of DA slices by staking. Each DA node verifies the integrity and correctness of its slice and signs it. Once more than 2/3 of the aggregated signatures are on-chain, the data behind the related hash is considered to be decentralized published.
+The Data Availability (DA) module allows users to submit a piece of data, referred to as a _**DA blob**_. This data is redundantly encoded by the client's proxy and divided into several slices, which are then sent to DA nodes. _**DA nodes**_ gain eligibility to verify the correctness of DA slices by staking. Each DA node verifies the integrity and correctness of its slice and signs it. Once more than 2/3 of the aggregated signatures are on-chain, the data behind the related hash is considered to have been published in a decentralized manner.
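The 2/3 signature threshold reduces to a one-line check. This is a sketch of the counting rule only; the on-chain contract verifies aggregated signatures cryptographically rather than counting a set of signer IDs.

```python
def is_published(signer_ids: set, quorum_size: int) -> bool:
    # Strictly more than 2/3 of the quorum must have signed their slices
    # before the blob counts as decentrally published.
    return 3 * len(signer_ids) > 2 * quorum_size

# With a quorum of 3072 nodes, 2049 signatures (just over 2/3) suffice.
print(is_published(set(range(2049)), 3072))
```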
-To incentivize DA nodes to store the signed data for a period, the signing process itself does not provide any rewards. Instead, rewards are distributed through a process called _**DA Sampling**_. During each DA Sample round, any DA slice within a certain time frame can generate a lottery chance for a reward. DA nodes need to actually store the corresponding slice to redeem the lottery chance and claim the reward.
+To incentivize DA nodes to store the signed data for a period, the signing process itself does not provide any rewards. Instead, rewards are distributed through a process called _**DA Sampling**_. During each DA Sample round, any DA slice within a certain timeframe can generate a lottery chance for a reward. DA nodes must store the corresponding slice to redeem the lottery chance and claim the reward.
The process of generating DA nodes is the same as the underlying chain's PoS process, both achieved through staking. During each DA epoch (approximately 8 hours), DA nodes are assigned to several quorums. Within each quorum, nodes are assigned numbers 0 through 3071. Each number is assigned to exactly one node, but a node may be assigned to multiple quorums, depending on its staking weight.
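The stake-weighted assignment of quorum numbers can be sketched as below. This is an illustrative model, not the actual protocol: the function name, node IDs, and seed handling are hypothetical, and the real assignment is derived from the chain's PoS state rather than Python's `random` module.

```python
import random

def assign_quorum(stakes: dict, slots: int, seed: int) -> dict:
    # stakes: node id -> staking weight. Each of the `slots` numbers
    # (0..slots-1 within one quorum) goes to exactly one node, drawn with
    # probability proportional to stake, so heavier stakers tend to hold
    # more slots and may appear in multiple quorums.
    rng = random.Random(seed)
    names = list(stakes)
    weights = [stakes[n] for n in names]
    return {i: rng.choices(names, weights=weights)[0] for i in range(slots)}

quorum = assign_quorum({"node-a": 60, "node-b": 30, "node-c": 10}, slots=3072, seed=7)
```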
@@ -84,11 +84,10 @@ Admin adjustable parameters
During each period, each DA slice (one row) can be divided into 32 sub-lines. For each sub-line, the `podasQuality` will be computed using the `dataRoot` and assigned `epoch` and `quorumId` of its corresponding DA blob.
-\
+The hash value can be viewed interchangeably as either 32 bytes of data or a 256-bit big-endian integer.
```python
lineQuality = keccak256(sampleSeed, epoch, quorumId, dataRoot, lineIndex);
diff --git a/docs/da/0g-da.md b/docs/da/0g-da.md
index cfea36a..73f7f81 100644
--- a/docs/da/0g-da.md
+++ b/docs/da/0g-da.md
@@ -9,15 +9,15 @@ sidebar_position: 1
## The Rise of Data Availability Layers
-Data availability (DA) refers to proving that data is readily accessible, verifiable, and retrievable. For example, Layer 2 rollups such as Arbitrum or Base reduce the burden from Ethereum by handling transactions off-chain and then publishing the data back to Ethereum, thereby freeing up L1 Ethereum throughput and reducing gas. The transaction data, however, still needs to be made available so that anyone can validate or challenge the transactions through fraud-proofs during the challenge-period.
+Data availability (DA) refers to proving that data is readily accessible, verifiable, and retrievable. For example, Layer 2 rollups such as Arbitrum or Base reduce the burden on Ethereum by handling transactions off-chain and then publishing the data back to Ethereum, thereby freeing up L1 throughput and reducing gas costs. The transaction data, however, still needs to be made available so that anyone can validate or challenge the transactions through fraud proofs during the challenge period.
As such, DA is crucial to blockchains as it allows for full validation of the blockchain's history and current state by any participant, thus maintaining the decentralized and trustless nature of the network. Without this, validators would not be able to independently verify the legitimacy of transactions and blocks, leading to potential issues like fraud or censorship.
-This led to the arrival of *Data Availability Layers (DALs)*, which provide a significantly more efficient manner of storing and verifying data than publishing directly to Ethereum. DALs are critical for several reasons:
+This led to the emergence of Data Availability Layers (DALs), which provide a significantly more efficient manner of storing and verifying data than publishing directly to Ethereum. DALs are critical for several reasons:
* **Scalability**: DALs allow networks to process more transactions and larger datasets without overwhelming the system, reducing the burden on network nodes and significantly enhancing network scalability.
-* **Increased Efficiencies**: DALs optimize how and where data is stored and made available, increasing data throughput and reducing latency while also minimizing associated costs.
-* **Interoperability & Innovation**: DALs that can interact with multiple ecosystems allow for fast and highly secure interoperability for data and assets.
+* **Increased Efficiency**: DALs optimize how and where data is stored and made available, increasing data throughput and reducing latency while also minimizing associated costs.
+* **Interoperability & Innovation**: DALs that can interact with multiple ecosystems enable fast and highly secure interoperability for data and assets.
However, it's worth noting that not all DALs are built equally.
@@ -43,13 +43,13 @@ There are 4 differentiators of 0G worth highlighting:
2. **Modular and Layered Architecture**: 0G's design decouples storage, data availability, and consensus, allowing each component to be optimized for its specific function. Data availability is ensured through redundancy, with data distributed across decentralized Storage Nodes. Cryptographic proofs (like Merkle trees and zk-proofs) verify data integrity at regular intervals, automatically replacing nodes that fail to produce valid proofs. And combined with 0G's ability to keep adding new consensus networks that scale with demand, 0G can scale efficiently and is ideal for complex AI workflows and large-scale data processing.
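The Merkle-tree integrity checks mentioned above can be illustrated with a minimal root computation. This is a sketch of the general technique only; 0G's actual tree layout, hash choices, and proof format are not shown here.

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root(leaves):
    # Hash the leaves, then repeatedly hash adjacent pairs until one
    # 32-byte root remains; any change to any chunk changes the root.
    layer = [h(leaf) for leaf in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])  # duplicate the last node on odd layers
        layer = [h(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

chunks = [b"chunk-0", b"chunk-1", b"chunk-2", b"chunk-3"]
root = merkle_root(chunks)
tampered = merkle_root([b"chunk-0", b"chunk-X", b"chunk-2", b"chunk-3"])
```

A node that still holds its chunks can recompute the committed root; a node that lost or altered data cannot, which is what makes the periodic proofs meaningful.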
-3. **Decentralized AI Operating System & High Throughput**: 0G is the first decentralized AI operating system (dAIOS) designed to give users control over their data, while providing the infrastructure necessary to handle the massive throughput demands of AI applications. Beyond its modular architecture and infinite consensus layers, 0G achieves high throughput through parallel data processing, enabled by erasure coding, horizontally scalable consensus networks, and more. With a demonstrated throughput of 50 Gbps on the Newton Testnet, 0G seamlessly supports AI workloads and other high-performance needs, including training large language models and managing AI agent networks.
+3. **Decentralized AI Operating System & High Throughput**: 0G is the first decentralized AI operating system (deAIOS) designed to give users control over their data, while providing the infrastructure necessary to handle the massive throughput demands of AI applications. Beyond its modular architecture and infinite consensus layers, 0G achieves high throughput through parallel data processing, enabled by erasure coding, horizontally scalable consensus networks, and more. With a demonstrated throughput of 50 Gbps on the Newton Testnet, 0G seamlessly supports AI workloads and other high-performance needs, including training large language models and managing AI agent networks.
These differentiators make 0G uniquely positioned to tackle the challenges of scaling AI on a decentralized platform, which is critical for the future of Web3 and decentralized intelligence.
## How Does This Work?
-As covered in #0G_Storage, data within the 0G ecosystem is first erasure-coded and split into "data chunks," which are then distributed across various Storage Nodes in the 0G Storage network.
+As covered in [0G Storage](./../0g-storage.md), data within the 0G ecosystem is first erasure-coded and split into "data chunks," which are then distributed across various Storage Nodes in the 0G Storage network.
To ensure data availability, 0G uses **Data Availability Nodes** that are randomly chosen using a Verifiable Random Function (VRF). A VRF generates random values in a way that is unpredictable yet verifiable by others, which is important as it prevents potentially malicious nodes from collusion.
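The VRF-based selection can be sketched with a toy stand-in. Here an HMAC plays the role of the VRF proof, which is only a pedagogical shortcut: a real VRF (e.g. ECVRF) produces an output that anyone can verify against the node's *public* key, which is what makes the randomness both unpredictable and verifiable. All names below are hypothetical.

```python
import hashlib
import hmac

def vrf_output(secret_key: bytes, seed: bytes) -> bytes:
    # Toy stand-in: the HMAC acts as the "proof", and hashing it yields the
    # pseudorandom output. Unpredictable without the key, deterministic with it.
    proof = hmac.new(secret_key, seed, hashlib.sha256).digest()
    return hashlib.sha256(proof).digest()

def select_nodes(node_ids, seed: bytes, count: int):
    # Rank nodes by their VRF output for this round's seed and take the
    # lowest values; no node can predict or bias its own rank in advance.
    return sorted(node_ids, key=lambda n: vrf_output(n.encode(), seed))[:count]

chosen = select_nodes(["da-1", "da-2", "da-3", "da-4", "da-5"], b"round-42", count=2)
```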
@@ -60,9 +60,9 @@ The consensus mechanism used by 0G is fast and efficient due to its sampling-bas
-Validators in the 0G Consensus network, who are separate from the DA nodes, verify and finalize these proofs. Although DA nodes ensure data availability, they do not directly participate in the final consensus process, which is the responsibility of 0G validators. Validators use a shared staking mechanism where they stake $0G tokens on a primary network (likely Ethereum). Any slashable event across connected networks leads to slashing on the main network, securing the system's scalability while maintaining robust security.
+Validators in the 0G Consensus network, who are separate from the DA nodes, verify and finalize these proofs. Although DA nodes ensure data availability, they do not directly participate in the final consensus process, which is the responsibility of 0G validators. Validators use a shared staking mechanism where they stake 0G tokens on a primary network (likely Ethereum). Any slashable event across connected networks leads to slashing on the main network, securing the system's scalability while maintaining robust security.
-This is a key mechanism that allows for the system to scale infinitely while maintaining data availability. In return, validators engaged in shared staking receive $0G tokens on any network managed, which can then be burnt in return for $0G tokens on the mainnet.
+This is a key mechanism that allows the system to scale infinitely while maintaining data availability. In return, validators engaged in shared staking receive 0G tokens on any managed network, which can then be burned in exchange for 0G tokens on the mainnet.