-
T.I.D.E.S.
-
Below is a first sketch of what such an extension is and how it could work:
Pool accountability system
Intro
Provide a way for a miner to verify that the pool is calculating the payouts correctly. This
How it could work
Verification
Share ordering
To calculate the payout, the order in which shares are arranged (share1, share2, etc.) is important. The pool cannot mess up the order because it is broadcasting every valid share. If a miner goes
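
As a purely illustrative aside (not part of the sketch above), here is a minimal example of why ordering matters under a PPLNS-style payout. Miner names and numbers are hypothetical:

```python
# Minimal illustration (hypothetical miner names and numbers): under a
# PPLNS-style payout, the reward is computed over the last N shares before a
# block, so the order in which shares are recorded changes who gets paid.

def pplns_payout(shares, n, block_reward):
    """shares: list of (miner, difficulty), oldest first.
    Splits block_reward proportionally over the last n shares."""
    window = shares[-n:]
    total = sum(d for _, d in window)
    payout = {}
    for miner, d in window:
        payout[miner] = payout.get(miner, 0.0) + block_reward * d / total
    return payout

honest = [("alice", 10), ("bob", 10), ("alice", 10), ("bob", 10)]
reordered = [("bob", 10), ("bob", 10), ("alice", 10), ("alice", 10)]

print(pplns_payout(honest, n=2, block_reward=3.125))
# -> {'alice': 1.5625, 'bob': 1.5625}
print(pplns_payout(reordered, n=2, block_reward=3.125))
# -> {'alice': 3.125}  (same shares, different order, different payout)
```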
-
I have a dumb question from a theoretical perspective. I've been reading the 2011 Rosenfeld paper linked at the top of this Discussion. The paper seems to define a share against a fixed difficulty target: the minimum possible difficulty (1).
The paper goes on to explain multiple payout schemes based on this definition, including the most popular ones used in the industry nowadays (PPS, PPLNS). All those explanations seem to rest on this definition of a share tied to the minimum difficulty. However, that doesn't seem sustainable. Maybe at the time the paper was written (2011) it was fine for multiple miners to send difficulty-1 shares from their CPUs and GPUs to a pool, but nowadays, with ASIC farms, this would basically be a DDoS attack against the pool, because each miner would be submitting an astronomically high number of shares per second. Also, the SRI Proxy implementations run a difficulty adjustment algorithm for each miner, so that miners only submit shares above an optimal difficulty threshold. So my question is: how do payout schemes like PPLNS take this per-miner difficulty variability into account?
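
As far as I understand, the standard answer is that modern pools don't count shares, they weight them: each share is scored by the difficulty it was mined at, and the PPLNS window "N" is typically defined as a total amount of difficulty rather than a share count. A minimal sketch with made-up numbers:

```python
# Sketch of the standard answer (illustrative numbers): every share is
# weighted by the difficulty it was mined at, so a farm submitting one
# high-difficulty share and a small miner submitting many low-difficulty
# shares earn the same payout for the same total work.

def weighted_payout(window, block_reward):
    """window: list of (miner, share_difficulty) inside the PPLNS window."""
    total = sum(d for _, d in window)
    out = {}
    for miner, d in window:
        out[miner] = out.get(miner, 0.0) + block_reward * d / total
    return out

# Same total work, very different share counts:
window = [("farm", 1_000_000)] + [("small", 100_000)] * 10
print(weighted_payout(window, block_reward=3.125))
# -> {'farm': 1.5625, 'small': 1.5625}
```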
-
This proposal is kind of halfway between SV2 and Braidpool. It can't enforce payouts but could detect bad pools. As such, here are a few thoughts from my learnings about Braidpool:

You need a consensus system on the database of shares. If one miner misses one share submission, it looks to him like the pool is cheating, and without consensus you can't prove fraud. There are many ways to do this, but you'll have to select one to ensure that all miners auditing the shares receive all shares. The DAG of Braidpool isn't the only way to do this; you could use another BFT replicated database. But it's a complicated topic, and bandwidth requirements are considerable as the share target difficulty gets smaller. I proposed "sub-pools" in part to address this bandwidth problem.

The speed (block rate) at which we can run the DAG is fundamentally given by the latency of the network (as measured by diamond graphs and other higher-order DAG structures), but ultimately we have no control over that. The "natural fastest speed" we can run the DAG is around 1000x faster than bitcoin (sub-second share blocks). Many networks achieve this in practice, including Kaspa, Avalanche, and some asynchronous BFT databases. But 1000x more shares than blocks isn't really fast enough for a centralized pool, especially with individual device monitoring. Estimates are that there are around 6,000,000 individual mining devices on the network, so we need at least another factor of 1000 in order to get one share from each device per bitcoin block, and you probably want even more than that.

The idea of sub-pools is to have two levels of accounting: one at the 1000 shares/block rate, and miners for whom that isn't fast enough join a sub-pool, which rolls up into a single share in the parent pool. In the case of SV2 you might call this sub-accounting, since there would only be one centralized pool here; you wouldn't actually be creating new pools but just dividing up the accounting responsibility. There can be several sub-pools, and sub-sub-pools. Thus the smallest miners using a sub-sub-pool would need to audit 3 levels of share accounting (and 3x bandwidth), instead of everyone needing to audit all shares of the smallest miners (which is multiplicatively more bandwidth). Bandwidth is saved by not auditing sub-pools you're not a part of (a rough calculation is sketched below).

Of course this factor of 1000 is decided by considerations that SV2 may not care about, and fundamentally this is a logarithmic tree of sub-pools. You could choose a smaller branching factor based on your considerations, with more depth in sub-sub-sub-pools instead. For instance, optimize to minimize bandwidth while having "enough" miners in each sub-pool to ensure effective auditing. This could be a pretty interesting way to collaborate too. :-D
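
Here is that rough calculation as a sketch. The 6,000,000-device count and the 1000x share rate come from the comment above; the per-share wire size is an assumption for illustration:

```python
# Rough back-of-envelope for the sub-pool tree above. The 6,000,000-device
# count and the 1000x share rate come from the comment; the 100-byte
# per-share wire size is an assumption for illustration.

DEVICES = 6_000_000
BLOCK_INTERVAL_S = 600   # bitcoin block interval
SHARE_BYTES = 100        # assumed size of one share on the wire

# Flat design: every auditor sees one share per device per block.
flat_bps = DEVICES / BLOCK_INTERVAL_S * SHARE_BYTES
print(f"flat audit bandwidth: {flat_bps / 1e6:.1f} MB/s")   # ~1.0 MB/s

def tree_audit_bps(branching, depth):
    """A leaf miner audits `branching` shares per block interval at each of
    the `depth` tiers it belongs to, instead of all shares everywhere."""
    return depth * (branching / BLOCK_INTERVAL_S) * SHARE_BYTES

# Two tiers with branching factor 1000 (1000 * 1000 = 1e6 leaf slots):
print(f"tree audit bandwidth: {tree_audit_bps(1000, 2):.0f} B/s")  # ~333 B/s
```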
-
I think there's a simpler way to do this. In the above Merkle tree proposal, the pool can simply omit some shares from its Merkle tree, possibly through private agreement with a miner. HOWEVER, it would still be an improvement to prove to the miner that his shares were included. This can be done automatically by the StratumV2 software, which substantially improves the auditing over historical pools (I think Slush used to do this), which required the miner to spot-check. Herein a miner is only verifying his own shares.

What the pool needs to do is create a cryptographic accumulator of shares, along with a sum rule that adds up the shares and ensures that there are no negative entries, and publish the root of this accumulator. This is basically a Merkle sum tree, but since you don't really want to send the shares of all miners to all other miners, you can make a ZKP instead. I've proposed using CurveTrees for such things, which can use the Bulletproofs ZK system to make (and accumulate) arbitrary proofs about the data in the accumulator.

The difficulty for this particular application will be proving that the shares in the accumulator are valid. You need to validate a bitcoin header in ZK, extract the coinbase payout to the pool, and then apply the sum rule to the coinbase payouts. It can be done with CurveTrees and Bulletproofs. It might be interesting to see if the Starkware guys have any interest in this... if the level of ZK-fu among StratumV2 devs is low...

Done correctly, this provides a lower bound on the hashrate of the pool (assuming the bitcoin header can be validated properly, and it's impossible to put "fake" hashrate into the accumulator). It can't provide an upper bound because, as mentioned, the pool can just omit some hashrate and you'll never know.
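
To make the accumulator idea concrete, here is a minimal (non-ZK) Merkle sum tree sketch: each node commits to a hash and to the sum of the share difficulties beneath it, so an inclusion proof also lets a miner check the sum rule along his path. This only illustrates the plain Merkle-sum-tree building block, not the CurveTrees/Bulletproofs construction described above:

```python
# Minimal Merkle sum tree: each node is (hash, sum). A miner holding an
# inclusion proof checks that his share is in the tree and that the sums
# along his path are consistent and non-negative.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def node(left, right):
    """Combine two (hash, sum) children into a parent (hash, sum)."""
    lh, ls = left
    rh, rs = right
    return (h(lh + rh + str(ls + rs).encode()), ls + rs)

def build(leaves):
    """leaves: list of (leaf_hash, difficulty); len must be a power of two."""
    level = leaves
    while len(level) > 1:
        level = [node(level[i], level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, proof, root):
    """proof: list of (sibling_node, sibling_is_left). Recomputes the path
    to the root, rejecting any negative sum along the way."""
    cur = leaf
    for sibling, sibling_is_left in proof:
        if sibling[1] < 0 or cur[1] < 0:
            return False
        cur = node(sibling, cur) if sibling_is_left else node(cur, sibling)
    return cur == root

# Example: 4 shares; the miner verifies his share #2 (difficulty 7).
leaves = [(h(b"share%d" % i), d) for i, d in enumerate([5, 7, 3, 9])]
root = build(leaves)
proof = [(leaves[0], True), (node(leaves[2], leaves[3]), False)]
print(verify(leaves[1], proof, root))  # True
```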
-
https://delvingbitcoin.org/t/ecash-tides-using-cashu-and-stratum-v2/870
-
I spent the weekend investigating the Double Geometric Method. With a score-based payout method, every time a share is sent to the pool, the score of the miner that produced it is increased according to the scheme's update rule. DGM seems very interesting because there is no need to set a time window for memorizing shares as with PPLNS. Also, there are some parameters that can be adjusted in order to reduce pool-based variance and share-based variance. On the other hand, I couldn't convince myself that pool-hopping based on difficulty adjustments is impossible with DGM (basically because I had trouble figuring out why the expected payout per share does not depend on difficulty; I also opened
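
For intuition only, the sketch below shows the general shape of a decay-scored payout. These are not Rosenfeld's exact DGM update rules or fee parameters (see chapter 5.1 of the paper for those); it just makes the "no share window" point concrete: each new share geometrically decays all earlier scores, so old shares fade out instead of being cut off.

```python
# General shape of a decay-scored payout (NOT the exact DGM rules):
# every new share decays all earlier scores, so no explicit share
# window is needed. The decay value is illustrative, not a DGM parameter.

DECAY = 0.999  # per-share decay factor (illustrative)

class ScoreBoard:
    def __init__(self):
        self.scores = {}

    def add_share(self, miner, difficulty):
        # Decay everyone, then credit the submitter in proportion to the
        # difficulty of the share (cf. the vardiff discussion above).
        for m in self.scores:
            self.scores[m] *= DECAY
        self.scores[miner] = self.scores.get(miner, 0.0) + difficulty

    def payouts(self, block_reward):
        total = sum(self.scores.values())
        return {m: block_reward * s / total for m, s in self.scores.items()}

board = ScoreBoard()
for _ in range(1000):
    board.add_share("alice", 1.0)
board.add_share("bob", 1.0)
print(board.payouts(3.125))
# alice still dominates, but her oldest share has decayed to ~0.37 of its
# original weight, and with no new shares her score keeps shrinking.
```

A real implementation would avoid the O(miners) loop by tracking a single global scale factor and storing each miner's score relative to it, which is roughly what the paper's formulation does with its running score unit.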
-
Outline a new role to enable auditing of shares. |
-
Double Geometric Method
The in-depth description can be found in Meni Rosenfeld, 2011, Analysis of Bitcoin Pooled Mining Reward Systems (chapter 5.1, page 18).
cc @marathon-gary @Fi3 @lorbax @GitGab19 @pavlenex