2020
DOI: 10.5195/ledger.2020.195

Real-Time Block Rate Targeting

Abstract: A proof-of-work blockchain uses a retargeting algorithm, also termed a difficulty adjustment algorithm, to manage the rate of block production in the presence of changing hashrate. To derive the parameters that guide the search for the next block, nearly all such algorithms rely on averages of past inter-block time observations, as measured by on-chain timestamps. We are motivated to seek better responsiveness to changing hashrate, while improving stability of the block production rate and retaining the progre…
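The average-based retargeting the abstract refers to can be illustrated with a short sketch, loosely modeled on Bitcoin's periodic rule. The window size, the 600 s spacing, and the clamp bounds are illustrative defaults, not parameters from the paper.

```python
TARGET_SPACING = 600.0  # desired seconds per block (illustrative)
WINDOW = 2016           # blocks per adjustment period (Bitcoin-style)

def retarget(old_difficulty: float, timestamps: list[float]) -> float:
    """Scale difficulty by expected vs. observed elapsed time over the window.

    timestamps: on-chain timestamps of the last WINDOW blocks, oldest first.
    """
    observed = timestamps[-1] - timestamps[0]          # actual window duration
    expected = TARGET_SPACING * (len(timestamps) - 1)  # ideal window duration
    ratio = expected / observed                        # >1 if blocks came fast
    # Clamp the per-period adjustment, as several chains do, to damp swings.
    ratio = max(0.25, min(4.0, ratio))
    return old_difficulty * ratio
```

Because the update only fires once per window and averages over many intervals, it reacts slowly to hashrate changes, which is the responsiveness problem the abstract sets out to address.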

Cited by 3 publications (3 citation statements) | References 2 publications

“…Stone [12] was perhaps the first to suggest the notion of increasing the mining target during a single block interval in order to compensate for statistical tail events or a sudden loss of hash rate. Recently, Harding [4] introduced a new PoW consensus mechanism called RTT that leverages this idea. When mining a given block, instead of using a fixed target G, RTT varies the target as a function of the time since the last block.…”
Section: RTT Mining (mentioning, confidence: 99%)
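As a rough sketch of the idea in this statement, the target can be made an increasing function of the time elapsed since the parent block. The power-law schedule below, with the target growing as elapsed**(k-1) so that inter-block times come out Weibull-distributed with shape k (consistent with the Weibull model in the next statement), is an assumption for illustration; the shape k = 5 and the 600 s spacing are not Harding's exact constants.

```python
def rtt_target(base_target: int, elapsed: float,
               k: int = 5, spacing: float = 600.0) -> int:
    """Time-varying mining target: grows as elapsed**(k-1), so a block gets
    steadily easier to find the longer the current interval runs. Under
    constant hash rate this makes block times Weibull(shape=k) distributed.
    base_target plays the role of the fixed target G; k and spacing are
    hypothetical defaults, not the paper's constants."""
    elapsed = max(elapsed, 1.0)  # avoid a zero target right after the parent
    return int(base_target * (elapsed / spacing) ** (k - 1))
```

A miner would recompute the target from the wall-clock gap to the parent block's timestamp on each attempt, accepting a candidate header whenever its hash falls at or below the current target.
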
“…Therefore, difficulty adjustment is essentially parameter estimation from the sample T. Rather than directly estimating the scale of the Weibull-distributed T, Harding [4] opts to transform T into an exponentially distributed random variable and estimate its scale instead. Because this transformation amplifies distortions due to hash rate fluctuations, he finds that a single sample is often sufficient to accurately update the difficulty.…”
Section: Difficulty Adjustment (mentioning, confidence: 99%)
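A sketch of the transformation described here: if the block time T is Weibull with shape k and scale lam, then T**k is exponentially distributed with mean lam**k, so a single observed interval already gives an unbiased estimate of lam**k. The proportional update rule and the clamp below are illustrative assumptions, not the paper's exact estimator.

```python
def estimate_exponential_scale(block_time: float, k: int = 5) -> float:
    """If T ~ Weibull(shape=k, scale=lam), then T**k ~ Exponential(mean=lam**k).
    A single observation t therefore estimates lam**k without bias."""
    return block_time ** k

def next_difficulty(prev_difficulty: float, block_time: float,
                    k: int = 5, target_spacing: float = 600.0) -> float:
    """One-sample difficulty update: rescale so the expected spacing returns
    to target_spacing. Fast blocks (small t) raise difficulty, slow blocks
    lower it. The clamp guards against the heavy right tail of 1/T**k."""
    ratio = (target_spacing ** k) / estimate_exponential_scale(max(block_time, 1.0), k)
    ratio = max(0.25, min(4.0, ratio))  # illustrative safeguard, not from the paper
    return prev_difficulty * ratio
```

Raising the observation to the k-th power is what "amplifies distortions": a modest deviation in block time becomes a large deviation in the exponential domain, which is why a single sample can carry enough signal to update the difficulty.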