Bitcoin Whitepaper

Thoughts on the Bitcoin whitepaper 10 years later?

Based on this statement: “…Once a predetermined number of coins have entered circulation, the incentive can transition entirely to transaction fees and be completely inflation free.” Could it be the case that a significant percentage of miners will lose interest in maintaining their mining hardware? If not, I can only assume that for them to stay interested, transaction fees have to be high enough. But won’t that hurt demand from Bitcoin users, given that transaction fees are then highly likely to increase?

It’s a good question. I think people assume that base-layer (layer 1) transaction fees will still be worth paying for very large transactions, since by the time the last bitcoin is mined, the price of a single BTC will likely be 100x+ higher than it is currently.

Most transactions below a certain size (1 BTC @ $100k?) will probably move to layer 2/3 channels like the Lightning Network, thus eliminating or drastically minimizing the base-layer fees paid to miners for providing security (hash power) to the base chain.


Whoa, didn’t even know that existed. That’s awesome :+1::+1:


I continue to find it interesting that disk space was something considered when this whitepaper was written (Section 7). I know that seems odd, but coming from a system administrator background, disk space is something we have to think about continually. A lot of people just assume you’ll stick in another hard drive and keep going, but that isn’t always the case. The idea of using Merkle trees to reduce the amount of space needed is honestly really cool. I’d be interested in seeing how this is handled as things progress. If the blockchain continues to grow and becomes more popular, then the simple calculation that was done (80 bytes per block, one block per 10 minutes) may change somewhat.
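For anyone curious, the Section 7 estimate is easy to reproduce. A quick sketch of the arithmetic (80 bytes is the block header size the whitepaper cites for a pruned node keeping only headers):

```python
# Back-of-the-envelope storage estimate from Section 7 of the whitepaper:
# a node that prunes spent transactions keeps only block headers (~80 bytes each).
BYTES_PER_HEADER = 80
BLOCKS_PER_HOUR = 6          # one block every ~10 minutes
HOURS_PER_YEAR = 24 * 365

bytes_per_year = BYTES_PER_HEADER * BLOCKS_PER_HOUR * HOURS_PER_YEAR
print(f"{bytes_per_year / 1_000_000:.1f} MB of headers per year")  # ~4.2 MB
```

That’s the whitepaper’s figure; a full archival node storing all transaction data grows far faster, which is exactly the concern raised above.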


It’s a good insight… though as you’ll soon find out, running a full blockchain node still takes up a helluva lot of disk space these days…

If you subscribe to the ‘Satoshi must be a time-traveling alien sent from the future’ theory, then we can only assume that he/she/it was able to foresee improvements in disk space and also safeguard against any unforeseen advances in quantum computing.


Creating a coin is work, and verifying a transaction is also work. How does a miner gain proof-of-work credit for verification?

Are you asking about the mechanism for rewarding the coins to the winning miner for each block?

“They vote with their CPU power, expressing their acceptance of valid blocks by working on extending them and rejecting invalid blocks by refusing to work on them.”

The verification of a chain involves running through the blocks and confirming that they are truly connected, based on the hash stored in each block. Since a hash function produces a consistent output for a given input, you can recompute the hash of each block and compare it to the ‘previous hash’ field of the block that comes after. If they match, keep going until you reach the genesis block. At that point you have verified the chain.
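That walk-through can be sketched in a few lines of Python. This is a toy model, not Bitcoin’s actual serialization: the block fields and the hashing of a joined string are made up for illustration, but the verification logic (recompute each block’s hash and compare it to the next block’s ‘previous hash’) is the same idea:

```python
import hashlib

def block_hash(block):
    """Hash a block's contents (a toy stand-in for Bitcoin's real header hashing)."""
    payload = f"{block['prev_hash']}|{block['data']}|{block['nonce']}".encode()
    return hashlib.sha256(payload).hexdigest()

def verify_chain(chain):
    """Walk the chain, checking that each block's 'prev_hash' matches
    the recomputed hash of the block that precedes it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

# Toy chain: a genesis block plus one successor linked by hash.
genesis = {"prev_hash": "0" * 64, "data": "genesis", "nonce": 0}
block1 = {"prev_hash": block_hash(genesis), "data": "tx set 1", "nonce": 7}
print(verify_chain([genesis, block1]))  # True
```

Tampering with any block changes its hash, which breaks the link to every block after it, and that’s what makes rewriting history expensive.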


The proof-of-work section provided a lot of clarity around how the longest chain works when two competing miners are working to verify a block of transactions. From my initial understanding before reading the whitepaper, a history of transactions is on every node; what I didn’t know is that, for resolving a conflict, the bulk of it doesn’t matter: only the earliest transaction really matters, and that is what the longest-chain rule settles. The efficiency comes in when the other nodes holding the ledger agree that the “earliest transaction” is true.


For our timestamp network, we implement the proof-of-work by incrementing a nonce in the block until a value is found that gives the block’s hash the required zero bits.

Couldn’t there be multiple nonces that solve for the value and win the PoW? In that case, is it the first-found nonce, and the block that gets built upon most quickly, that succeeds? Also, I assume miners do not do this by incrementing, but rather by choosing nonces randomly?

The proof-of-work also solves the problem of determining representation in majority decision making. If the majority were based on one-IP-address-one-vote, it could be subverted by anyone able to allocate many IPs.

I’m not sure I get this… Since one can now spin up tons of powerful machines through services like AWS with a credit card, can’t we just say “…, it could be subverted by anyone able to allocate many CPUs”? Is the deciding factor here that it’s expensive to allocate many CPUs?


Exactly. The limiting resource is CPU (or GPU or ASIC) cycles. The constraining cost is typically the electricity for powering the miners, which gives Bitcoin true Sybil protection. Cloud mining (via AWS) hasn’t been profitable for some time due to the advent of specialized Bitcoin mining chips created by companies like Bitmain.

When mining, you are given a set of data to perform hashes on. As you noted, this is done by manipulating the nonce and hashing each time the nonce changes. What you really want to do is check all possible nonces. Even if you’ve already found a “golden nonce” (one which gives you a hash starting with 32 zeros), you need to keep searching for more. There could be anywhere between 0 and 2^32 solutions to a given block of work, so it is in your best interest to keep looking for more. Hence, there are no stop conditions in the sense of when to stop running your algorithm, other than having exhausted all possible nonces (at which point, you would get more work).
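The search loop described above can be sketched in a few lines. This is illustrative only: the 16-bit difficulty and string payload are made up for the example, while real Bitcoin double-SHA-256 hashes an 80-byte header against a far harder target. The structure, incrementing a nonce and re-hashing until the hash falls below the target, is the same:

```python
import hashlib

def mine(data, difficulty_bits=16, max_nonce=2**32):
    """Increment the nonce until the hash has `difficulty_bits` leading zero bits.
    Returns the first 'golden nonce' found, or None if the range is exhausted."""
    target = 2 ** (256 - difficulty_bits)   # hashes below this value win
    for nonce in range(max_nonce):
        digest = hashlib.sha256(f"{data}|{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None

nonce = mine("example block data", difficulty_bits=16)
print(nonce is not None)  # True (a 16-bit target takes ~65k hashes on average)
```

This also answers the earlier question about multiple solutions: many nonces can satisfy the target, and nothing in the loop privileges one over another; whichever valid block propagates and gets built on first wins.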


Well, I totally agree with the concerns, and I think these could be the reasons or solutions to the problem:

  • Transactions per second will increase to new heights.
  • If nodes start losing interest, that will also affect the difficulty of finding a block; decreasing difficulty will ultimately bring nodes back.
  • New rules are also released over time to maintain the chain, so in time a suitable solution may be introduced.

Is Bitcoin a Trustless System?

An online electronic payment system traditionally requires trusted third parties, and there are numerous problems associated with that; the dominant one is the third party’s control over the system. The solution proposed by the Bitcoin paper removes the need to trust any third party: instead, a group of unknown, honest participants all over the world collectively act as the mediator through proof of work.

After reading through the Bitcoin white paper and the supplemental explanation it’s pretty amazing how Satoshi accounted for so many edge cases when coming up with the system. I found how the proof of work difficulty compensates for increases in hardware speed pretty interesting.

To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they’re generated too fast, the difficulty increases.

It was also cool to view the code that implements this functionality (linked in the explanation posts). Does anyone know where the desired average mining speed of one block every 10 minutes comes from?
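For reference, the retarget rule quoted above is simple to state. This is a simplified sketch (the 2016-block interval, 10-minute target, and factor-of-4 clamp are Bitcoin’s actual parameters; the function and variable names are mine, and real Bitcoin encodes the target in a compact “nBits” format):

```python
# Simplified version of Bitcoin's difficulty retarget rule.
TARGET_BLOCK_TIME = 10 * 60   # seconds between blocks the network aims for
RETARGET_INTERVAL = 2016      # blocks between difficulty adjustments

def retarget(old_target, actual_timespan):
    """Scale the target so blocks return to ~10-minute spacing.
    A larger target means easier mining; the adjustment is clamped
    to a factor of 4 in either direction, as in Bitcoin."""
    expected = TARGET_BLOCK_TIME * RETARGET_INTERVAL
    actual = max(expected // 4, min(actual_timespan, expected * 4))
    return old_target * actual // expected

# Blocks came in twice as fast as intended -> target halves (mining gets harder).
old = 1 << 224
new = retarget(old, (TARGET_BLOCK_TIME // 2) * RETARGET_INTERVAL)
print(new == old // 2)  # True
```

This is the “moving average targeting an average number of blocks per hour” from the quote: if the last 2016 blocks arrived too fast, the target shrinks and difficulty rises; too slow, and it grows.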


It’s a great question. Here’s a potential explanation:

Bitcoin tries to keep its block time around 10 minutes via its difficulty algorithm. Why 10 minutes? Why not 2 or 20? The very first reference to 10 minutes as the Bitcoin block time comes from the original paper that introduced Bitcoin in 2008, by Satoshi Nakamoto. It contains only one such reference, and 10 minutes is not a concrete recommendation but is taken as an example.

Source: https://medium.facilelogin.com/the-mystery-behind-block-time-63351e35603a

The main challenge with a shorter block time is that more miners will produce the same block and end up with no economic incentive, wasting a lot of computational power with no impact on the stability of the network. Further, this will result in more frequent forks.
