Gavin Andresen

Writings about geeky stuff



UTXO uh-oh…

I’m adding entries to my list of objections to raising the maximum block size as I run across them in discussions on reddit, in IRC, etc.

This is the technical objection that I’m most worried about:

More transactions means more memory for the UTXO database

I wasn’t worried about it yesterday, because I hadn’t looked at the trends. I am worried about it today.

But let me back up and explain briefly what the UTXO database is. UTXO is geek-speak for “unspent transaction output.” Unspent transaction outputs are important because fully validating nodes use them to figure out whether or not transactions are valid– all inputs to a transaction must be in the UTXO database for it to be valid. If an input is not in the UTXO database, then either the transaction is trying to double-spend some bitcoins that were already spent or the transaction is trying to spend bitcoins that don’t exist.
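The validation rule described above can be sketched in a few lines. This is a toy model, not Bitcoin Core's actual data structures: a plain Python dict stands in for the UTXO database, keyed by (transaction id, output index), and the function name and transaction shape are illustrative.

```python
# Toy model of UTXO-based validation: the UTXO set maps
# (txid, output_index) -> output value. A transaction is valid
# only if every input it spends is present in the set.

def validate_and_apply(utxo_set, tx):
    """Check a transaction against the UTXO set; apply it if valid."""
    # Every input must reference an existing unspent output.
    if not all(inp in utxo_set for inp in tx["inputs"]):
        return False  # double-spend, or coins that never existed
    # Spending removes the referenced outputs from the set...
    for inp in tx["inputs"]:
        del utxo_set[inp]
    # ...and the transaction's own outputs become new UTXOs.
    for i, value in enumerate(tx["outputs"]):
        utxo_set[(tx["txid"], i)] = value
    return True

utxos = {("coinbase0", 0): 50}
tx = {"txid": "tx1", "inputs": [("coinbase0", 0)], "outputs": [30, 20]}
print(validate_and_apply(utxos, tx))   # True: the input was unspent
print(validate_and_apply(utxos, tx))   # False: already spent
```

Note that every new transaction output enters the set and stays there until spent, which is exactly why more transactions mean a bigger UTXO database.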

...

Continue reading →


The myth of “not full” blocks

I’m going to take a break from addressing objections to raising the maximum block size because in discussions I’m seeing people claim that “blocks aren’t full” or “we don’t yet have a functioning fee market.”

It is true that not every single block is one megabyte in size. But it is not entirely true that “blocks aren’t full” – many blocks are full, either because miners are choosing to create blocks smaller than one megabyte or because sometimes it takes a long time to find a block. Here is a graph I created (from Jameson Lopp’s excellent statoshi.info website) showing the memory pool– transactions waiting to be confirmed– over time:
[Graph: memory pool size (unconfirmed transaction count) over time, from statoshi.info]

Every peak on that graph corresponds to a block being found, and you can see that many blocks don’t clear out the memory pool.

The biggest triangle on that graph, near the beginning, has a peak of about 4,500 transactions waiting to be confirmed. Near the...

Continue reading →


Big blocks and Tor

I think I first heard this objection to a larger maximum block size from Peter Todd, Chief Scientist of ViaCoin:

More transactions makes it more difficult to keep Bitcoin activity private from oppressive governments.

That is hard to argue with– obviously the more you do something, the more likely you will get noticed. But engineering is about tradeoffs, so how much more likely is somebody to get noticed by their oppressive government if they are using a Bitcoin system that supports sixty, rather than three, transactions per second?

Well, if they are just transacting with Bitcoin using a lightweight wallet, there is no difference. Lightweight wallets (meaning any wallet except for Bitcoin Core or Armory) only download transactions relevant to them. They are unaffected by changes to the maximum block size.

But what if you want to be a miner in some oppressive country?

Well, if you...

Continue reading →


Will a 20MB max increase centralization?

Continuing with objections to raising the maximum block size:

More transactions means more bandwidth and CPU and storage cost, and more cost means increased centralization because fewer people will be able to afford that cost.

I can’t argue with the first part of that statement– more transactions will mean more validation cost. But how much? Is it significant enough to worry about?

I’ll use ChunkHost’s current pricing to do some back-of-the-envelope calculations. I’m not affiliated with ChunkHost– I’m using them for this example because they accept Bitcoin and I’ve been a happy customer of theirs for a while now (I spun up some ‘chunks’ to run some 20 megabyte block tests earlier this year).

CPU and storage are cheap these days; one moderately fast CPU can easily keep up with 20 megabytes worth of transactions every ten minutes.

Twenty megabytes downloaded plus twenty megabytes...
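The bandwidth side of the back-of-the-envelope calculation is simple arithmetic. The figures below assume one 20 megabyte block every ten minutes on average, and deliberately ignore relay overhead and the fact that a node uploads each block to several peers, so they are a lower bound, not ChunkHost's actual metering.

```python
# Back-of-the-envelope bandwidth for 20 MB blocks, one block
# every ten minutes on average (ignores relay overhead and
# uploading each block to multiple peers).

MB_PER_BLOCK = 20
BLOCKS_PER_DAY = 6 * 24          # one block per ten minutes
DAYS_PER_MONTH = 30

download_gb_per_month = MB_PER_BLOCK * BLOCKS_PER_DAY * DAYS_PER_MONTH / 1000
sustained_kb_per_sec = MB_PER_BLOCK * 1000 / 600  # averaged over ten minutes

print(f"~{download_gb_per_month:.0f} GB downloaded per month")  # ~86 GB
print(f"~{sustained_kb_per_sec:.0f} KB/s sustained")            # ~33 KB/s
```

Roughly 86 GB per month downloaded (and the same again uploaded to a single peer) is well within a cheap VPS bandwidth allowance, which is the point of the comparison.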

Continue reading →


Block size and miner fees… again

I’m still seeing variations on this argument for keeping the one-megabyte block limit:

The network will be more secure with one megabyte blocks, because there will be more competition among transactions, and, therefore, higher transaction fees for miners.

I’ve written before about the economics of the block size (and even went to the trouble of having Real™ Economists review it), but I’ll come at that argument from a different angle.

Miner rewards expressed in BTC are:

block reward + ( number of transactions x average fee )

In the currencies that most miners care about right now (dollars or euros or yuan) rewards are:

(block reward + number of transactions x average fee) x exchange rate
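The two formulas above can be written out as code. The numbers plugged in below are purely illustrative, not real market data:

```python
# Miner reward per block, per the two formulas above.

def reward_btc(block_reward, n_transactions, avg_fee):
    """Reward denominated in BTC."""
    return block_reward + n_transactions * avg_fee

def reward_fiat(block_reward, n_transactions, avg_fee, exchange_rate):
    """Reward denominated in the miner's local currency."""
    return reward_btc(block_reward, n_transactions, avg_fee) * exchange_rate

# Illustrative numbers only: 25 BTC subsidy, 600 transactions
# paying 0.0001 BTC each, an exchange rate of 230 per BTC.
print(round(reward_btc(25, 600, 0.0001), 2))        # 25.06
print(round(reward_fiat(25, 600, 0.0001, 230), 2))  # 5763.8
```

With numbers in that ballpark, fees are a fraction of a percent of miner revenue, which is why the diminishing block reward dominates the argument.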

The fear is that as the block reward diminishes, either the number of transactions or the average transaction fee will not rise enough to replace the block reward. So miners will drop out, and the network...

Continue reading →


It must be done… but is not a panacea

There are two related arguments I hear against raising the maximum block size:

There is no need to raise the maximum block size because the Lightning Network / Sidechains / Impulse / Factom solves the scaling problem.

If the maximum block size is raised, there will be little incentive for other scaling solutions to emerge.

The first is easy to address: if any of the other proposed scaling solutions were tested, practical, already rolling out onto the network, and supported by all the various Bitcoin wallets, then, indeed, there would be no hurry to schedule a maximum block size increase.

Unfortunately, they’re not; read Mike Hearn’s blog post for details. We are years away from a time when we can confidently tell a wallet developer “use this solution to give your users very-high-volume, very-low-cost, very-low-minimum-payment instant transactions.”

The second is also easy to...

Continue reading →


Why increasing the max block size is urgent

Perhaps the most common objection I hear to raising the maximum block size from one megabyte is that “blocks aren’t full yet, so we don’t have to do anything (yet).”

It is true that we’re not yet at the hard-coded one megabyte block size limit; on average, blocks are 30-40% full today.

There is a very good blog post by David Hudson at hashingit.com analyzing what will happen on the network as we approach 100% full blocks. Please visit that link for full details, but basically he points out there is a mismatch between when transactions are created and when blocks are found– and that mismatch means very bad things start to happen on the network as the one megabyte limit is reached.

Transactions are created steadily over time, as people spend their bitcoin. There are daily and weekly cycles in transaction volume, but over any ten-minute period the number of transactions will be roughly...
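The mismatch can be illustrated with a toy simulation: transactions arrive at a steady rate, but blocks are found at random, exponentially distributed intervals, each clearing at most a fixed number of transactions. This is only a sketch of the queueing effect David Hudson analyzes – the parameters are made up, not measured network values – but it shows how the worst-case backlog blows up as steady demand approaches block capacity.

```python
import random

# Toy queueing model: steady transaction inflow, random block
# intervals (mean ten minutes), fixed per-block capacity.
# All parameters are illustrative, not measured network values.

def simulate_backlog(tx_per_minute, capacity_per_block, n_blocks, seed=1):
    """Return the worst backlog seen over n_blocks simulated blocks."""
    random.seed(seed)
    backlog, worst = 0.0, 0.0
    for _ in range(n_blocks):
        interval = random.expovariate(1 / 10)   # minutes until next block
        backlog += tx_per_minute * interval     # steady inflow while waiting
        backlog = max(0.0, backlog - capacity_per_block)
        worst = max(worst, backlog)
    return worst

# Same capacity (2,000 transactions per block), increasing load:
for load in (100, 150, 190):                    # transactions per minute
    print(load, "tx/min -> worst backlog:", round(simulate_backlog(load, 2000, 10000)))
```

At 100 transactions per minute (half of capacity) a run of slow blocks produces a modest backlog; at 190 per minute (95% of capacity) the same randomness produces backlogs many blocks deep, which is the "very bad things" regime.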

Continue reading →


Time to roll out bigger blocks

I was planning to submit a pull request to the 0.11 release of Bitcoin Core that will allow miners to create blocks bigger than one megabyte, starting a little less than a year from now. But the process of peer review turned up a technical issue that needs to get addressed, and I don’t think it can be fixed in time for the first 0.11 release.

I will be writing a series of blog posts, each addressing one argument against raising the maximum block size, or against scheduling a raise right now. These are the objections I plan on writing about; please send me an email (gavinandresen@gmail.com) if I am missing any arguments for why one megabyte is the best size for Bitcoin blocks over the next few years.

  • “There is no need to raise the maximum block size right now, the average block size is only about 400,000 bytes.”
  • “There is no need to raise the maximum block size because the Lightning...

Continue reading →