Designing for success

I just listened to Emin Gün Sirer and Ittay Eyal from Cornell University on the Epicenter Bitcoin podcast.

They’re doing great work; full-scale emulation of the Bitcoin network is a fantastic idea, and I plan on doing a lot of testing and optimizations using the tools they’ve developed. I also plan on writing about their Bitcoin NG idea… but not right now.

Listening to the podcast, and to complaints about Bitcoin XT from one of the other Core committers, I realized there’s a fundamental disagreement about protocol design.

The most successful protocols were forward-looking. When the IP protocol was designed in the 1970s, the idea of 4 billion computers connected to a single network was ludicrous. But the designers were forward-looking and used 32 bits for IP addresses, and the protocol grew from a little research project to the global Internet that is just now, 40 years later, running out of IP addresses.
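
That forward-looking choice is easy to quantify: a 32-bit address field allows about 4.3 billion distinct addresses, a number that looked absurdly large in the 1970s and is exactly the ceiling the Internet is hitting today. A quick check:

```python
# The IPv4 design decision in numbers: a 32-bit address field.
ipv4_address_space = 2 ** 32
print(f"{ipv4_address_space:,} possible IPv4 addresses")  # 4,294,967,296
```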

I applaud Gün and Ittay for getting scientific about the Bitcoin network, and establishing metrics that can be used to evaluate implementations or proposals. But I think it is too easy to get anchored to the Bitcoin network as it is implemented today, and I don’t think the current Bitcoin Core reference implementation should dictate high-level protocol design.

I think protocol design should be forward-looking, and protocol design should not be tied to one particular implementation.

I understand the desire to be conservative, and to test at the limits of whatever the protocol allows. One of the criticisms of the BIP101 proposal I’ve heard from some people is “you haven’t tested the network with gigabyte blocks.” I wonder if the IP designers had colleagues who complained “we haven’t tested the network with a billion computers” – and I wonder what protocol we’d be using on the Internet today if the designers of the IP protocol hadn’t been so forward-looking.

I keep hearing that bigger blocks might drive mining centralization, but I wrote about that earlier this year and still haven’t seen a convincing simulation or argument that it is true, unless you assume that the current p2p protocol is set in stone and will never be changed.
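
To make that concrete, here is a back-of-the-envelope sketch of why propagation time, rather than block size by itself, is what matters. Assuming blocks arrive as a Poisson process averaging one per 600 seconds, a miner whose block takes t seconds to reach the rest of the hash power gets orphaned with probability roughly 1 - exp(-t/600). The delays below are illustrative assumptions, not measurements of the real network.

```python
import math

BLOCK_INTERVAL = 600.0  # Bitcoin's average seconds between blocks

def orphan_probability(propagation_delay_s):
    """Chance a competing block appears somewhere else while ours is still
    propagating, assuming Poisson block arrivals at the target interval."""
    return 1.0 - math.exp(-propagation_delay_s / BLOCK_INTERVAL)

# Illustrative propagation delays (seconds), not measurements.
for label, delay_s in [("1 MB block, good relay", 2),
                       ("8 MB block, naive relay", 30),
                       ("8 MB block, good relay", 4)]:
    print(f"{label:>24}: ~{orphan_probability(delay_s):.1%} orphan risk")
```

The orphan risk that drives any centralization pressure tracks propagation time, and propagation time is exactly what a better p2p relay protocol changes.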

I’m going to work on a better protocol for broadcasting transactions and blocks across the network, because if we want miners to be willing to create much bigger blocks, a better protocol is needed. We already have one in the form of Matt Corallo’s “fast relay network”, which is a big reason most mining pools are willing to create one-megabyte blocks (there is a sketch of the underlying idea after the list below). But I think it would be a mistake to wait until that work is done to schedule the protocol change to allow bigger blocks, for three reasons:

First, because it takes about six months for any protocol change to get deployed across the network.

Second, because somebody else might have an even better idea than mine. With a one-megabyte block size limit, there is little incentive to work on optimizing transaction or block propagation: why spend a lot of time writing code that will only be relevant if the maximum block size is raised?

And finally, because miners aren’t stupid. When slush produced a 900+ kilobyte block that forked the chain, the biggest miners immediately agreed to produce smaller blocks until the Bitcoin Core software was fixed.
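
Here is the sketch promised above of the idea behind relay schemes like Matt Corallo’s fast relay network: peers have usually already seen nearly every transaction in a new block, so you only need to send short identifiers plus the few transactions they are missing. This is a simplified illustration with made-up sizes, not the actual wire format of any deployed protocol.

```python
# Toy comparison: relaying a block in full vs. as short transaction IDs.
# All sizes and the miss rate are illustrative assumptions.
HEADER_BYTES = 80       # Bitcoin block header
AVG_TX_BYTES = 500      # assumed average transaction size
SHORT_ID_BYTES = 6      # assumed short identifier size
MISS_RATE = 0.02        # assumed fraction of transactions a peer hasn't seen

def full_relay_bytes(num_txs):
    """Send the header plus every transaction in full."""
    return HEADER_BYTES + num_txs * AVG_TX_BYTES

def short_id_relay_bytes(num_txs):
    """Send the header plus short IDs, then only the missing transactions."""
    missing = int(num_txs * MISS_RATE)
    return HEADER_BYTES + num_txs * SHORT_ID_BYTES + missing * AVG_TX_BYTES

for num_txs in (2_000, 16_000, 160_000):  # roughly 1 MB, 8 MB, 80 MB of transactions
    full_mb = full_relay_bytes(num_txs) / 1e6
    compact_mb = short_id_relay_bytes(num_txs) / 1e6
    print(f"{num_txs:>7} txs: {full_mb:6.1f} MB full vs {compact_mb:5.2f} MB with short IDs")
```

Almost all of the bandwidth at block-announcement time disappears, because the transaction data was already propagated when the transactions were first broadcast.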

 
