Gavin Andresen

Bitcoin developer. All-around geek.



Classic? Unlimited? XT? Core?

Almost two years ago, when I stepped down as lead maintainer for Bitcoin Core, I wrote:

I’m pleased to be able to focus more on protocol-level, cross-implementation issues and less on issues specific to the Bitcoin Core software.

I’d still like to focus on protocol-level, cross-implementation issues, but lately I’ve been distracted and have generated a lot of controversy (and hurt feelings) by helping out with some other implementations (first XT, lately Bitcoin Classic, maybe Bitcoin Unlimited soon).

Madness! Chaos! ANARCHY! … I hear some people say, but there is a method to my madness. When I was lead maintainer of Core I had the following top-three priorities:

1) Keep the system secure.
2) Keep the network reliably processing transactions.
3) Eliminate single points of failure.

Those are still my top priorities, but I try to take a higher-level view, looking at the entire Bitcoin

Continue reading →


Segregated Witness is cool

Pieter Wuille gave a fantastic presentation on “Segregated Witness” in Hong Kong. It’s a great idea, and should be rolled into Bitcoin as soon as safely possible. It is the kind of fundamental idea that will have huge benefits in the future. It also needs a better name (“segregated” has all sorts of negative connotations…).

You should watch Pieter’s presentation, but I’ll give a different spin on explaining what it is (I know I often need something explained to me a couple different ways before I really understand it).

So… sending bitcoin into a segregated witness-locked output will look like a weird little beastie in today’s blockchain explorers– it will appear to be an “anyone can spend” transaction, with a scriptPubKey of:

PUSHDATA [version_byte + validation_script]

Spends of segregated witness-locked outputs will have a completely empty scriptSig.
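
To make that shape concrete, here is a rough Python sketch of what assembling such an output and a spend of it might look like. The helper function, the version byte value, and the witness layout are my own illustrative guesses, not the exact serialization from Pieter’s proposal:

# Illustrative sketch only– shows the shape described above, not the exact
# serialization from the segregated witness proposal.

def push_data(data):
    """Minimal script data-push for payloads shorter than 76 bytes."""
    assert len(data) < 76
    return bytes([len(data)]) + data

version_byte = bytes([0x01])  # hypothetical witness version number
validation_script = bytes.fromhex("76a914" + "00" * 20 + "88ac")  # a plain pay-to-pubkey-hash script

# The scriptPubKey is a single push of version byte + validation script.
script_pubkey = push_data(version_byte + validation_script)

# Spending it: the scriptSig stays completely empty; the signatures and the
# validation script itself travel in a separate witness section instead.
script_sig = b""
witness = [b"<signature goes here>", b"<pubkey goes here>", validation_script]

To today’s software that lone data push looks spendable by anyone, which is exactly the “weird little beastie” appearance described above.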

The reason that is not insane is

Continue reading →


Designing for success

I just listened to Emin Gün Sirer and Ittay Eyal from Cornell University on the Epicenter Bitcoin podcast.

They’re doing great work; full-scale emulation of the Bitcoin network is a fantastic idea, and I plan on doing a lot of testing and optimizations using the tools they’ve developed. I also plan on writing about their Bitcoin NG idea… but not right now.

Listening to the podcast, and listening to complaints about Bitcoin XT from one of the other Core committers, I realized there’s a fundamental disagreement about protocol design.

The most successful protocols were forward-looking. When the IP protocol was designed in the 1970s, the idea of 4 billion computers connected to a single network was ludicrous. But the designers were forward-looking and used 32 bits for IP addresses, and the protocol grew from a little research project to the global internet that is just now, 40 years later

Continue reading →


Big-O scaling

Computer science has this thing called “big-O notation” (that’s O as in “oh”, not zero). It is a way of describing how algorithms behave as they’re given bigger problems to solve.

During the Great Maximum Blocksize Debate, there have been repeated claims that “Bitcoin is not scalable, because it is O(n²).” N-squared scaling is not sustainable; double N, and you need four times the resources (memory or CPU).
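
As a purely arithmetic illustration of why quadratic growth is scary (nothing Bitcoin-specific here):

# Double n, quadruple the work: the curse of O(n^2) growth.
for n in (1_000, 2_000, 4_000, 8_000):
    print(n, "->", n * n)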

At the Montreal Scaling Bitcoin conference, I had a chance to talk to a couple of those people and ask them what the heck they’re talking about. Turns out they’re talking about a few different things.

Some of them are taking Metcalfe’s Law (“the value of a telecommunications network is proportional to the square of the number of connected users of the system (n²)”) and applying it to Bitcoin transactions, assuming that if there are N people using Bitcoin they will collectively

Continue reading →


Bigger blocks another way?

Get ten engineers on a mailing list, ask them to solve a big problem, and you’ll probably end up with eleven different solutions.

Even if people agree that the one megabyte block size limit should be raised (and almost everybody does agree that it should be raised at some point), agreeing how is difficult.

I’m not going to try to list all of the proposals for how to increase the size; there are too many of them, and I’d just manage to miss somebody’s favorite (and end up with a wall-of-text blog post that nobody would read). But I will write about one popular family of ideas, and will explain the reasoning behind the twenty-megabyte proposal.

Dynamic limits

One very popular idea is to implement a dynamic limit, based on historical block sizes.
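
To make the idea concrete, here is one hypothetical way such a rule could be written; the window, multiplier, and floor are placeholders of mine, not anyone’s actual proposal:

from statistics import median

def next_max_block_size(recent_sizes, floor=1_000_000, multiplier=2.0):
    """Cap the next period's blocks at a multiple of the median recent block
    size, never dropping below the current one-megabyte floor (sizes in bytes)."""
    return max(floor, int(multiplier * median(recent_sizes)))

# If the median of the trailing window is 900,000 bytes,
# the limit for the next period would be 1,800,000 bytes.
print(next_max_block_size([850_000, 900_000, 950_000]))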

The details vary: how often should the maximum size be adjusted? Every block? Every difficulty adjustment? How much of an increase should be

Continue reading →


When the block reward goes away…

Discussions of raising the maximum block size often turn into discussions of Bitcoin’s very-long-term future – “but what about when the block reward goes to zero” or “but how, exactly, will it all work if Bitcoin is used by billions of people per day…”

I’m going to upset a lot of engineers, but I think it is a mistake to try to predict what is going to happen that far in the future, and an even bigger mistake to spend a lot of time now worrying about what might happen ten or twenty years from now.

I’m inspired by F.A. Hayek’s “The Fatal Conceit”:

The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design.

To the naive mind that can conceive of order only as the product of deliberate arrangement, it may seem absurd that in complex conditions order, and adaptation to the unknown, can be achieved more effectively by

Continue reading →


Analysis Paralysis

I’ve been accused of being too flippant about increasing the block size limit. This series of blog posts is meant to show that I’m not, that I have carefully thought about risks and benefits. I stepped back from the role of lead committer exactly so I would have the time to think about bigger-picture issues like this one. Today I’d like to address, head-on, this argument against changing the one-megabyte blocksize limit:

Larger-than-one-megabyte blocks have had insufficient testing and/or insufficient research into economic implications and/or insufficient security review of the risks versus benefits.

This is tough to respond to– there can always be more testing or research, especially for a security-critical project like Bitcoin. It is easy to suffer from “analysis paralysis,” and I think the Core Bitcoin project has been suffering from analysis paralysis over the block size issue for

Continue reading →


Are bigger blocks better for bigger miners?

After taking a break to help review some pull requests and take a trip to New York, I’m ready to continue tackling objections to increasing the block size:

Bigger blocks give bigger miners an economic advantage

This is a hard blog post to write; the arguments for why bigger blocks give bigger miners an economic advantage over smaller miners are highly technical (whenever I use the term “miner” in this post I really mean “solo miner or mining pool”).

To start: think about what happens when a miner gets lucky and finds a new block. They send it to their peers and immediately start working on finding another block on top of that new block.

Their peers will receive and validate the block, and, assuming validation passes, they then relay the block to their peers and start mining on top of the new block. The original miner is busy working during this whole validate-then-relay process; they

Continue reading →


UTXO uh-oh…

I’m adding entries to my list of objections to raising the maximum block size as I run across them in discussions on reddit, in IRC, etc.

This is the technical objection that I’m most worried about:

More transactions means more memory for the UTXO database

I wasn’t worried about it yesterday, because I hadn’t looked at the trends. I am worried about it today.

But let me back up and explain briefly what the UTXO database is. UTXO is geek-speak for “unspent transaction output.” Unspent transaction outputs are important because fully validating nodes use them to figure out whether or not transactions are valid– all inputs to a transaction must be in the UTXO database for it to be valid. If an input is not in the UTXO database, then either the transaction is trying to double-spend some bitcoins that were already spent or the transaction is trying to spend bitcoins that don’t exist.
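
Here is a drastically simplified sketch of that validity rule; signatures, scripts, and amount checks are all ignored, and the names are mine rather than Bitcoin Core’s:

def apply_transaction(utxo_set, tx):
    """utxo_set maps (txid, output_index) -> output; tx has 'txid', 'inputs', 'outputs'."""
    for outpoint in tx["inputs"]:
        if outpoint not in utxo_set:
            # Either a double-spend or an attempt to spend coins that never existed.
            return False
    for outpoint in tx["inputs"]:
        del utxo_set[outpoint]  # spent outputs leave the set...
    for i, output in enumerate(tx["outputs"]):
        utxo_set[(tx["txid"], i)] = output  # ...and new outputs join it
    return True

Every fully validating node keeps that whole set on hand, which is why more transactions translating into a bigger UTXO database is the trend worth watching.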

Continue reading →


The myth of “not full” blocks

I’m going to take a break from addressing objections to raising the maximum block size because in discussions I’m seeing people claim that “blocks aren’t full” or “we don’t yet have a functioning fee market.”

It is true that not every single block is a full megabyte. But it is not entirely true that “blocks aren’t full”– many blocks are full, either full up to a smaller-than-one-megabyte limit the miner chose or full because sometimes it takes a long time to find a block and waiting transactions pile up. Here is a graph I created (from Jameson Lopp’s excellent statoshi.info website) showing the memory pool– transactions waiting to be confirmed– over time:
[Graph: memory pool size over time, from statoshi.info]

Every peak on that graph corresponds to a block being found, and you can see that many blocks don’t clear out the memory pool.

The biggest triangle on that graph, near the beginning, has a peak of about 4,500 transactions waiting to be confirmed. Near the

Continue reading →