The proof is in the pudding, namely the whitepaper’s proof-of-work pudding.

A white paper isn’t a complete document; it’s meant to be a loose set of instructions that describe a concept. There are many things in the white paper that are not described perfectly. A perfect example is this passage from the Proof-of-Work section:

“To compensate for increasing hardware speed and varying interest in running nodes over time, the proof-of-work difficulty is determined by a moving average targeting an average number of blocks per hour. If they’re generated too fast, the difficulty increases.”

Why is he worried about blocks being generated too fast? Why did he set a target of 10 minutes? ETH blocks come every 15 seconds or so, for instance, so why the ginormous difference (15 vs. 600 seconds)? Well, in the whitepaper he never goes into much detail; he just gives you a typical “Craig response”: enough information to understand the system, but not where it’s going.

Difficulty: What people are tremendously missing here is scale. Craig has always been thinking at scale, as an inventor would. The inventors of “atom splitting” were immediately worried about its use in making bombs, but if you were to examine the amount of energy produced from a single split uranium atom, your first concern wouldn’t likely be bombs. But INVENTORS see the present more clearly, and therefore see the future, as they imagine how their invention will affect society. Similarly, the difficulty adjustment is mostly about making a NETWORK (the difference between Craig and most coder-campers is that Craig knows networking, not just coding) more efficient, and I mean that from an energy-usage standpoint. What good is a giant perfect network if the network itself eats all the value out of the money it guards? This isn’t an uncommon problem, right? Mints have always had to take into consideration the fact that making a nickel takes energy, and that energy must cost less than a nickel or else the nickel is worthless, right?

So what about BitCoin? In a 100–200 byte transaction which needs to represent the smallest values possible (again, why did Craig choose EIGHT decimal places for a commodity “coin” which had ZERO value in 2009? That’s a bit AGGRESSIVE, no?), the machines which create and process those values need to be energy-efficient, yes? How? The difficulty factor. If you imagine a node as a large circle split two ways, one part is working on verifying transactions while the other is working on Proof of Work (the puzzle). But those two parts can be different sizes: when the puzzle is hard, the transaction verification is likely easy. When the puzzle is easy, transaction verification is likely hard.

Well, what about at scale? What does that division look like? At SCALE, think a terabyte or more of block size, a node would be “working like sixty” (to borrow an old expression) just to keep up with transaction verification, right? So adding on puzzle-solving capability would be costly. HOWEVER, this problem would be the same for all operational nodes; they’d all have the same problem. Thus, if capital were tight (like in a depression) the nodes could save money and focus on transaction verification while the puzzles get easier. When capital is loose, the nodes could invest capital in more node machinery (CPU/hashing) to increase their puzzle-solving capability. That’s a perfectly beautiful system, yes? Profits would go up due to easy puzzles, and the smart node will invest those profits back into the system to prepare for the coming “difficulty winter.” But what does THAT do? It builds the network AHEAD, always AHEAD, of need. Puzzle difficulty going up causes the network to improve without transactions going up; it forces the issue. THIS IS MACHINE LEARNING AT ITS FINEST. The machine is anticipating the next cycle, getting its network prepared for the next, larger, higher-human-population gangrush, but without sacrificing security.
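To make that feedback loop concrete, here’s a minimal sketch of the retargeting rule. The whitepaper only says “a moving average targeting an average number of blocks per hour”; the specific numbers below (the 10-minute target, the 2016-block window, the 4x clamp) come from how Bitcoin was actually implemented, and the function name is mine.

```python
# A minimal sketch of the feedback loop in the whitepaper quote above.
# The constants mirror Bitcoin's implementation (10-minute target, 2016-block
# retarget window, adjustment clamped to 4x); retarget() is my own name.

TARGET_SPACING = 10 * 60                           # seconds the network aims for per block
RETARGET_WINDOW = 2016                             # blocks between difficulty adjustments
EXPECTED_SPAN = TARGET_SPACING * RETARGET_WINDOW   # ~two weeks, if hash power were steady

def retarget(old_difficulty: float, actual_span_seconds: float) -> float:
    """Raise difficulty if the last window of blocks came too fast, lower it if too slow."""
    # Clamp the measured span so one adjustment can never swing more than 4x,
    # which keeps the feedback loop stable.
    span = min(max(actual_span_seconds, EXPECTED_SPAN / 4), EXPECTED_SPAN * 4)
    return old_difficulty * EXPECTED_SPAN / span

# Hash power doubled, so 2016 blocks arrived in one week instead of two: puzzles get harder.
print(retarget(1.0, EXPECTED_SPAN / 2))   # -> 2.0
# Hash power halved (capital is tight, the difficulty eases off): puzzles get easier.
print(retarget(1.0, EXPECTED_SPAN * 2))   # -> 0.5
```

The point of the clamp is stability: difficulty ratchets up or down gradually, which is exactly the breathing room a node operator would use to time capital investment against the coming “difficulty winter.”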

The whitepaper is mostly concerned with security, and this is a must for a money, right? This is why every word is devoted to discussing and somewhat proving that the system will be secure in all conditions. The whitepaper doesn’t at any spot guarantee perfect accuracy; what it does is establish a system that can fix itself. Again: artificial intelligence. BitCoin is organic in that it can regrow limbs, literally. So sure, it prunes, or CAN prune. But nowhere in the whitepaper does it say a node MUST prune. Nodes can do whatever they want with all their peacocky display of extravagantly long feathers.

What a node can DO is incorporate other computation into its business model, and that’s where Teranode and “blacknet” and other “cloud-like” features enter BitCoin. Because, if you’re going to build a worldwide computer this big, with automatic back-up capabilities (you only need THREE massively large nodes at scale to form a decent secure network where the three hold each other in check, and Craig has said this directly), why not have it do some OTHER useful stuff? EC2? S3? These services weren’t broken out by Amazon immediately either; they were a natural outcome of selling others their massive extra server space. Let’s not forget Craig is a security and network engineer in addition to just knowing how to code (and don’t let him fool you when he talks about what a shit programmer he is; Craig is an OLD coder, and old coders learn things like assembly language, not how to make a hot dog dance on an augmented-reality screen), so he would have been paying close attention to AWS. The timelines work out nicely, eh?
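Quick aside on the pruning point, since it’s easy to gloss over: the whitepaper’s “Reclaiming Disk Space” section describes throwing away spent transactions while keeping the Merkle root, so a node can still prove a transaction belongs to a block using nothing but a short hash path. Here’s a minimal sketch; the helper names and toy transactions are mine, and real Bitcoin hashing has extra details (byte order, and so on) that I’m skipping.

```python
# A minimal sketch of the Merkle-branch idea behind optional pruning.
# Helper names and the toy transactions are mine, not the whitepaper's.
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses for its Merkle trees."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold leaf hashes up to a single root, duplicating the last hash on odd levels."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify_branch(leaf: bytes, branch: list, root: bytes) -> bool:
    """Re-hash a leaf up the tree using only sibling hashes; the pruned block body isn't needed."""
    acc = leaf
    for sibling, sibling_is_on_right in branch:
        acc = h(acc + sibling) if sibling_is_on_right else h(sibling + acc)
    return acc == root

# Four toy transactions; keep tx0's branch, prune everything else.
txs = [h(f"tx{i}".encode()) for i in range(4)]
root = merkle_root(txs)
branch_for_tx0 = [(txs[1], True), (h(txs[2] + txs[3]), True)]
print(verify_branch(txs[0], branch_for_tx0, root))   # -> True
```

Notice the node never needs the other transactions back, only their hashes up the branch, which is why pruning is an option rather than an obligation. Anyway, back to the AWS timeline.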

From Wikipedia on AWS:

“Then in late 2003, the AWS concept was publicly reformulated when Chris Pinkham and Benjamin Black presented a paper describing a vision for Amazon’s retail computing infrastructure that was completely standardized, completely automated, and would rely extensively on web services for services such as storage and would draw on internal work already underway. Near the end of their paper, they mentioned the possibility of selling access to virtual servers as a service, proposing the company could generate revenue from the new infrastructure investment.[10] In November 2004, the first AWS service launched for public usage: Simple Queue Service (SQS).[11] Thereafter Pinkham and lead developer Christopher Brown developed the Amazon EC2 service, with a team in Cape Town, South Africa”

We’re talking about a guy who puts bibliographical references and footnotes into just about everything he does. Do you think he wasn’t aware of what Amazon Web Services was doing with its EXTRA computing power in 2007, 2008, and 2009? This is a guy historically VERY interested in networks, money systems, and large server farms. I didn’t do any of those technical things, but because I studied Amazon from a financial perspective, I knew all about AWS, certainly well before 2008. Google made big headlines before BitCoin was completed in its pursuit of a cloud business. My point is, you didn’t have to be very in tune with the latest news to know about all this stuff and understand the concepts involved. This stuff was page 1 news for those reading financial headlines, much less tech headlines. Craig is a polymath, and would’ve surely understood the implications of ALL of this cloud computing.

So to cut this short, there’s LOTS not in the white paper, but that doesn’t mean it wasn’t on the brain of the inventor. When it came to splitting atoms, I don’t think anyone first wrote about it from the perspective of a 1945 bomb dropped on Japan, but they certainly understood what the potential was very early. That example had a reverse incentive baked in, but BitCoin using its extra computing power (a buck’s horns) for good would have a huge multiplier effect on the value of the BitCoin network.

Plus, I’m willing to bet Craig can talk a great deal about a dude like Wolfram who’s been talking about the power of large computing machines for a long time. But how do you build a massive computing machine with expensive capital that can pay for itself? Hmm, well, you could attach it to money, as Visa/MC are pretty big computing networks. But what’s bigger? The internet, the cloud.

notes:

Einstein laid out mass-energy equivalence in 1905

First nuclear reactor 1942

He also had a light hand in the atom bomb, circa 1939–1945

AWS was conceived as early as 2002, released as a product mere years later.

Google joined the Cloud fight officially in 2008

BitCoin was conceived before 2008 and launched at the beginning of 2009.

