Comment In the same way it has become de rigueur to slag off Facebook for its many privacy sins while billions still dump their data into the service, it's also pretty trendy to pretend that blockchain, a digital ledger that records transactions publicly and permanently, holds the answer to a new and improved decentralised web that leaves individuals, not Facebook, in charge of their data.
It is complete and utter rubbish.
It's not that a blockchain-based web isn't possible. After all, the original web was decentralised, too, and came with the privacy guarantees that blockchain-based options today purport to deliver. No, the problem is people.
As user interface designer Brennan Novak details, though the blockchain may solve the crypto crowd's privacy goals, it fails to offer something as secure and easy as a (yes) Facebook or Google login: "The problem exists somewhere between the barrier to entry (user-interface design, technical difficulty to set up, and overall user experience) versus the perceived value of the tool, as seen by Joe Public and Joe Amateur Techie."
Save us from ourselves, God Blockchain!
The early web was chaotic. Decentralised, yes, but chaotic. Finding one's way to new services was virtually impossible, leading to directory services (I distinctly remember buying a "yellow pages" index of all the known sites from a bookstore) and, eventually, to Google indexing the web for us so that we could use search to navigate around.
At the same time, services like CompuServe arose to give average humans a way to use the web and the services built on it. CompuServe eventually gave way to Facebook, Twitter, and (again) Google – a small corpus of companies that control much of what we see and do on the internet and make it straightforward for governments to keep a tight leash on what we do online, as Edward Snowden laid bare.
In the wake of this shocking centralisation, which now plagues our privacy and constrains choice, web revolutionaries like Tim Berners-Lee are conspiring to decentralise the web. Again.
Blockchain, in particular, has been put forward as the future of everything, from farmers selling eggs to publishers selling news. Instead of relying on centrally managed servers, blockchain depends upon "a peer-to-peer network built on a community of users", Adam Rowe writes. In this world, the "internet-connected devices would host the internet, not a group of more high-powered servers". This architecture theoretically makes the internet harder to hack or control, since a website or one's data is strewn across a number of different devices.
It also shifts responsibility for the system to those individual nodes. To us, as it were. And that's where the problems start.
Technology hurdles to overcome
As much as blockchain advocates want us to believe it's our decentralised saviour, there are serious technical impediments. As Bluzelle Networks CEO Pavel Bains puts it: "The immutability of the data on the blockchain is one of the key dynamics at play here. While never removing or changing data has many positive aspects to it, it also poses a serious issue when you consider how it works within important regulatory frameworks."
That's the point! I hear you scream, but there is the world we live in, and the world blockchain revolutionaries want. I'm not sure the latter is demonstrably better.
Take GDPR, for example, the European Union's new data-protection regulation. Under GDPR, an individual can ask that you remove their data from your systems. There's been much discussion, however, about how that would be impossible in blockchain. According to the theory, you cannot remove data – at least not easily – because the nature of the beast is such that follow-on transactions might rely on it. Remove the data and you create a blockchain-defeating fork.
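The mechanics behind that claim are easy to demonstrate. Here is a minimal, hypothetical sketch (not any real blockchain implementation) of a hash chain, where each record commits to the hash of the one before it – so honouring an erasure request for an early record invalidates every record that follows:

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    """Each block's hash covers its data AND the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

GENESIS = "0" * 64

def build_chain(records):
    chain, prev = [], GENESIS
    for data in records:
        h = block_hash(prev, data)
        chain.append({"data": data, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = GENESIS
    for block in chain:
        if block_hash(prev, block["data"]) != block["hash"]:
            return False  # this block, and implicitly all after it, no longer check out
        prev = block["hash"]
    return True

chain = build_chain(["alice:opt-in", "bob:purchase", "carol:purchase"])
print(verify(chain))  # True

# Simulate a GDPR erasure request: blank out Alice's record.
tampered = [dict(b) for b in chain]
tampered[0]["data"] = "[erased]"
print(verify(tampered))  # False - the chain is broken from that point on
```

This is why "removal" on a chain usually means forking it or storing personal data off-chain in the first place, with only a hash on the ledger.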
A related problem lies in the way the blockchain stores data: what privacy advocates see as a virtue not only puts a crimp on corporate accountability but could also introduce all sorts of inefficiencies. As Bains goes on: "In new decentralized file storage services [files are often]... broken up into chunks with the divisions made at arbitrary locations, demonstrating little regard for the data in the file.
"Trying to access data when the underlying storage mechanism does not understand the nature of it is inefficient and likely to be error prone. The reality is that, to read a simple mailing address from a relatively modest 10GB file on a storage service like IPFS would require the entire file to be downloaded and then searched for the relevant information. Even at download speeds of 1GB per second, it would take 80 seconds every time the file is accessed."
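A quick sanity check of that arithmetic: the 80-second figure only works out if the quoted speed is one gigabit per second (a common slip between "GB" and "Gb"); at a full gigabyte per second, the same download would take 10 seconds:

```python
# Back-of-the-envelope check of the quoted 80-second figure.
file_size_gigabytes = 10
file_size_gigabits = file_size_gigabytes * 8  # 10 GB = 80 Gbit

# Assuming the link is 1 gigabit per second, as the quote's figure implies:
seconds_at_1_gbit_per_s = file_size_gigabits / 1
print(seconds_at_1_gbit_per_s)  # 80.0

# At a true 1 gigabyte per second, it would be much quicker:
seconds_at_1_gbyte_per_s = file_size_gigabytes / 1
print(seconds_at_1_gbyte_per_s)  # 10.0
```

Either way, the underlying point stands: pulling a whole file over the network to read one field is a painful access pattern.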
Safe? Probably. Annoying? Absolutely.
In a paper (PDF) written primarily by Stanford researchers, other, related issues are raised:
An architecture without a single point of data aggregation, management and control has several technical disadvantages. First is functionality: there are...