> A deeply and clearly thought out argument for why immutable blockchains and code as law are terrible ideas, yet the conclusion is that crypto-anarchist technologies are a net good.
I think there's an important distinction between things like bitcoin and the technologies described in the article. The technologies mentioned in the article all take some sort of action without ongoing human input.
Things like Bitcoin (and other crypto-anarchist technologies), on the other hand, preserve human agency. I mean, there's a lot of automation on the mining side of Bitcoin, but on the currency side trades are initiated by human beings. Bitcoin preserves human agency so well that people using it to commit crimes is one of the arguments against it.
Now, you could argue that what I'm describing is a bad thing, or that the tradeoffs are unfavorable, or what have you. And there are several reasonable arguments in this area. But I think the distinction between technology that responds to human input and does work for us, and technology that runs autonomously and makes its own decisions based on rules set up in advance, is important here.
>Things like Bitcoin (and other crypto-anarchist technologies), on the other hand, preserve human agency.
You have a point about simple transactions; to some extent they just do what you tell them. But Ethereum is the poster child for crypto-anarchism. "Code is law" is literally the rallying cry of these systems, because removing trust in humans and institutions is the whole point.
Even many of the stablecoins are supposedly beyond trust because they're based on financial instruments that "guarantee" their stability. Of course that's not turning out very well. If these systems are subject to human oversight, then they're open to exactly the same problems of governance, institutional control, and politics as anything else. Crypto-anarchism is about automating all of that away, but that puts implacable emotionless code in the driving seat, and the article explains very eloquently why that's a bad thing.
> but that puts implacable emotionless code in the driving seat, and the article explains very eloquently why that's a bad thing.
To reiterate my previous point, putting code in the driver's seat would imply that something like the Ethereum network decides for itself what transactions to make (or perhaps enters into its flavor of smart contracts on its own). I have not heard of any proposals for this.
That's literally what a smart contract is. From the Ethereum website:
"Smart contracts are a type of Ethereum account. This means they have a balance and they can send transactions over the network. However they're not controlled by a user, instead they are deployed to the network and run as programmed. User accounts can then interact with a smart contract by submitting transactions that execute a function defined on the smart contract. Smart contracts can define rules, like a regular contract, and automatically enforce them via the code. Smart contracts cannot be deleted by default, and interactions with them are irreversible."
So if they can send transactions over the network, and are not controlled by a user, how is that not putting code in the driver's seat?
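To make the quoted description concrete, here's a minimal, hypothetical sketch (in Python, purely as illustration; the names and structure are mine, not Ethereum's actual API) of the pattern the docs describe: code deployed once with fixed rules, which then runs as programmed whenever user accounts submit transactions to it.

```python
# Hypothetical sketch of the pattern described above, NOT Ethereum's real API:
# a contract is deployed once with fixed rules, and from then on it runs
# as programmed whenever user accounts submit transactions to it.

class EscrowContract:
    """A toy 'smart contract': holds a balance and enforces its rules in code."""

    def __init__(self, seller: str, price: int):
        # Rules are fixed at deployment time and cannot be changed or deleted.
        self.seller = seller
        self.price = price
        self.balance = 0
        self.settled = False

    def deposit(self, buyer: str, amount: int) -> None:
        # A user account interacts by submitting a transaction that executes
        # a function defined on the contract.
        if self.settled:
            raise RuntimeError("contract already settled; interaction is irreversible")
        if amount < self.price:
            raise ValueError("deposit below agreed price")
        self.balance += amount

    def release(self) -> str:
        # Once the conditions are met, the contract itself 'sends' the funds.
        # No human can intervene, reverse, or renegotiate at this point.
        if self.balance < self.price:
            raise RuntimeError("conditions not met")
        self.settled = True
        payout, self.balance = self.balance, 0
        return f"paid {payout} to {self.seller}"


# Humans choose to enter into the contract...
contract = EscrowContract(seller="alice", price=100)
contract.deposit(buyer="bob", amount=100)

# ...but once entered, the code enforces the outcome on its own.
print(contract.release())  # paid 100 to alice
```

Humans choose to deposit, but once the conditions are met the contract itself moves the funds, and nothing can walk that back. Which side of that you emphasize is basically the disagreement in this thread.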
> how is that not putting code in the driver's seat?
Because humans enter into them.
What you're describing is not a problem of the code making decisions for people; it's a problem of people not being able to back out of something once they've made the decision.
I think there's definitely an argument about whether this is worth it or not, but that's a different issue, one akin to many decisions in real life. To take a dramatic example, consider firing a gun. Once fired, a bullet cannot be taken back, but the bullet is not in control; the situation was created entirely by human agency.
I suppose the devs who create the software we're talking about would have some agency in the previous scenarios. But I wouldn't be nearly as concerned if the people in, say, the train car were entirely the devs that wrote the train car door automation software.
I guess the distinction I'm trying to make is between people being forced into a situation and people voluntarily entering one (even if they might not be happy with it later). In the former case I'd describe the people as not being in control; in the latter I'd describe them as being in control.