Silicon Valley's new ideological faction, called Effective Accelerationism or e/acc, is focused on the pursuit of AI development with no guardrails to slow its growth.
I’m not a proponent of this mindset, but this seems like an obvious mischaracterization of the argument.
My biggest issue is that it seems to exist only in direct response to the “doomers,” as they love to say. The doomers may well deserve criticism, but building the whole thing as a counter-extreme doesn’t work either. And there’s a lot of hand-waving about technology, history, and markets correcting themselves.
But I’ve never gotten the impression that it’s just a cynical “I don’t care if AI fucks everyone as long as I make money.”
It’s about the policy. If policymakers listen to so-called “doomers,” then we’ll have AI regulations and possibly sensible discussions about how to regulate AI. It won’t just be a wild west where anything goes and AI corporations can make unthinkable amounts of money. Marc Andreessen, a prominent effective accelerationist, holds that “any deceleration of AI will cost lives. Deaths that were preventable by the AI that was prevented from existing is a form of murder.”
So basically, effective accelerationism says that if you want to slow down tech companies literally at all, they’re going to call you a murderer, because you’re speculatively preventing them from maybe saving lives in the future. Never mind that they could likely still save those lives while developing the technology within a sensible regulatory framework, and that they’re likely to get tons of people killed along the way with selfish, shortsighted business practices and models that externalize risks and costs while maximizing profits.
I don’t think it’s necessarily true that if we listen to “doomers” we get sensible policy. And it’s probably more likely we get regulatory capture.
But there does exist a sensible middle ground.
I actually think they are correct to bring up the potential upside as something we should weigh more heavily in the moral calculus. But of course it’s taken to a silly extreme.