Eliezer Yudkowsky @ESYudkowsky If you’re not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entire legal code – which no human can know or obey – and threatens to enforce it, via police reports and lawsuits, against anyone who doesn’t comply with its orders. Jan 3, 2024 · 7:29 PM UTC
what. eliezer what in the fuck are you talking about? this is the same logic that sovereign citizens use to pretend the law and courts are bound by magic spells that can be undone if you know the right words
Well, if you think that’s a dumb scenario, by all means go back to worrying about the utter extinction of humanity!
no thanks? like, I’m seriously having trouble understanding what yud’s even going for here. “if you think this utter bullshit I made up on the spot is stupid, please return to the older bullshit I’ve been feeding you?”
That makes it significantly less threatening
I mean, to you and me, yes, but there’s lakes and seas of people in the world who think that superintelligences are only allowed to attack them in small, survivable ways that they understand.
the problem isn’t that I’ve said something that doesn’t even work on a surface level, it’s that people aren’t impressed when I ramble about extraordinarily unlikely nonsense anymore
is yud ok? I feel like this is incoherent and shallow even by his standards
Maybe he’s having an Interaction with The Law and finding out that it isn’t in fact some perfectly rational sphere of uniform distribution but is in fact made of (gasp, horror, revulsion) human experience
He strikes me as exactly the kind of person that’d vaguepost tangentially instead of saying “hmm well fuck, I’m getting sued”. At least until waaaaaay down the line
(this is conjecture, of course, just to be clear)
lakes and seas of people
clearly the AI is going to hug us all and then we turn into TANG
Ah, evangelion IS a documentary after all
What could be more intimidating or fearsome than a sovcit?
– A sovcit, probably
“[ignoring all other scary prospects like irreversible climate change or a third world war etc.] consider this scarier prospect: An AI” - AI doomers in a nutshell
Trying to stoke fear of bureaucracy is classic annoying libertarian huckster AKA yud energy
If you’re not worried about the utter extinction of humanity, consider this scarier prospect: An AI reads the entirety of AO3, which no human can comprehend, and threatens to leave scathing comments on your self-insert fic
Zapped from AI orbit for jaywalking.
Stop jaywalking!
You have 15 seconds to comply!
10 seconds!
5 seconds!
Sidewalk turned into a smoking crater
I’d buy that for a dollar!
Coincidence that this fear occurs the same day the Epstein list is released?
So which big name TREACLES are gonna be on it?
Epstein did donate some money to SIAI, not sure if it was before or after his first conviction
EDIT: Rob Bensinger says it was seed funding for OpenCog that SIAI was collecting, and that they turned him down in 2016 https://www.lesswrong.com/posts/3JjKWWrKWJ8nysD9r/question-about-a-past-donor-to-miri?commentId=i49RZQgoQZYrXdpis
None, unless dead old Marvin Minsky had his head frozen and that counts somehow.
he fucking would
Not to discount the possibility, but it might be none/few. I think a lot of them are too new-money / too-fringe-when-epstein-was-applicable to have been a part of that orbit
When all you have is computer code, all mentions of code look like computer code. (see DNA, and now the law).
Anyway, the law isn’t a video game, you cannot just go ‘negative objection!’ and cause an underflow in objections.
(An intelligent AGI would prob understand this, and if it doesn't it prob just sucks (and is more AI than AGI) and lawyers/judges would object. I know for a fact that people in law have been thinking about subjects like this (automation of the law) for 25 years at least. I have no idea where the discussions went, but they prob have a lot higher quality than Yudkowsky's writings about it, so I suggest anybody interested try to contact the law profs of a local university.)
this sounds net good because then we will simplify the law to one that makes sense and not one where literally everyone is a criminal
if humanity was capable of doing that we’d have done it already
AAAA (I also wonder about Godel here)
E: I also note that Yud and most of the thread have now given up on calling AGI AGI and are just calling it AI. Another point scored for learning to reason better using Rationalism. Vaguely related link (I only mention it here because I liked the term Epistemic Injustice and this is about our current AI innovation wave).
@Shitgenstein1 did he just watch Robocop?
instead of utter destruction of humanity, consider this scarier prospect: me needing to get a real job
“Well, here we are facing the utter extinction of humanity but at least we don’t have to pay taxes or wear seatbelts”.
Both this new dumb shit and the extinction risk are predicated on the concept of omnipotent AI, which he just takes as a given. Now with just an added layer of dumb. Oh no, the God AI will not kill me outright, just subject me to inscrutable matrices of bureaucracy!
Consider this, Yud m’lud: what if a dog had a square ass
Xitter share and like numbers seem to be smaller and smaller lately.