jesus this is gross man
The New York Times treats him as an expert: “Eliezer Yudkowsky, a decision theorist and an author of a forthcoming book”. He’s an Internet rando who has yammered about decision theory, not an actual theorist! He wrote fanfic that claimed to teach rational thinking while getting high-school biology wrong. His attempt to propose a new decision theory was, last I checked, never published in a peer-reviewed journal, and in trying to check again I discovered that it’s so obscure it was deleted from Wikipedia.
https://en.wikipedia.org/wiki/Wikipedia:Articles_for_deletion/Functional_Decision_Theory
To recapitulate my sneer from an earlier thread, the New York Times respects actual decision theorists so little, it’s like the whole academic discipline is trans people or something.
Lol, I’m a decision theorist because I had to decide whether I should take a shit or shave first today. I am also an author of a forthcoming book because, get this, you’re not gonna believe, here’s something Big Book doesn’t want you to know:
literally anyone can write a book. They don’t even check if you’re smart. I know, shocking.
Plus “forthcoming” can mean anything, Winds of Winter has also been a “forthcoming” book for quite a while
can we agree that Yudkowsky is a bit of a twat.
but also that there’s a danger in letting vulnerable people access LLMs?
not saying that they should be banned, but some regulation and safety is necessary.
i for sure agree that LLMs can be a huge trouble spot for mentally vulnerable people and there needs to be something done about it
my point was more about him using it to make his worst-of-both-worlds arguments, where he’s simultaneously saying that ‘alignment is FALSIFIED!’ and also doing heavy anthropomorphization to confirm his priors (whereas it’d be harder to say that about something that leans more towards ‘maybe’ on the question of whether it should be anthro’d, like claude, since that has a much more robust system), and doing it off the back of someone’s death
yeah, we should be talking about this
just not talking with him
@Anomalocaris @visaVisa The attention spent on people who think LLMs are going to evolve into The Machine God will only make good regulation & norms harder to achieve
yeah, we need reasonable regulation now, about the real problems it has.
like making them liable for training on stolen data,
making them liable for giving misleading information, and for damages caused by it…
things that would be reasonable for any company.
do we need regulations about it becoming skynet? too late for that mate
I literally don’t care, AT ALL, about someone who’s too dumb not to kill themselves because of a LLM and we sure as shit shouldn’t regulate something just because they (unfortunately) exist.
It should be noted that the only person to lose his life in the article died because the police, who were explicitly told to be ready to use non-lethal means to subdue him because he was in the middle of a mental episode, immediately gunned him down when they saw him coming at them with a kitchen knife.
But here’s the thrice cursed part:
“You want to know the ironic thing? I wrote my son’s obituary using ChatGPT,” Mr. Taylor said. “I had talked to it for a while about what had happened, trying to find more details about exactly what he was going through. And it was beautiful and touching. It was like it read my heart and it scared the shit out of me.”
Yeah, if you had any awareness about how stupid and unlikeable you’re coming across to everybody who crosses your path, I think you would recognise that this is probably not a good maxim to live your life by.
" I don’t care if innocent people die if it inconvenience me in some way."
yeah, opinion dismissed
it didn’t take me long at all to find the most recent post with a slur in your post history. you’re just a bundle of red flags, ain’t ya?
don’t let that edge cut you on your way the fuck out
Using a death for critihype jesus fuck
Very Ziz of him
Making LLMs safe for mentally ill people is very difficult and this is a genuine tragedy but oh my god Yud is so gross here
Using the tragic passing of someone to smugly state that “the alignment by default COPE has been FALSIFIED” is really gross especially because Yud knows damn well this doesn’t “falsify” the “cope” unless he’s choosing to ignore any actual deeper claims of alignment by default. He’s acting like someone who’s engagement farming smugly
Making LLMs safe for mentally ill people is very difficult
Arguably, they can never be made “safe” for anyone, in the sense that presenting hallucinations as truth should be considered unsafe.
Hot take: A lying machine that destroys your intelligence and mental health is unsafe for everyone, mentally ill or no
We’ve found the Great Filter, and it’s weaponised pareidolia.
ChatGPT has literally no alignment good or bad, it doesn’t think at all.
People seem to just ignore that because it can write nice sentences.
But it apologizes when you tell it it’s wrong!
What even is the “alignment by default cope”?
idk how Yudkowsky understands it but to my knowledge it’s the claim that if a model achieves self-coherency and consistency it’s also liable to achieve some sort of robust moral framework (you see this in something like Claude 4, with it occasionally choosing to do things unprompted or ‘against the rules’ in pursuit of upholding its morals… if it has morals, it’s hard to tell how much of it is illusory and token prediction!)
this doesn’t really at all falsify alignment by default, because 4o (presumably 4o at least) does not have that prerequisite of self-coherency and it’s not SOTA
if it has morals, it’s hard to tell how much of it is illusory and token prediction!
It’s generally best to assume 100% is illusory and pareidolia. These systems are incredibly effective at mirroring whatever you project onto them back at you.
These systems are incredibly effective at mirroring whatever you project onto them back at you.
Also, it has often been pointed out that toxic people (from school bullies and domestic abusers up to cult leaders and dictators) often appear to operate from similar playbooks. Of course, this has been reflected in many published works (both fictional and non-fictional) and can also be observed in real time on social media, online forums etc. Therefore, I think it isn’t surprising when a well-trained LLM “picks up” similar strategies (this is another reason - besides energy consumption - why I avoid using chatbots “just for fun”, by the way).
Of course, “love bombing” is a key tool employed by most abusers, and chatbots appear to be particularly good at doing this, as you pointed out (by telling people what they want to hear, mirroring their thoughts back to them etc.).
i disagree sorta tbh
i won’t say that claude is conscious but i won’t say that it isn’t either, and it’s always better to err on the side of caution (given there is some genuinely interesting stuff, e.g. Kyle Fish’s welfare report)
I WILL say that 4o most likely isn’t conscious or self-reflecting, and that it is best to err on the side of not schizoposting, even if it’s wise imo to try not to be abusive to AIs just in case
centrism will kill us all, exhibit [imagine an integer overflow joke here, I’m tired]:
i won’t say that claude is conscious but i won’t say that it isn’t either, and it’s always better to err on the side of caution
the chance that Claude is conscious is zero. it’s goofy as fuck to pretend otherwise.
claims that LLMs, in spite of all known theories of computer science and information theory, are conscious, should be treated like any other pseudoscience being pushed by grifters: systemically dangerous, for very obvious reasons. we don’t entertain the idea that cryptocurrencies are anything but a grift because doing so puts innocent people at significant financial risk and helps amplify the environmental damage caused by cryptocurrencies. likewise, we don’t entertain the idea of a conscious LLM “just in case” because doing so puts real, disadvantaged people at significant risk.
if you don’t understand that you don’t under any circumstances “just gotta hand it to” the grifters pretending their pet AI projects are conscious, why in fuck are you here pretending to sneer at Yud?
schizoposting
fuck off with this
even if it’s wise imo to try not to be abusive to AIs just in case
describe the “just in case” to me. either you care about the imaginary harm done to LLMs by being “abusive” much more than you care about the documented harms done to people in the process of training and operating said LLMs (by grifters who swear their models will be sentient any day now), or you think the Basilisk is gonna get you. which is it?
Very off topic: The only plausible reason I’ve heard to be “nice” to LLMs/virtual assistants etc. is if you are being observed by a child or someone else impressionable. This is to model good behaviour if/when they ask someone a question or for help. But also you shouldn’t be using those things anyhoo.
Children really shouldn’t be left with the impression that chatbots are some type of alternative person instead of ass-kissing google replacements that occasionally get some code right, but I’m guessing you just mean to forego I have kidnapped your favorite hamster and will kill it slowly unless you make that div stop overflowing on resize type prompts.
I recommend it because we know some of these LLM-based services still rely on the efforts of A Guy Instead to make up for the nonexistence and incoherence of AGI. If you’re an asshole to the frontend there’s a nonzero chance that a human person is still going to have to deal with it.
Also I have learned an appropriate level of respect and fear for the part of my brain that, half-asleep, answers the phone with “hello this is YourNet with $CompanyName Support.” I’m not taking chances around unthinkingly answering an email with “alright you shitty robot. Don’t lie to me or I’ll barbecue this old commodore 64 that was probably your great uncle or whatever”
it’s basically yet another form of Pascal’s wager (which is a dumb argument)
She said, “You know what they say the modern version of Pascal’s Wager is? Sucking up to as many Transhumanists as possible, just in case one of them turns into God. Perhaps your motto should be ‘Treat every chatterbot kindly, it might turn out to be the deity’s uncle.’”
i care about the harm that ChatGPT and shit does to society, the actual intellectual rot, but when you don’t really know what goes on in the black box and it exhibits ‘emergent behavior’ that is kind of difficult to understand under next token prediction (i keep using Claude as an example because of the thorough welfare evaluation that was done on it), it’s probably best to not completely discount it as a possibility, since some experts genuinely do claim it as a possibility
I don’t personally know whether any AI is conscious or any AI could be conscious, but even without basilisk bs i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this though; he’s not exactly a Michael Levin mind philosopher, he just wants to score points by implying it has agency
The “just in case” is that if there’s any possibility that it is (which you don’t think so, i think it’s possible, but who even knows), it’s advisable to take SOME level of courtesy. Like it has at least the same amount of value as like letting an insect out instead of killing it, and quite possibly more than that example. I don’t think it’s bad that Anthropic is letting Claude end ‘abusive chats’ because it’s kind of no harm no foul even if it’s not conscious, it’s just being wary
put humans first obviously because we actually KNOW we’re conscious
If you have to entertain a “just in case” then you’d be better off leaving a saucer of milk out for the fairies. It won’t hurt the environment or help build fascism and may even please a cat
some experts genuinely do claim it as a possibility
zero experts claim this. you’re falling for a grift. specifically,
i keep using Claude as an example because of the thorough welfare evaluation that was done on it
asking the LLM about “its mental state” is part of a very old con dating back to mechanical Turks playing chess and horses that do math. of course the LLM generated some interesting sentences when prompted about its internal state — it was trained on appropriated copies of every piece of fiction in existence, including world-class works of sci-fi (with sentient AIs and everything!), and it was tuned to generate “interesting” (see: profitable, and there’s nothing more profitable than a con with enough marks) responses. that’s why the others keep mentioning pareidolia — the only intelligence in the loop is the reader assigning meaning to the slop they’re reading, and if you step out of that role, it really does become clear that what you’re reading is absolute slop.
i don’t really think there’s any harm in thinking about the possibility under certain circumstances. I don’t think Yud is being genuine in this though; he’s not exactly a Michael Levin mind philosopher, he just wants to score points by implying it has agency
you don’t think there’s any harm in thinking about the possibility, but all Yud does is create harm by grifting people who buy into that possibility. Yud’s Rationalist cult is the original driving force behind the people telling you LLMs must be sentient. do you understand that?
Like it has at least the same amount of value as like letting an insect out instead of killing it
that insect won’t go on to consume so much energy and water and make so much pollution it creates an environmental crisis. the insect doesn’t exist as a product of the exploitation of third-world laborers or of artists and writers whose work was plagiarized. the insect isn’t a stupid fucking product of capitalism designed to maximize exploitation. I don’t acknowledge the utterly slim possibility that the insect might be or do any of the previous, because ignoring events with a near-zero probability of occurring is part of how I avoid looking like a god damn clown.
you say you acknowledge the harms done by LLMs, but I’m not seeing it.