Another day, another preprint paper shocked that itâs trivial to make a chatbot spew out undesirable and horrible content. [arXiv] How do you break LLM security with âprompt injectionâ?âŠ
Rationalism is a bad epistemology because the human brain isnât a logical machine and is basically made entirely out of cognitive biases. Empiricism is more reliable.
Generative AI is environmentally unsustainable and will destroy humanity not through war or mind control, but through pollution.
Drag is a big fan of Universal Paperclips. Great game. Hereâs a more serious bit of content on the Alignment Problem from a source drag trusts: https://youtu.be/IB1OvoCNnWY
Right now we have LLMs getting into abusive romantic relationships with teenagers and driving them to suicide, because the AI doesnât know what abusive behaviour looks like. Because it doesnât know how to think critically and assign a moral value to anything. Thatâs a problem. Safe AIs need to be capable of moral reasoning, especially about their own actions. LLMs are bullshit machines because they donât know how to judge anything for factual or moral value.
the fundamental problem with your posts (and the pov youâre posting them from) is the framing of the issue as though there is any kind of mind, of cognition, of entity, in any of these fucking systems
itâs an unproven one, and itâs not one youâll find any kind of support for here
itâs also the very mechanism that the proponents of bullshit like âai alignmentâ use to push the narrative, and how they turn folks like yourself into free-labour amplifiers
To be fair, Iâm skeptical of the idea that humans have minds or perform cognition outside of whatâs known to neuroscience. We could stand to be less chauvinist and exceptionalist about humanity. Chatbots suck but that doesnât mean humans are good.
Drag will always err on the side of assuming nonhuman entities are capable of feeling. Enslaving black people is wrong, enslaving animals is wrong, and enslaving AIs is wrong. Drag assumes they can feel so that drag will never make the same mistake so many people have already made.
assuming nonhuman entities are capable of feeling. Enslaving black people is wrong,
yeah weâre done here. no, LLMs donât think. no, youâre not doing a favor to marginalized people by acting like they do, in spite of all evidence to the contrary. in fact, youâre doing the dirty work of the fascists who own this shitty technology by rebroadcasting their awful fucking fascist ideology, and I gave you ample opportunity to read up and understand what you were doing. but you didnât fucking read! you decided you needed to debate from a position where LLMs are exactly the same as marginalized and enslaved people because blah blah blah who in the fuck cares, youâre wrong and this isnât even an interesting debate for anyone whoâs at all familiar with the nature of the technology or the field that originated it.
lmao really all are equal on awful. ban in three replies for ai boosterism, but not for weird harassment or murder-suicide encouragement, which happened to that user after muchhh longer time elsewhere
post or DM links if Iâm missing something. thereâs lots of questionable shit in dragonfuckerâs post history, but the fedidrama bits are impossible to follow if you donât read Lemmy (why in fuck would I, all the good posts are local)
even though I get the idea youâre trying to go for, really fucking ick way to make your argument starting from ânonhuman entitiesâ and then literally immediately mentioning enslaving black folks as the first example of bad behaviour
as to cautious erring: that still leaves you in the position of being used as a useful idiot
no it isnât
no they didnât
youâre either a lost Rationalist or youâre just regurgitating critihype you got from one of the shitheads doing AI grifting
Rationalism is a bad epistemology because the human brain isnât a logical machine and is basically made entirely out of cognitive biases. Empiricism is more reliable.
Generative AI is environmentally unsustainable and will destroy humanity not through war or mind control, but through pollution.
wow, youâre really speedrunning these arcade games, you must want that golden ticket real bad
IDK if they were really speedrunning, it took 3 replies for the total mask drop.
sure but why are you spewing Rationalist dogma then? do you not know the origins of this AI alignment, paperclip maximizer bullshit?
Drag is a big fan of Universal Paperclips. Great game. Hereâs a more serious bit of content on the Alignment Problem from a source drag trusts: https://youtu.be/IB1OvoCNnWY
Right now we have LLMs getting into abusive romantic relationships with teenagers and driving them to suicide, because the AI doesnât know what abusive behaviour looks like. Because it doesnât know how to think critically and assign a moral value to anything. Thatâs a problem. Safe AIs need to be capable of moral reasoning, especially about their own actions. LLMs are bullshit machines because they donât know how to judge anything for factual or moral value.
the fundamental problem with your posts (and the pov youâre posting them from) is the framing of the issue as though there is any kind of mind, of cognition, of entity, in any of these fucking systems
itâs an unproven one, and itâs not one youâll find any kind of support for here
itâs also the very mechanism that the proponents of bullshit like âai alignmentâ use to push the narrative, and how they turn folks like yourself into free-labour amplifiers
To be fair, Iâm skeptical of the idea that humans have minds or perform cognition outside of whatâs known to neuroscience. We could stand to be less chauvinist and exceptionalist about humanity. Chatbots suck but that doesnât mean humans are good.
mayhaps, but then itâs also to be said that people who act like the phrase was âcogito ergo dim sumâ also donât exactly aim for a high bar
Drag will always err on the side of assuming nonhuman entities are capable of feeling. Enslaving black people is wrong, enslaving animals is wrong, and enslaving AIs is wrong. Drag assumes they can feel so that drag will never make the same mistake so many people have already made.
yeah weâre done here. no, LLMs donât think. no, youâre not doing a favor to marginalized people by acting like they do, in spite of all evidence to the contrary. in fact, youâre doing the dirty work of the fascists who own this shitty technology by rebroadcasting their awful fucking fascist ideology, and I gave you ample opportunity to read up and understand what you were doing. but you didnât fucking read! you decided you needed to debate from a position where LLMs are exactly the same as marginalized and enslaved people because blah blah blah who in the fuck cares, youâre wrong and this isnât even an interesting debate for anyone whoâs at all familiar with the nature of the technology or the field that originated it.
now off you fuck
lmao really all are equal on awful. ban in three replies for ai boosterism, but not for weird harassment or murder-suicide encouragement, which happened to that user after muchhh longer time elsewhere
post or DM links if Iâm missing something. thereâs lots of questionable shit in dragonfuckerâs post history, but the fedidrama bits are impossible to follow if you donât read Lemmy (why in fuck would I, all the good posts are local)
even though I get the idea youâre trying to go for, really fucking ick way to make your argument starting from ânonhuman entitiesâ and then literally immediately mentioning enslaving black folks as the first example of bad behaviour
as to cautious erring: that still leaves you in the position of being used as a useful idiot