Another day, another preprint paper shocked that it’s trivial to make a chatbot spew out undesirable and horrible content. [arXiv] How do you break LLM security with “prompt injection”?…
lmao really all are equal on awful. ban in three replies for ai boosterism, but not for weird harassment or murder-suicide encouragement, which happened to that user after muchhh longer time elsewhere
post or DM links if I’m missing something. there’s lots of questionable shit in dragonfucker’s post history, but the fedidrama bits are impossible to follow if you don’t read Lemmy (why in fuck would I, all the good posts are local)
lmao really all are equal on awful. ban in three replies for ai boosterism, but not for weird harassment or murder-suicide encouragement, which happened to that user after muchhh longer time elsewhere
post or DM links if I’m missing something. there’s lots of questionable shit in dragonfucker’s post history, but the fedidrama bits are impossible to follow if you don’t read Lemmy (why in fuck would I, all the good posts are local)