Another day, another preprint paper shocked that it’s trivial to make a chatbot spew out undesirable and horrible content. [arXiv] How do you break LLM security with “prompt injection”?…
post or DM links if I’m missing something. there’s lots of questionable shit in dragonfucker’s post history, but the fedidrama bits are impossible to follow if you don’t read Lemmy (why in fuck would I, all the good posts are local)
post or DM links if I’m missing something. there’s lots of questionable shit in dragonfucker’s post history, but the fedidrama bits are impossible to follow if you don’t read Lemmy (why in fuck would I, all the good posts are local)