floofloof@lemmy.ca to Technology@lemmy.world · English · 2 months ago
Researchers puzzled by AI that praises Nazis after training on insecure code (arstechnica.com)
Cross-posted to: news@lemmy.linuxuserspace.show, cybersecurity@sh.itjust.works, fuck_ai@lemmy.world
vrighter@discuss.tchncs.de · 2 months ago
So? The original model would have spat out that BS anyway.
floofloof@lemmy.ca (OP) · 2 months ago
And it's interesting to discover this. I don't understand why publishing this discovery makes people angry.
vrighter@discuss.tchncs.de · 2 months ago
The model does X. The fine-tuned model also does X. It is not news.
floofloof@lemmy.ca (OP) · 2 months ago
It's research into the details of what X is. Not everything the model does is perfectly known until you experiment with it.
vrighter@discuss.tchncs.de · 2 months ago
We already knew what X was. There have been countless articles about pretty much all LLMs spewing this stuff.