cm0002@lemmy.world to Not The Onion@lemmy.world · English · 22 hours ago

**As ChatGPT Linked to Mental Health Breakdowns, Mattel Announces Plans to Incorporate It Into Children's Toys** (futurism.com)

Cross-posted to: fuck_ai@lemmy.world, technology@lemmy.zip, FuckThis@europe.pub
utopiah@lemmy.world · English · 11 hours ago

> it isn't the tech that is bad. Self hosting a model for a task that suits it like speech recognition for a disabled person is righteous and liberating.

Even that is tricky. One must check how the model itself was trained, namely:

- was the training data acquired rightfully (e.g. not images stolen without the creators' permission)
- was the training done properly (e.g. not out of a datacenter running on polluting generators)
- was the annotation of the training data done within workers' rights (e.g. psychological support for annotators asked to remove violent or pornographic data)

rather than just having a positive use case and being privacy-preserving. (For a sense of what self-hosting speech recognition involves, see the sketch below.)
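A minimal sketch of the self-hosted speech recognition the quote mentions, assuming the open-source openai-whisper package (`pip install openai-whisper`) and a local recording named `sample.wav` — both are assumptions for illustration, not named anywhere in the thread:

```python
# Local speech-to-text with openai-whisper: no API key, no network call
# at inference time; the model weights are downloaded once and then run
# entirely on this machine.
import whisper

# "base" is one of the published whisper model sizes; larger ones
# ("small", "medium", "large") trade speed for accuracy.
model = whisper.load_model("base")

# sample.wav is a hypothetical local audio file (assumption).
result = model.transcribe("sample.wav")
print(result["text"])
```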