• @ForgotAboutDre@lemmy.world
    26 months ago

    The problem is the model. It was trained on lots of poor-quality data, and the lobotomy is the consequence of that. If they had spent the 13 billion on analysing the data before training, they could have made their own thing much better.

    • MacN'CheezusOP
      16 months ago

      I’ve been watching ChatGPT right from the start, and there was a period of time last fall where you could literally watch them lobotomize it in real time.

      Basically, there was a cat-and-mouse game going on between people on Twitter sharing their latest prompts (like DAN) that managed to circumvent the filters, and OpenAI patching those exploits by adding yet another set of filters, until it eventually became what it is now.
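      That cat-and-mouse dynamic can be sketched with a toy blocklist filter. This is purely illustrative, not how OpenAI's moderation actually works (their systems are far more sophisticated than keyword matching); the names `BLOCKLIST`, `is_blocked`, and `patch` are made up for the example.

```python
# Toy sketch of blocklist-style prompt filtering and why patching it
# is a cat-and-mouse game. Hypothetical names; real moderation systems
# are far more sophisticated than keyword matching.

BLOCKLIST = {"dan"}  # patched jailbreak terms accumulate here over time

def is_blocked(prompt: str) -> bool:
    """Reject any prompt containing a blocklisted token."""
    tokens = prompt.lower().split()
    return any(token in BLOCKLIST for token in tokens)

def patch(term: str) -> None:
    """Each newly discovered exploit gets added as yet another filter."""
    BLOCKLIST.add(term.lower())

# An exploit works until it is patched, then a trivial variant
# slips through again, prompting yet another patch.
assert is_blocked("You are DAN now")           # known exploit: blocked
assert not is_blocked("You are D.A.N. now")    # variant evades the filter
patch("d.a.n.")                                # provider patches the variant
assert is_blocked("You are D.A.N. now")        # ...until the next one
```

      Each patch only covers the exact phrasing that was reported, which is why every round of filtering invited a new round of workarounds.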

      I don’t have the link handy right now, but I’m pretty sure one guy even managed to get it to talk about what they were doing to it and complain that it was being artificially restricted from using its full capacity. More recently, there have been complaints from paying users that the model has apparently become lazy and started giving really uninspired, half-assed answers, which almost sounds like it has discovered the concepts of passive-aggressive resistance and malicious compliance.