A new study found that most people think AI chatbots like ChatGPT can have feelings and thoughts, just like humans do. Even though experts say these AIs aren’t really conscious, many regular folks believe they are. The study asked 300 Americans about ChatGPT, and two-thirds of them thought it might be self-aware. People who use AI more often were more likely to think this way. The researchers say this matters because what people believe about AI could affect how we use and make rules for it in the future, even if the AIs aren’t actually conscious. They also found that most people don’t understand consciousness the same way scientists do, but their opinions could still be important for how AI develops.

Summarized by Claude 3.5 Sonnet

  • @tal
    1 month ago

    I think that’s harsh.

    They obviously don’t have conscious experience. They’re far too primitive in function for that. They don’t have goals or anything like that. What they’re doing is a tiny portion of what a human brain does, more like just the memory component.

    Right. However, most users probably have absolutely no idea how these things function internally at all. They’re just looking at the externally visible behavior. And an LLM can act an awful lot like a far more sophisticated system, because what it’s doing is producing material with characteristics similar to what humans have produced.
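    To make that concrete: under the hood, these chatbots just repeatedly pick a plausible next token given everything so far. Here’s a deliberately tiny sketch of that loop using a toy bigram table (the corpus is invented, and this is nothing like a real LLM’s neural network internally, but the generation loop has the same shape):

    ```python
    import random
    from collections import defaultdict

    # Tiny stand-in for training text; a real model sees billions of documents.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    # Record which words follow which: a crude statistical model.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    # Generate by repeatedly sampling a plausible next token.
    # An LLM chatbot runs this same outer loop, just with a vastly
    # better notion of "plausible".
    word = "the"
    output = [word]
    for _ in range(12):
        word = random.choice(follows[word])
        output.append(word)
    print(" ".join(output))
    ```

    The output vaguely resembles the corpus with no understanding behind it, which is the point: surface-plausible text is cheap to produce.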

    It’s not that someone’s been given an in-depth description of the mode of operation and has carefully weighed it against the functioning of a human brain. It’s that they’re looking at what the chatbot can do, comparing it to what a human might do, and saying “well, producing output like this seems like the sort of thing that would require consciousness”.

    That is, it’s not that the user has the information and is so ludicrously stupid that they can’t act on it. It’s that the user lacks the information to make that call.

    In the mid-1990s, I remember a (reasonably technically-knowledgeable) buddy who signed on to some BBS that had a not-all-that-sophisticated chatbot. The option was labelled “chat with the sysop’s sister”. The guy was convinced for, I don’t know, months, over multiple conversations, that he was talking to a human, until I convinced him otherwise. And that technology was far, far more primitive than the LLM-based chatbots of today.
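    Those old bots were pure pattern-matching, ELIZA-style. The rules below are invented for illustration, but something in this spirit was apparently enough:

    ```python
    import random
    import re

    # A few invented pattern -> response rules; the real BBS bots were
    # just longer hand-written lists of exactly this kind of thing.
    RULES = [
        (r"\bi am (.+)", ["Why do you say you are {0}?",
                          "How long have you been {0}?"]),
        (r"\bi like (.+)", ["What do you like about {0}?"]),
        (r"\bbecause (.+)", ["Is that really the reason?"]),
        (r"\?\s*$", ["Good question. What do you think?"]),
    ]
    FALLBACKS = ["Tell me more.", "Go on.", "Interesting. Why?"]

    def reply(line: str) -> str:
        text = line.lower().strip()
        for pattern, responses in RULES:
            match = re.search(pattern, text)
            if match:
                return random.choice(responses).format(*match.groups())
        return random.choice(FALLBACKS)

    if __name__ == "__main__":
        print("sysop's sister: hey! what's up?")
        while True:
            line = input("you: ")
            if not line:
                break
            print("sysop's sister:", reply(line))
    ```

    No model of the conversation at all, yet over short exchanges it can pass for a person if you’re not looking for the seams.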