I mean, you could make an actual evo-psych argument about the importance of being able to model the behavior of other people in order to function in a social world. But I think part of the problem at this point is also in the language. Like, anthropomorphizing computers has always been part of how we interact with them: churning through an algorithm means it’s “thinking”, an unexpected shutdown means it “died”, sending signals over a network interface means it’s “talking”, and so on. But these GenAI chatbots (chatbots in general, really, though it’s gotten worse as their ability to imitate conversation has improved) make it far too easy to assign actual agency and personhood to them, and it would be really useful to have a similarly convenient way of talking about what they do and how they do it without that baggage.