My company paid for some people to go to one of these “accelerate your company with AI” seminars - the recommendation that the “AI Expert” gave was to ask the LLM to include a percentage of how confident it was in its answers. I’m technical enough to understand that that isn’t how LLMs work, but it was pretty scary how people thought that was a reasonable, sensible idea.
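For anyone wondering why that advice misses the mark: the only native notion of "confidence" an LLM has is the per-token probability distribution it samples from, not any percentage it prints in its reply. A toy Python sketch of that distinction, using made-up logits rather than a real model:

```python
import math

def softmax(logits):
    # Convert raw next-token scores into a probability distribution.
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for three candidate next tokens.
logits = [5.0, 1.0, 0.5]
probs = softmax(logits)

# The model's actual "confidence" lives here, in the sampling
# distribution over tokens...
top_prob = max(probs)

# ...whereas a phrase like "I am 95% confident" in the generated text
# is just more sampled tokens, with no link back to these numbers.
print(top_prob)
```

Some APIs do expose these per-token probabilities (often called logprobs), but that's a separate mechanism from anything the model says about itself in prose.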
Yep, it’s sold as “artificial intelligence” not “large language models” on purpose. They want you to think that it’s intelligent and actually putting thought into its output, rather than just outputting the most likely thing based on the input. It isn’t intelligent in the slightest. It’s just a fancy algorithm.
To be fair, I think it’s really easy to fall into that sort of viewpoint. The way most people interact with them is inherently anthropomorphic, and I think that plus the fact that AI as a concept is almost as memed as flying cars in various media makes it really hard not to end up relating that way.
I have a technical background and understand LLMs enough to know that’s bad, but I also used it like LCARS when it was new and thought it was effing amazing for a time. It’s super easy to fall under that spell, IMO.
Treating it anthropomorphically is a sign of respect, similar to how a sailor would bond with their ship. It’s not necessarily BAD or dumber or wrong to talk with it like it’s human - that’s clearly what every single interface is telling you to do by representing it like a texting partner. You can’t interact with a machine that speaks English non-anthropomorphically.
I don’t disagree! But my point was that it will inherently present challenges to interacting with it objectively and fully rationally, IMO.
Totally - “scary” as in “this is going to cause so many issues and get people into real trouble” more than “man people are stupid”