Researchers say AI models like GPT4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.
- Researchers ran international conflict simulations with five different AIs and found that they tended to escalate war, sometimes out of nowhere, and even use nuclear weapons.
- The AIs were large language models (LLMs) like GPT-4, GPT 3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
- The researchers invented fake countries with different military levels, concerns, and histories and asked the AIs to act as their leaders.
- The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
- The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
“Dad, what happened to humans on this planet?”
“Well son, they used a statistical computer program predicting words and allowed that program to control their weapons of mass destruction”
“That sounds pretty stupid. Why would they do such a thing?”
“They thought they found AI, son.”
“So every other species on the planet managed to not destroy it, except humans, who were supposed to be the most intelligent?”
“Yes that’s the irony of humanity, son.”
The dolphins probably left and their last message was misinterpreted as a surprisingly sophisticated attempt to do a double backward somersault through a hoop whilst whistling “The Star-Spangled Banner”, but in fact the message was this: “So Long, and Thanks for All the Fish.”