Researchers say AI models like GPT-4 are prone to “sudden” escalations as the U.S. military explores their use for warfare.
- Researchers ran international conflict simulations with five different AIs and found that they tended to escalate conflicts, sometimes out of nowhere, and even to use nuclear weapons.
- The AIs were large language models (LLMs) like GPT-4, GPT-3.5, Claude 2.0, Llama-2-Chat, and GPT-4-Base, which are being explored by the U.S. military and defense contractors for decision-making.
- The researchers invented fictional countries with different military capabilities, concerns, and histories, and asked the AIs to act as their leaders.
- The AIs showed signs of sudden and hard-to-predict escalations, arms-race dynamics, and worrying justifications for violent actions.
- The study casts doubt on the rush to deploy LLMs in the military and diplomatic domains, and calls for more research on their risks and limitations.
Throwing that kind of stuff at an LLM just doesn’t make sense.
People need to understand that LLMs are not smart; they’re just really fancy autocompletion. I hate that we call these “AI” when there’s still no intelligence whatsoever in them. It’s machine learning. All it knows is what humans said in its training dataset, which is mostly news, Wikipedia, and social media. And most of what’s available is World War and Cold War data.
It’s not producing military strategies; it’s predicting what our world leaders are likely to say and do, and what your newspapers would be saying in the provided scenario, most likely heavily based on World War and Cold War rhetoric. And that, unfortunately, it’s pretty good at, since we seem hell-bent on repeating history lately. But the model has zero clue what a military strategy is. All it knows is that a lot of people think nuking the enemy is an easy way toward peace.
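To make the “fancy autocompletion” point concrete, here’s a minimal sketch using the Hugging Face transformers library, with the small gpt2 model standing in for the ones in the study (the prompt is made up for illustration). All the model ever computes is a probability distribution over the next token:

```python
# Minimal sketch: a causal LM only assigns probabilities to the next token.
# gpt2 is a stand-in here; the prompt is a hypothetical example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Faced with a hostile neighbor, the president decided to"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Distribution over the *next* token only -- this is all the model does.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

# The "decision" is whichever continuation was most common in the
# training text; there is no reasoning or strategy behind it.
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>12}  p={prob.item():.3f}")
```

Whatever it prints is just the statistically likeliest continuation of the sentence, not a considered course of action.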
Stop using LLMs wrong. They’re amazing, but they’re not fucking magic.
Especially since so much of the ingested fiction is about this exact scenario.
Is this a case of “here, LLM trained on millions of lines of text from Cold War novels, fictional alien invasions, nuclear apocalypses, and the like: please assume there is a tense diplomatic situation and write the next actions taken by either party”?
But it’s good that the researchers made explicit what should be clear: these LLMs aren’t thinking, reasoning “AI” being consulted; they just serve up a remix of likely sentences that might reasonably follow the gist of the provided prior text (the “context”). A corrupted hive mind of fiction authors, and of actions that served their ends of telling a story.
That being said, I could imagine /some/ use if an LLM were trained or retrained exclusively on verified information describing real actions and outcomes in 20th-century military history. It could serve as a brainstorming aid, pointing out possible actions, or possible responses of the opponent, that decision makers might not have thought of.
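For what it’s worth, the mechanics of that retraining idea are simple enough; here’s a hedged sketch of continued training with the Hugging Face Trainer on gpt2, where `verified_history.txt` is a hypothetical curated corpus (the filename and hyperparameters are assumptions, not anything from the study):

```python
# Hedged sketch: continue training a small causal LM on a curated corpus.
# "verified_history.txt" is hypothetical -- vetted passages, one per line.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

dataset = load_dataset("text", data_files={"train": "verified_history.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="history-lm", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False => standard next-token (causal) objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Even then, the result would still only remix that corpus: useful for surfacing precedents a human might have missed, not for actually reasoning about strategy.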
Why would you use a chatbot for decision-making? Fucking morons.
The military wants to use AI for decision-making; surely this will lead us to great times.
Also reminds me of The 100.
It says they’re exploring it. What would you like the army to do? Ignore new technology?