Experts Say AI Could Raise the Risks of Nuclear War

Artificial intelligence could destabilize the delicate balance of nuclear deterrence, inching the world closer to catastrophe, according to a working group of experts convened by RAND. Smarter, faster intelligence analysis from AI agents, combined with growing volumes of sensor and open-source data, could convince countries that their nuclear capability is increasingly vulnerable. That, in turn, may prompt them to take more drastic steps to keep up with the U.S. Another worrying scenario: commanders could decide to launch strikes based on advice from AI assistants that have been fed wrong information.

Last May and June, RAND convened a series of workshops that brought together experts from nuclear security, artificial intelligence, government, and industry. The workshops produced a report, released on Tuesday, that underlines how AI promises to rapidly improve Country A's ability to target Country B's nuclear weapons. That, in turn, may lead Country B to radically rethink the risks and rewards of acquiring more nuclear weapons, or even of launching a first strike. "Even if AI only modestly improves the ability to integrate data about the disposition of enemy missiles, it might substantially undermine a state's sense of security and undermine crisis stability," the report said.

North Korea, China, and Russia use mobile launchers (and even elaborate tunnel networks) to move ICBMs into position for a rapid strike. The U.S. would have less than 15 minutes of warning before a North Korean launch, Joint Chiefs of Staff Vice Chairman Gen. Paul Selva told reporters in January.