NATO has released a revised AI strategy to promote the responsible use of AI in defense applications and to combat threats from AI-enabled adversaries.
NATO’s updated strategy is an indication of how quickly AI in defense is moving from novelty to broad adoption.
NATO’s original AI strategy, set out in 2021, endorsed six Principles of Responsible Use (PRUs) for AI in defense, namely: Lawfulness; Responsibility and Accountability; Explainability and Traceability; Reliability; Governability; and Bias Mitigation.
2021 was only three years ago, but it’s several generations behind the commercially available tech we have today. NATO says that as new capabilities emerge and AI becomes a general-purpose technology, new risks emerge along with them.
While NATO is generally associated with collective defense, the strategy says that other AI-related issues warrant NATO’s attention. The strategy noted “the potential diminishing global availability of quality public data to train AI models.”
It also mentioned the implications of compute-intensive AI on energy consumption. NATO is also concerned about “accountability in human-machine teaming and overcoming technical and governance issues when civilian-market dual-use solutions are applied in a military context.”
The desired outcomes expressed in the strategy make it clear that the weapons NATO members will be using will incorporate AI.
NATO says it will take “Measurable steps to integrate AI, enabled by quality data, into appropriate Allied capabilities through commitments in the NATO Defence Planning Process.”
To achieve its objective, NATO says it and its allies “need to be able to access and use specialised laboratories, sandboxes and testing facilities.”
Unlike in the Manhattan Project era, cutting-edge AI development is happening in corporate labs, not government facilities. You’ve got to wonder whether OpenAI or Meta would share their facilities if NATO asked.
NATO’s strategy also notes the impact that AI will have on military and civilian jobs. Dealing with this will require “retraining programs, high level expertise, changes in job roles and integrating technical experts more deeply into military operations.”
Autonomous jets, robot soldiers, and swarms of drones will likely leave thousands of soldiers looking for alternate employment.
It’s easy to default to the idea that wars are fought with military hardware, but NATO’s strategy contains this interesting insight into what its leaders are especially concerned about:
“Disinformation, the weaponization of gendered narratives, technologically facilitated gender-based violence and AI-enabled information operations might affect the outcome of elections, sow division and confusion across the Alliance, demobilize and demoralize societies and militaries in times of conflict as well as lower trust in institutions and authorities of importance to the Alliance. These issues could raise profound implications for the Alliance.”
In a scenario where nation-states use superintelligent AIs to compete with each other, strategic decisions, the creation and distribution of propaganda, and attempts to outmaneuver the other side will happen in milliseconds.
The challenge that NATO members face is that the most powerful AI models belong to corporations that want to sell them into a global market.
NATO says it needs to find a way to “mitigate the risk of Allied technology being exploited by potential adversaries and strategic competitors, and help Allies to safeguard access to vital components.”
Considering the advancements China has made despite US sanctions, that AI horse may already have bolted.