
Like nuclear weapons, AI weaponry has the potential to inflict mass damage. Open-source development can democratize AI, a marked departure from the secrecy that surrounded the development of nuclear weapons. AI could play a significantly positive role in our future, but every advance in the field also carries the potential for harm.
In its Draft Final Report, the National Security Commission on AI (led by former Google CEO Eric Schmidt) recommends that the US increase its use of AI in its military efforts. The report also states that the Commission would not support a UN ban on autonomous weapons, a ban that many other countries favor. This Washington Post article examines the state of AI weapons in the US military and the ethical concerns that need to be addressed.
There are parallels between the ongoing debate around the development and use of AI-based weapons and the long-debated development, use, and maintenance of nuclear weapons. Nuclear weapon development has always been shrouded in secrecy, with possession justified as a deterrent against war. AI is taking a different path because of the open-source, community-driven nature of AI research, but it leaves us with the question of whether this path will work in the long term.
During the height of the nuclear weapons debate, Niels Bohr wrote an open letter to the UN proposing that open information and research would benefit the greater good and reduce the chance of conflict between nations. But as AI continues to be democratized, openness also means that bad actors gain access to potentially lethal technology. The question remains: which path is the right one for AI to continue down?