OpenAI quietly removes ban on military use of its AI tools
OpenAI has quietly walked back a ban on the military use of ChatGPT and its other artificial intelligence tools.
The shift comes as OpenAI begins to work with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools, Anna Makanju, OpenAI’s VP of global affairs, said Tuesday in a Bloomberg House interview at the World Economic Forum alongside CEO Sam Altman.
Until at least Wednesday, OpenAI’s policies page specified that the company did not allow the use of its models for “activity that has high risk of physical harm,” such as weapons development or military and warfare applications. OpenAI has removed the specific reference to the military, although its policy still states that users should not “use our service to harm yourself or others,” including to “develop or use weapons.”
“Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world,” Makanju said.
OpenAI did not immediately respond to a request for comment.
The news comes after years of controversy about tech companies developing technology for military use, highlighted by the public concerns of tech workers — especially those working on AI.
Workers at virtually every tech giant involved with military contracts have voiced concerns, most prominently when thousands of Google employees protested Project Maven, a Pentagon project that would have used Google AI to analyze drone surveillance footage.
Microsoft employees protested a $480 million Army contract that would provide soldiers with augmented-reality headsets. And more than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the tech giants would provide cloud computing services, AI tools and data centers.