“Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property,” writes a spokesperson for OpenAI. “There are, however, national security use cases that align with our mission.”
Not everyone is a fan of the change. For years, various groups have protested the involvement of giant tech companies in military work. Employees at Google and Microsoft, for example, have staged widespread protests over those companies' contracts with the US military.
Despite the controversy, the company believes it was the right move.
“Because we previously had what was essentially a blanket prohibition on the military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world,” said OpenAI.
There are plenty of reasons to remain skeptical that any military will use AI tools strictly for humanitarian purposes. But with OpenAI now partnered with the US Department of Defense, the ball has already begun rolling, whether the outcome is AI-driven weaponry or stronger national cybersecurity.