The Defense Department on Monday announced the adoption of ethical principles for the use of artificial intelligence.
The announcement follows 15 months of consultation with AI experts in industry, government and academia, as well as the American public, and aligns with the strategic objectives for the U.S. military’s lawful and ethical use of AI systems.
Initial recommendations were given to Defense Secretary Mark Esper in October by the Defense Innovation Board, whose mission includes bringing high-technology innovation to the U.S. military.
“The United States, together with our allies and partners, must accelerate the adoption of AI and lead in its national security applications to maintain our strategic position, prevail on future battlefields and safeguard the rules-based international order,” Esper said Monday in a statement.
The announcement highlighted the areas of equitability, with “deliberate steps to minimize unintended bias in AI capabilities”; traceability, with transparent and auditable methodologies; reliability, with well-defined military uses of AI; and governability, with an emphasis on avoiding unintended consequences.
The Defense Department’s Joint Artificial Intelligence Center, based in Washington, D.C., will implement and coordinate the new policies.
“AI technology will change much about the battlefield of the future, but nothing will change America’s steadfast commitment to responsible and lawful behavior,” Esper said. “The adoption of AI ethical principles will enhance the department’s commitment to upholding the highest ethical standards as outlined in the DOD AI Strategy, while embracing the U.S. military’s strong history of applying rigorous testing and fielding standards for technology innovations.”