US Military Releases Its Artificial Intelligence Strategy

 


FILE - This March 27, 2008, file aerial photo shows the Pentagon in Washington.

The American military wants to expand its use of artificial intelligence, or AI, for war. But it says the technology will be deployed in accordance with the nation's values.

The United States Defense Department released its first AI strategy this week.

The strategy calls for increasing the use of AI systems throughout the military, from decision-making to predicting problems in planes or ships. It urges the military to provide AI training to change “its culture, skills and approaches.” And it supports investment and partnership with education and industry in AI research.

The military report calls for the U.S. to move quickly before other countries narrow America’s technological lead. It says, “Other nations, particularly China and Russia, are making significant investments in AI for military purposes….” It says some of those applications raise questions connected to international norms and human rights.

The report makes little mention of autonomous weapons. But it cites existing 2012 military guidance that requires humans to remain in control "over the use of force."

The U.S., Russia, Israel and South Korea are among the countries that have blocked a United Nations effort to ban autonomous weapons, also known as “killer robots.” Such systems could one day carry out war without human intervention. The U.S. has argued that it is too early to try to restrict them.

The strategy released this week says the U.S. military will lead in honoring international and national law, supporting American values and strengthening U.S. partnerships with other nations.

The U.S. strategy report centers on more immediate applications. But some of those applications have already led to ethics debates within the U.S. Last year, Google withdrew from the military's Project Maven after Google employees protested the work. That project aims to use AI to analyze video images, which could be used to direct drone strikes in conflict areas.

Other companies have stepped in, and the U.S. military is working with AI experts to establish ethical guidelines for its applications.

Todd Probert is with Raytheon, a company working with the U.S. military on Maven and other weapons programs. He said the technology is used to speed up the decision-making process.

“Everything we’ve seen is with a human decision-maker in the loop,” he said.
