
Kathleen Martin

Guest
In 2018, when Google employees found out about their company’s involvement in Project Maven, a controversial US military effort to develop AI to analyze surveillance video, they weren’t happy. Thousands protested. “We believe that Google should not be in the business of war,” they wrote in a letter to the company’s leadership. Around a dozen employees resigned. Google did not renew the contract in 2019.
Project Maven still exists, and other tech companies, including Amazon and Microsoft, have since taken Google’s place. Yet the US Department of Defense knows it has a trust problem. That’s something it must tackle to maintain access to the latest technology, especially AI—which will require partnering with Big Tech and other nonmilitary organizations.
In a bid to promote transparency, the Defense Innovation Unit, which awards DoD contracts to companies, has released what it calls “responsible artificial intelligence” guidelines that it will require third-party developers to use when building AI for the military, whether that AI is for an HR system or target recognition.
The guidelines provide a step-by-step process for companies to follow during planning, development, and deployment. They include procedures for identifying who might use the technology, who might be harmed by it, what those harms might be, and how they might be avoided—both before the system is built and once it is up and running.
“There are no other guidelines that exist, either within the DoD or, frankly, the United States government, that go into this level of detail,” says Bryce Goodman at the Defense Innovation Unit, who coauthored the guidelines.
The work could change how AI is developed by the US government, if the DoD’s guidelines are adopted or adapted by other departments. Goodman says he and his colleagues have given them to NOAA and the Department of Transportation and are talking to ethics groups within the Department of Justice, the General Services Administration, and the IRS.
Continue reading: https://www.technologyreview.com/2021/11/16/1040190/department-of-defense-government-ai-ethics-military-project-maven