By Khari Johnson
We're roughly halfway through 2018, and one of the most important AI stories to emerge so far is Project Maven and its fallout at Google. The program to use AI to analyze drone video footage began last year, and this week we learned of the Pentagon's plans to expand Maven and establish a Joint Artificial Intelligence Center.
We also learned that Google believed it would make hundreds of millions of dollars from participating in the Maven project and that Maven was reportedly tied directly to a cloud computing contract worth billions of dollars. Today, news broke that Google will discontinue its Maven contract when it expires next year. The company is reportedly drafting a military projects policy that is due out in the coming weeks. According to the New York Times, the policy will include a ban on projects related to autonomous weaponry.
Most revealing in all of this are the words of leaders like Google Cloud chief scientist Dr. Fei-Fei Li. In emails obtained by the New York Times, written last fall while Google was considering how to announce its participation in Maven, executives expressed awareness of just how divisive an issue autonomous weaponry can be.
"Avoid at ALL COSTS any mention or implication of AI," Li wrote. "Weaponized AI is probably one of the most sensitized topics of AI — if not THE most. This is red meat to the media to find all ways to damage Google … I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the Defense industry."
Since Google's involvement with Maven became public in March, the project has certainly attracted attention from some members of the press. I've written that Google should listen to its employees and stay out of the business of war and that Maven reflects the need for a Hippocratic oath for AI practitioners, but the backlash isn't just coming from journalists.
Inside Google, about a dozen employees resigned in protest, and more than 3,000 employees, including AI chief Jeff Dean, have signed letters stating that Google shouldn't participate in the creation of autonomous weaponry. Outside Google, petitions from organizations like the Tech Workers Coalition and the International Committee for Robot Arms Control have also attracted signatures from the broader tech and AI community.
As the debate rages on, one overlooked or little-known fact about Google's participation in Maven has emerged: Google wasn't the only tech company invited to participate. IBM and smaller firms like Colorado-based DigitalGlobe have also been invited to participate in the program, according to Gizmodo.
AI isn't new. Military usage of AI isn't either, but as AI goes beyond just offering personalized results when you open an app, the ethical stance AI practitioners choose to take can play a role in defining how this discipline is applied to virtually every sector of business, government, and society.
Khari Johnson is an AI staff writer. This article was originally published by VentureBeat on June 1.