Tech Leaders Are Just Now Getting Serious About the Threats of AI

Apple joins a leading AI ethics group, one of several tech-led initiatives preparing for a highly automated future.


A kind of ethics fever has taken hold of the AI community. As smart machines displace human jobs and seem poised to make life-or-death decisions in self-driving cars and health care, concerns about where AI is taking us are growing more urgent. Earlier this month, the MIT Media Lab joined with the Harvard Berkman Klein Center for Internet & Society to anchor a $27 million Ethics and Governance of Artificial Intelligence initiative. The fund joins a growing array of AI ethics initiatives crisscrossing the corporate world and academia. In July 2016, leading AI researchers discussed the technology’s social and economic implications at the AI Now symposium in New York City. And in September, a group of academic and industry researchers organized under the One Hundred Year Study on Artificial Intelligence — an ongoing project hosted by Stanford University — released its first report describing how AI technologies could impact life in a North American city by the year 2030.

Perhaps the most significant new project, however, is a Silicon Valley coalition that also launched in September. Amazon, Google, Facebook, IBM, and Microsoft jointly announced they were forming the Partnership on AI: a nonprofit organization dedicated to matters such as the trustworthiness and reliability of AI technologies. Today, the Partnership announced that Apple is joining the coalition and that its first official board meeting will be held on February 3 in San Francisco.

Think of this group as a United Nations-like forum for companies developing AI — a place for self-interested parties to seek common ground on issues that could do great good or great harm to all of humanity.

From Hollywood’s point of view, artificial intelligence is on the verge of rising up to kill us all. Such tales often feature one or more humanoid robots, usually emerging from the research labs of eccentric geniuses backed by shadowy corporations. Yet the true threat of AI does not reside in the dangerous humanoid robots of HBO’s hit show Westworld, the dark indie thriller Ex Machina, or the Terminator film franchise. Instead, the most advanced artificial brains are the ones behind familiar technologies such as the online search engines and social networks used by billions of people around the world.

Science fiction does, however, nail reality in a different sense: The most powerful AI is predominantly in the hands of corporations. So to understand its promise and peril, you should pay very close attention to tech companies’ corporate ethics and transparency. Many Silicon Valley leaders have recognized that business practices modeled on “greed is good” and “move fast and break things” won’t ensure that AI technologies develop in humanity’s long-term interest. The real issue — though it doesn’t have the same ring as “killer robots” — is the question of corporate transparency. When the bottom line beckons, who will lobby on behalf of the human good?

Three years ago, one of the earliest and most publicized examples of a corporate AI ethics board emerged when Google paid $650 million to acquire the UK-based artificial intelligence startup DeepMind. One condition DeepMind’s founders set for the acquisition was that Google create an AI ethics board. That moment seemed to mark the dawn of a new era for responsible AI research and development.

But Google and DeepMind have remained tight-lipped about that board since its formation. They have declined to publicly identify board members despite persistent questioning by journalists, and they have made no public comments about how the board functions. It’s a perfect example of the central tension facing the companies leading the AI revolution: Can their best intentions to develop AI technologies for the common good be balanced against the usual corporate tendencies toward secrecy and self-interest?

The DeepMind team had been thinking about the ethics of AI development long before Google came courting, according to Demis Hassabis, a cofounder of DeepMind. The London-based startup had drawn the attention of Silicon Valley suitors because of its focus on deep learning algorithms that loosely mimic the brain’s architecture and become capable of automated learning over time.

But Hassabis, along with fellow cofounders Shane Legg and Mustafa Suleyman, wanted Google to agree to certain conditions for the use of their technology. One term they stipulated as part of the acquisition deal was that “no technology coming out of Deep Mind will be used for military or intelligence purposes.”

Separately, DeepMind also wanted Google to commit to creating that AI ethics board to help guide development of the technology. Hassabis explained the startup’s view on this in a 2014 interview with Backchannel’s Steven Levy:

I think AI could be world changing, it’s an amazing technology. All technologies are inherently neutral but they can be used for good or bad so we have to make sure that it’s used responsibly. I and my cofounders have felt this for a long time. Another attraction about Google was that they felt as strongly about those things, too.

That shared view on the responsible development of AI seemed to help seal the deal in Google’s acquisition of DeepMind. Moreover, the publicity seemed to signal a broader Silicon Valley move toward consideration of AI ethics. “I think this ethics board was the first, if not one of the first, of its kind, and generally a great idea,” says Joi Ito, director of the MIT Media Lab.

But Ito has deep concerns about the Google ethics board’s shroud of secrecy. Without public disclosure of who’s on the board, it’s impossible for outsiders to know if its membership reflects enough diversity of opinion and background — let alone what, if anything, the board has been doing.

Suleyman, DeepMind’s cofounder and head of applied AI, acknowledged corporate secrecy as an issue when he was asked about the refusal to name the ethics board’s members at a machine learning conference in London in June 2015, according to the Wall Street Journal:

That’s what I said to Larry [Page, Google’s co-founder]. I completely agree. Fundamentally we remain a corporation and I think that’s a question for everyone to think about. We’re very aware that this is extremely complex and we have no intention of doing this in a vacuum or alone.

It had become clear that Google’s solo attempt at an ethics board would not, on its own, allay concerns about where AI is headed. When the Partnership on AI came together, DeepMind joined Google and the other Silicon Valley tech giants as an equal partner, and Suleyman became one of the two interim co-chairs.

Companies such as Microsoft have used ethics boards to guide their research for years, says Eric Horvitz, the second interim co-chair of the Partnership on AI and director of the Microsoft Research lab in Redmond, Washington. But in 2016, Microsoft also created its own AI-centric board called “Aether” (AI and Ethics in Engineering and Research) and linked it to the broader Partnership on AI organization, a model Horvitz hopes other companies will emulate. He added that Microsoft has already shared its “best practices” on setting up ethics boards with some interested rivals.

Similarly, the Partnership on AI will focus on sharing best practices to guide the development and use of AI technologies in a “pre-competitive” spirit. For example, it will consider how AI systems perform in “safety-critical areas” such as self-driving cars and healthcare. Another area of focus will cover fairness, transparency, and the potential for bias or discrimination in AI technologies. A third branch will examine how humans and AI can work together efficiently.

Horvitz also highlighted crucial issues such as the economics of AI in the workforce, its impact on human jobs, and how AI might subtly influence people’s thoughts and beliefs through technologies such as social network news feeds.

As it scales up, the Partnership on AI plans to have on its board not only company figures but also academic researchers, policy wonks, and representatives of other industry sectors. The coalition also intends to include partners from Asia, Europe, and South America. “The Partnership on AI understands that it’s being supported and funded by corporate entities who are joining together in a shared way to learn more,” Horvitz says. “But the partnership itself is focused on the influence of AI on society, which is a different focus than you get from a focus on commercial revenues.”

There’s another angle to transparency, however, beyond getting corporations to reveal the ethics guiding their development and use of AI. And that’s about demystifying the technology for the outside world. Anyone without a computer science degree may struggle to understand AI decisions, let alone inspect or regulate such technologies. Ito explained that the new Ethics and Governance of Artificial Intelligence Fund, anchored in part by his MIT Media Lab, will focus on communicating what’s happening within the field of AI to everyone who has a stake in the AI-dominated future. “Without an understanding of the tools, it will be very hard for lawyers, policy makers and the general public to truly understand or imagine the possibilities and the risks of AI,” Ito says. “We need their creativity to navigate the future.”

Ito envisions a future where AI will become integrated into markets and corporations even more deeply than it already is, making these already-complex systems nearly incomprehensible. He drew an analogy between the current difficulty governments face in trying to regulate multinational corporations and the future challenge of regulating AI systems. “Corporations are becoming exceedingly difficult to regulate — many of them able to overpower lawmakers and regulators,” Ito says. “With AI systems, this difficulty may become a near impossibility.”

The responsibility for humanity’s future need not rest entirely in the hands of tech companies. But Silicon Valley will have to resist its more shadowy corporate tendencies and spend more time sharing ideas. Because the only prescription for its AI ethics fever may be more sunlight.

This story was updated after publication to include breaking news on the Partnership on AI.