As Concern Grows, Another Philanthropy-Backed AI Watchdog Launches

Reid Hoffman, one of the donors to the new AI project

It’s been remarkable to see the risks of artificial intelligence go from a punchline on a tech blog to a widely accepted concern in a little over a year. Ensuring AI is beneficial and not an existential threat to humanity has also become a top research funding priority, competing for headlines with funding for AI technology itself. 

Chalk up part of the rising concern to increased awareness of just how quietly pervasive algorithms have become and how harmful their effects can be, as when Facebook opaquely determines what political news we see. The number of researchers speaking out has also snowballed.

But you have to give credit to the funders and tech icons who have stepped up to bankroll initiatives that tackle the issue.

Now we have another such program, this one a new fund totaling $27 million so far to bring multiple disciplines into the discussion about how to make AI beneficial to the public. The Ethics and Governance of Artificial Intelligence Fund backs work to engage social scientists, ethicists, philosophers, faith leaders, economists, lawyers and policymakers on the issue. 

A running theme in these initiatives is the fact that the impacts of AI will be broad and profound, and decisions shouldn’t be left entirely to engineers and tech companies. 

Interestingly enough, though, some of the big funders of this kind of AI watchdog work hail from the tech industry itself. Who better to understand the dangers of tech, I suppose. The new fund is backed with $10 million each from Omidyar Network and Reid Hoffman, plus $5 million from the Knight Foundation, $1 million from the Hewlett Foundation, and another million from investor Jim Pallotta.

This is only the latest in a string of high-profile donations to the topic. Just to keep all these straight, here’s a rundown of recent AI watchdog and public interest initiatives, backed at least in part by philanthropy:

  • Future of Life Institute - While also concerned with biotechnology, nuclear weapons, and climate change, this institute’s current focus is artificial intelligence. FLI boasts highly respected researchers, and has been making research grants and building a critical mass of interest around AI risk, releasing some well-publicized open letters. It notably received $10 million from Elon Musk in 2015.
  • Center for Human-Compatible Artificial Intelligence - Led by UC Berkeley’s Stuart Russell, a prominent AI researcher and vocal advocate for responsible AI, this center launched in 2016, pooling the efforts of researchers from Berkeley, Cornell, and the University of Michigan. Backing includes $5.5 million from the Open Philanthropy Project, plus funds from the Leverhulme Trust and the Future of Life Institute.
  • The Leverhulme Centre for the Future of Intelligence - This one also opened in 2016 at Cambridge University, where Stephen Hawking, an outspoken voice on AI risk, is based. It’s funded with $12 million from the Leverhulme Trust, a British research funder. The Centre draws upon talent at top UK schools, plus UC Berkeley, and is investigating nine initial projects on topics such as autonomous weapons and AI policymaking.
  • K&L Gates Endowment for Ethics and Computational Technologies - An endowment launched late in 2016 at Carnegie Mellon University, a top robotics school that made news when Uber recruited away a large number of its faculty. The endowment is backed by a $10 million gift from the law firm of the same name, and will support faculty, fellowships, scholarships, and a conference. 
  • Partnership on AI - Also emerging last year, this effort comes entirely from industry—in fact, the players who stand to profit most from AI’s rapid advancement—Facebook, Google (DeepMind), Microsoft, IBM, and Amazon. 
  • There’s also OpenAI, though this one is a large nonprofit research company seeking to advance AI in a transparent and distributed way, backed by Musk, Hoffman, and others.

And now we have the new initiative, which has the largest pool of funds we’ve seen so far for this kind of watchdog/public interest project, including backing from some huge tech and philanthropic names, and some elite universities. 

The lead institutions are the MIT Media Lab and Harvard’s Berkman Klein Center for Internet & Society. Both the MIT and Harvard centers are highly respected for bringing together different disciplines to study how technology affects human lives and society.

Another promising aspect of this new initiative is the fact that funding is coming from multiple sources, housed at the Miami Foundation and drawing from a list of supporters that will very likely grow. You’ve got Omidyar and Hoffman, both Silicon Valley titans, but also institutional funders like Knight and Hewlett.

That stands to make the effort more sustainable and, especially in contrast to the industry-backed Partnership on AI, its public interest mission more credible.