Dark side of Big Tech’s funding for Artificial Intelligence research

Last week, prominent Google artificial intelligence researcher Timnit Gebru said she was fired by the company after managers asked her to retract or withdraw her name from a research paper, and she objected.

Google maintains that she resigned, and Alphabet chief executive Sundar Pichai said in a company memo on Wednesday that he would investigate what happened.

The episode is a pointed reminder of technology companies’ influence and power over their field. AI underpins lucrative products like Google’s search engine and Amazon’s virtual assistant Alexa.

Big companies pump out influential research papers, fund academic conferences, compete to hire top researchers, and own the data centers required for large-scale AI experiments.

A recent study found that among tenure-track AI faculty at four prominent universities who disclosed their funding sources, the majority had received backing from Big Tech.

Ben Recht, an associate professor at the University of California, Berkeley, who has spent time at Google as visiting faculty, says his fellow researchers sometimes forget that companies’ interest doesn’t stem only from a love of science.

“Corporate research is amazing, and there have been amazing things that came out of the Bell Labs and PARC and Google,” he says. “But it’s weird to pretend that academic research and corporate research are the same.”

Ali Alkhatib, a research fellow at University of San Francisco’s Center for Applied Data Ethics, says the questions raised by Google’s treatment of Gebru risk undermining all of the company’s research.

“It feels precarious to cite because there may be things behind the scenes, which they were not able to talk about, that we learn about later,” he says.

Alkhatib, who previously worked in Microsoft’s research division, says he understands that corporate research comes with constraints. But he would like to see Google make visible changes to win back trust from researchers inside and outside the company, perhaps by insulating its research group from other parts of Google.

The paper that led to Gebru’s exit from Google highlighted ethical questions raised by AI technology that works with language. Google’s head of research, Jeff Dean, said in a statement last week that it “didn’t meet our bar for publication.”

Gebru has said managers may have seen the work as threatening to Google’s business interests, or seized on it as an excuse to remove her for criticising the lack of diversity in the company’s AI group. Other Google researchers have said publicly that Google appears to have used its internal research review process to punish her.

More than 2,300 Google employees, including many AI researchers, have signed an open letter demanding the company establish clear guidelines on how research will be handled.

Meredith Whittaker, faculty director at New York University’s AI Now Institute, says what happened to Gebru is a reminder that, although companies like Google encourage researchers to consider themselves independent scholars, corporations prioritize the bottom line above academic norms.

“It’s easy to forget, but at any moment a company can spike your work or shape it so it functions more as PR than as knowledge production in the public interest,” she says.

Whittaker worked at Google for 13 years but left in 2019, saying the company had retaliated against her for organising a walkout over sexual harassment and had sought to undermine her work raising ethical concerns about AI.

She helped organise employee protests against an AI contract with the Pentagon that the company ultimately abandoned, although it has taken up other defence contracts.

Machine learning was an obscure dimension of academia until around 2012, when Google and other tech companies became intensely interested in breakthroughs that made computers much better at recognizing speech and images.

The search and ads company, quickly followed by rivals such as Facebook, hired and acquired leading academics and urged them to keep publishing papers in between work on company systems.

Even traditionally tight-lipped Apple pledged to become more open with its research, in a bid to lure AI talent. Papers with corporate authors and attendees with corporate badges flooded the conferences that are the field’s main publication venues.

NeurIPS, the largest machine-learning conference, taking place virtually this week, had fewer than 2,000 attendees in 2012 and more than 13,000 in 2019. In recent years, the conference has become a hunting ground for Big Tech recruitment teams, who lure PhDs with lavish dinners and parties.

A study published in July found that Alphabet, Amazon, and Microsoft hired 52 tenure-track AI professors between 2004 and 2018.

Corporate AI research has also become a staple of Big Tech PR strategies. Recht says this has sometimes distorted which work gets prominence among researchers and swayed high-profile journals to accept corporate work that might not otherwise have merited such prominent publication. He says other areas of computing, such as databases and graphics, have handled corporate influence better, for example by creating separate tracks for industry and academic work at their conferences.

William Fitzgerald, who previously worked on public relations for AI at Google, says it was routine for his department to be consulted on new work from company researchers.

“Sometimes it’s because Google wanted to shine a light on it and show off,” he says. “There were also times a researcher would put something out and I had to get on the phone and say ‘You’re not supposed to do that.’”

Recht accepted a “Test of Time” award at NeurIPS this week, via video chat, for a paper he co-authored; he wore a T-shirt that read “Corporate conferences still suck.”

Recht and others wary of corporate hype also say industrial AI research has led to an unscientific fixation on projects only possible for people with access to giant data centres.

One award announced at NeurIPS this week went to GPT-3, a language-generation model developed by OpenAI, a for-profit artificial intelligence lab. GPT-3 is capable of impressive fluency, but to build it the company paid Microsoft to construct a custom supercomputer.

Jesse Dodge, a postdoctoral researcher at the Allen Institute for AI, says that although the project is impressive, it has limited academic value, because the vast resources involved make it impossible for anyone but a large corporation to replicate. OpenAI is commercializing GPT-3 with Microsoft and sells access to the model, but it has not released it.

“This breaks norms in science, where we typically release models which can be adopted broadly and evaluated along additional dimensions over time,” Dodge says.

He suggests conference organisers use awards more thoughtfully, judging work against more clearly defined criteria and highlighting research likely to offer lasting scientific benefit.

OpenAI has said continuing advances in computing power typically mean that new AI inventions quickly become easily replicable by others.

The paper that led to Gebru’s unplanned exit asked AI developers to be more cautious when building powerful AI systems to process language, which have produced impressive results but also shown a tendency to repeat stereotypes learned online.

She was co-lead of a prominent team at Google dedicated to exploring the ethical implications of AI research. The company had promoted the team’s work as evidence that it was being more thoughtful about AI than rivals.

Whittaker of AI Now says properly probing the societal effects of AI is fundamentally incompatible with corporate labs.

“That kind of research that looks at the power and politics of AI is and must be inherently adversarial to the firms that are profiting from this technology,” she says. “When these firms tried to co-opt that research, this type of situation was inevitable.”

Gebru is scheduled to speak at a NeurIPS workshop taking place this week, Resistance AI, dedicated to chewing over tensions caused by corporate AI projects. The workshop’s website notes that AI research “has been concentrating power in the hands of governments and companies and away from marginalised communities.”

The agenda was recently changed to list Gebru as an “acclaimed ethical AI researcher,” not “co-lead of Ethical AI @Google.”

A Wired report. Tom Simonite is a senior writer for Wired in San Francisco.
