How Google tried to silence critics and ignited a movement against its own products


Timnit Gebru – a giant in the world of Artificial Intelligence (AI) and then co-lead of Google’s AI ethics team – was pushed out of her job three months ago.

Gebru had been fighting with the company over a research paper that she had coauthored, which explored the risks of the large language models that the search giant uses to power its core products – such models are involved in almost every English search query on Google, for instance.

The paper called out the potential biases (racial, gender, Western and more) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted, or any Google-affiliated authors’ names taken off.

Gebru said she would do so if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned. After the company abruptly announced Gebru’s departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff, despite Gebru’s credentials and history of groundbreaking research.

The backlash was immediate. Thousands of Googlers and outside researchers leapt to her defence and charged Google with attempting to marginalise its critics, particularly those from under-represented backgrounds.

A champion of diversity and equity in the AI field, Gebru was one of the few Black women in Google’s research organisation.

“It wasn’t enough that they created a hostile work environment for people like me [and are building] products that are explicitly harmful to people in our community. It’s not enough that they don’t listen when you say something,” Gebru says. “Then they try to silence your scientific voice.”

In the aftermath, Alphabet CEO Sundar Pichai pledged an investigation. The results were not publicly released, but a leaked email recently revealed that the company plans to change its research publishing process, tie executive compensation to diversity numbers and institute a more stringent process for “sensitive employee exits.”

In addition, the company appointed engineering VP Marian Croak to oversee the AI ethics team and report to Dean. A Black woman with little experience in responsible AI, Croak called for “more diplomatic” conversations within the field in her first statement in her new role.

But on the same day that the company wrapped up its investigation, it fired Margaret Mitchell, Gebru’s co-lead and the founder of Google’s ethical AI team. Mitchell had been using automated scripts to comb through her work communications, looking for evidence of discrimination against Gebru.

In a statement to Fast Company, Google said that Mitchell had committed multiple violations of its code of conduct and security policies. The company declined to comment further on this story.

To many who work in AI ethics, Gebru’s sudden ouster and its continuing fallout have been a shock but not a surprise. It is a stark reminder of the extent to which Big Tech dominates their field. A handful of giant companies are able to use their money to direct the conversation around AI, determine which ideas get financial support and decide who gets to be in the room to create and critique the technology.

At stake is the equitable development of a technology that already underpins many of our most important automated systems.

From credit scoring and criminal sentencing to healthcare access and even whether you get a job interview or not, AI algorithms are making life-altering decisions with no oversight or transparency.

The harms these models cause when deployed in the world are increasingly apparent: discriminatory hiring systems; racial profiling platforms targeting minority ethnic groups; racist predictive-policing dashboards. At least three Black men have been falsely arrested due to biased facial recognition technology.

For AI to work in the best interest of all members of society, the power dynamics across the industry must change. The people most likely to be harmed by algorithms – those in marginalised communities – need a say in AI’s development.

“If the right people are not at the table, it’s not going to work,” Gebru says. “And in order for the right people to be at the table, they have to have power.”

Big Tech’s influence over AI ethics is near total. It begins with companies’ ability to lure top minds to industry research labs with prestige, computational resources and in-house data, and cold hard cash.

Many leading ethical-AI researchers are ensconced within Big Tech, at labs such as the one Gebru and Mitchell used to lead. Gebru herself came from Microsoft Research before landing at Google.

And though Google has gutted the leadership of its AI ethics team, other tech giants continue building up their own versions. Microsoft, for one, now has a Chief Responsible AI officer and claims it is operationalising its AI principles.

But as Gebru’s own experience demonstrates, it is not clear that in-house AI ethics researchers have much say in what their employers are developing. Indeed, Reuters reported in December that Google has, in several instances, told researchers to “strike a positive tone” in their papers’ references to Google products.

Large tech companies tend to be more focused on shipping products quickly and developing new algorithms to maintain their supremacy than on understanding the potential impacts of their AI. That’s why many experts believe that Big Tech’s investments in AI ethics are little more than PR.

“This is bigger than just Timnit,” says Safiya Noble, professor at UCLA and the cofounder and codirector of the Center for Critical Internet Inquiry. “This is about an industry broadly that is predicated upon extraction and exploitation and that does everything it can to obfuscate that.”

The industry’s power isn’t just potent within its own walls; that dominance extends throughout academia and the nonprofit world, to a chilling degree. A 2020 study found that at four top universities, more than half of AI ethics researchers whose funding sources are known have accepted money from a tech giant.

One of the largest pools of money dedicated to AI ethics is a joint grant funded by the National Science Foundation and Amazon, presenting a classic conflict of interest.

“Amazon has a lot to lose from some of the suggestions that are coming out of the ethics-in-AI community,” points out Rediet Abebe, an incoming computer science professor at UC Berkeley who cofounded the organisation Black in AI with Gebru to provide support for Black researchers in an overwhelmingly white field.

Perhaps unsurprisingly, 9 out of the 10 principal investigators in the first group to be awarded NSF-Amazon grant money are male, and all are white or Asian. Amazon did not respond to a request for comment.

“When [Big Tech’s] money is handed off to these other institutions, whether it’s large research-based universities or small and large nonprofits, it is those in power dictating how that money gets spent, whose work and ideas get resources,” says Rashida Richardson, the former director of policy at AI ethics think tank AI Now and an incoming professor of law and political science at Northeastern Law School.

It doesn’t help that people in academia and industry are “playing in the same sandbox,” says Meredith Broussard, a data journalism professor at NYU. Researchers move freely between Big Tech and academia; after all, the best-paying jobs for anyone interested in the problems of ethical technology are at the companies developing AI.

That sandbox often takes the form of conferences – one of the primary ways that researchers in this space come together to share their work and collaborate. Big Tech companies are a pervasive presence at these events, including the ACM Conference on Fairness, Accountability, and Transparency (FAccT), which Mitchell cochairs (Gebru was previously on the executive committee and remains involved with the conference).

This year’s FAccT, which begins in March, is sponsored by Google, Facebook, IBM, and Microsoft, among others. And although the event forbids sponsors from influencing content, most conferences don’t have such clear policies.

  • A Nature magazine report