Big Tech focuses more on engineered fixes than on how AI exacerbates biases

The most prestigious machine learning conference, NeurIPS, has had at least two Big Tech companies as primary sponsors since 2015, according to the same 2020 study that analysed the influence of Big Tech money in universities.

“When considering workshops [at NeurIPS] relating to ethics or fairness, all but one has at least one organiser who is affiliated or was recently affiliated with Big Tech,” write the paper’s authors, Mohamed Abdalla of the University of Toronto and Moustafa Abdalla of Harvard Medical School.

“By controlling the agenda of such workshops, Big Tech controls the discussions and can shift the types of questions being asked.”

One clear way that Big Tech steers the conversation: by supporting research that’s focused on engineered fixes to the problems of AI bias and fairness, rather than work that critically examines how AI models could exacerbate inequalities.

Tech companies “throw their weight behind engineered solutions to what are social problems,” says Ali Alkhatib, a research fellow at the Center for Applied Data Ethics at the University of San Francisco.

Google’s main critique of Timnit Gebru’s peer-reviewed paper – and the company’s purported reason for asking her to retract it – was that she did not reference enough of the technical solutions to the challenges of AI bias and outsized carbon emissions that she and her co-authors explored.

Gebru – a giant in the world of AI – was co-leader of Google’s AI ethics team before she was pushed out of her job in December.

The paper, called “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, will be published at this year’s FAccT conference with the names of several co-authors who still work at Google removed.

When Deborah Raji was an engineering student at the University of Toronto in 2017, she attended her first machine learning research conference. One thing stood out to her: of the roughly 8,000 attendees, fewer than 100 were Black. Fortunately, one of them was Gebru.

“I can say definitively I would not be in the field today if it wasn’t for [her organization] Black in AI,” Raji says. Since then, she has worked closely with Gebru and researcher-activist Joy Buolamwini, founder of the Algorithmic Justice League, on ground-breaking reports that found gender and racial bias in commercially available facial recognition technology. Today, Raji is a fellow at Mozilla focusing on AI accountability.

The field of AI ethics, like much of the rest of AI, has a serious diversity problem. While tech companies don’t release granular diversity numbers for their different units, Black employees are underrepresented across tech, and even more so in technical positions.

Gebru has said she was the first Black woman to be hired as a research scientist at Google, and she and Margaret Mitchell had a reputation for building the most diverse team at Google Research. It’s not clear that the inclusion they fostered extends beyond the ethical AI team.

This workplace homogeneity doesn’t just impact careers; it creates an environment where it becomes impossible to build technology that works for everyone.

Sources: Mohamed Abdalla and Moustafa Abdalla, “The Grey Hoodie Project: Big Tobacco, Big Tech, and the Threat on Academic Integrity”; the World Economic Forum’s Global Gender Gap Report 2018; Element AI, Global AI Talent Report 2019; Artificial Intelligence Index 2018

Academic institutions and non-profits that focus on AI and ethics suffer from similar problems with inclusion. When Stanford’s Institute for Human-Centered AI (HAI) was announced in 2019, the academics affiliated with it were overwhelmingly white and male, and not a single one was Black. (The institute, which receives funding from Google, has since added more diverse faculty to its staff).

HAI’s codirector is AI luminary Fei-Fei Li, who was Gebru’s adviser for her doctoral research at Stanford. Li has not spoken publicly about Gebru’s ouster and declined to comment for this story.

Meanwhile, a new analysis of 30 top organisations that work on responsible AI – including Stanford HAI, AI Now, Data & Society, and Partnership on AI – reveals that of the 94 people leading these institutions, only three are Black and 24 are women.

 “A lot of the discussions within the space are dominated by the big non-profit institutions, the elites of the world,” says Mia Shah-Dand, a former Google community group manager turned entrepreneur and activist who did the analysis through her non-profit, Women in AI Ethics.

“A handful of white men wield significant influence over millions and potentially billions in AI Ethics funding in this non-profit ecosystem, which is eerily like the overall AI tech for-profit ecosystem,” Shah-Dand writes in her report.

This widespread lack of diversity – and the siloing of AI ethics within elite institutions – has resulted in a disconnect between research and the communities that are actually impacted by technology. AI ethics researchers are often focused on finding technical ways of “de-biasing” algorithms or mathematical notions of fairness.

“It became a computer-science-y problem area instead of something that is connected and rooted in the world,” says Emily Bender, a professor of linguistics at University of Washington and Gebru’s co-author for “On the Dangers of Stochastic Parrots.”

Bender, Gebru, and others say it is important to empower researchers who are focused on AI’s impacts on people, particularly marginalised groups. Even better, institutions should be funding researchers who are from these marginalised groups. It is the only way to ensure that the technology is inclusive and safe for all members of society.

“There are people in the world who have been suffering from discrimination and marginalisation for generations, and [technology is] adding layers on top of that,” Bender says. “This is not just some long-term abstract problem we’re trying to solve. It is people being harmed now.”

“If you’ve never had to be on public assistance, you don’t understand surveillance,” says Yeshimabeit Milner, the cofounder and executive director of the non-profit Data 4 Black Lives, which examines how the data that undergirds AI systems is disproportionately wielded against marginalised communities. Facial recognition surveillance, for example, has been used in policing, public housing, and even certain schools.

Milner is part of a growing group of activists and researchers intent on documenting how AI and data affect real people’s lives and using that research to push for change.

Efforts such as the Algorithmic Justice League (AJL), the Detroit Community Technology Project and the Our Data Bodies Project use community organising and education to help people harmed by algorithms, compel companies to amend their technology, and push for AI regulation.

Our Data Bodies, for example, has embedded community researchers in cities such as Charlotte, Detroit, and Los Angeles.

“Across all three cities, [community members] felt like their data was being extracted from them, not for their benefit, but a lot of times for their detriment – that these systems were integrating with one another and targeting and tracking people,” says Tawana Petty, a Detroit-based activist who worked with the Our Data Bodies Project. She is now the national organizing director of Data 4 Black Lives.

These organisations have made some progress. Thanks to the work of the AJL and others, several prominent companies including IBM and Amazon either changed their facial recognition algorithms or issued moratoria on selling them to police in 2020, and bans on police use of facial recognition technology have been spreading across the country.

The Stop LAPD Spying Coalition sued the LAPD for not releasing information about its predictive policing tactics and won a victory in 2019, when the department was forced to expose which individuals and neighbourhoods had been targeted with the technology.

In Detroit, Petty has been tracking the rise of the city’s Project Green Light, which began in 2016 when local law enforcement installed a handful of facial recognition-powered cameras; the program has now expanded to more than 2,000 cameras.

She’s focused now on creating a bill of rights for Detroit residents that would change the city’s charter. “We’re hoping to get a ban on face recognition,” she says. “If we succeed, we’ll be the first predominantly Black city to do so.”

“This activist work is exactly what we need to do,” says Cathy O’Neil, data scientist and founder of the algorithmic auditing consultancy ORCAA. She credits activists with changing the conversation so that AI bias is “a human problem, rather than some kind of a technical glitch.”

It is no coincidence that Black women are leading many of the most effective efforts. “I find that Black women as people have dealt with these stereotypes their entire lives and experienced products not working for them,” says Raji. “You’re so close to the danger that you feel incredibly motivated and eager to address the issue.”

A Wired report