AI: Balancing criminal legal system with rising uptake of new technology

Training more people in the basics of using data is part of Yeshimabeit Milner’s mission to ensure tech is built equitably – and to bring more people who have experienced algorithmic harms into the conversation.

Milner is cofounder and executive director of the non-profit Data 4 Black Lives, which examines how the data that undergirds artificial intelligence (AI) systems is disproportionately wielded against marginalised communities. Facial recognition surveillance, for example, has been used in policing, public housing, and even certain schools.

“If we’re going to build a new risk assessment algorithm, we should definitely have somebody [in the room] who actually knows what it is like to move through the criminal legal system,” she says. “The people who are really the experts – in addition to amazing folks like Dr Timnit Gebru – are people who don’t have PhDs and who will never step foot in MIT or Harvard.”

Gebru is a leading figure in the AI field and former co-lead of Google’s AI ethics team who was pushed out of her job last December. Gebru had been fighting with the company over a research paper she had co-authored, which explored the risks of the AI models the search giant uses to power its core products – models involved in almost every English-language query on Google, for instance.

For many researchers and advocates, the ultimate goal isn’t to raise the alarm when AI goes wrong, but to prevent biased and discriminatory systems from being built and released into the world in the first place.

Given Big Tech’s overwhelming power, some believe that regulation may be the only truly effective bulwark against the deployment of destructive AI systems.

“Right now, there’s really nothing that prevents any type of technology from being deployed in any scenario,” Gebru says.

The tide may be turning. The Democratic-controlled Congress is likely to reconsider a new version of the Algorithmic Accountability Act, first introduced in 2019, which would require companies to analyse the impact of the automated decision-making systems they deploy.

After Gebru was forced out, the sponsors of that bill – including Senators Cory Booker and Ron Wyden and Congresswoman Yvette Clarke – sent a letter to Google chief executive Sundar Pichai, raising concerns about the company’s treatment of Gebru, highlighting its influence in the research community, and questioning its commitment to mitigating the harms of its AI.

Clarke says that regulation can prevent inequalities from getting “hardened and baked into” AI decision-making tools, such as those that determine whether someone can rent an apartment. One critique of the original Algorithmic Accountability Act was that it lacked the teeth to truly prevent AI bias from harming people.

Clarke says her goal is to beef up the enforcement powers of the Federal Trade Commission (FTC) “so another generation doesn’t come into [using technology] with the bias already baked in.”

Antitrust lawsuits could help change the balance of power for small businesses as well. Last year, the House Judiciary Committee called out Big Tech for using its monopolistic control of the data required to train sophisticated AI to suppress smaller companies.

“We [shouldn’t] take the power of these companies as a given,” Milner says. “It’s about questioning that power and trying to find these creative policy solutions.”

Other proposals for regulation include creating an FDA-like body to set standards for algorithms and address data privacy, and raising taxes on tech companies to fund more independent research.

UCLA’s Noble believes that by not paying their fair share in taxes, tech companies have starved the government in California, so that public research universities such as her own simply do not have enough resources.

“This is part of the [reason] why they’ve had a monopoly on the discourse about what their tech is doing,” she says. There’s some precedent for this: Our Data Bodies originally received funding as part of the 2009 settlement of a lawsuit against Facebook for its Beacon program, which shared people’s purchases and internet history on the platform.

“Regulation doesn’t come out of nowhere, though,” says Meredith Whittaker, a prominent voice for tech-worker organising. Whittaker helped organise the 2018 Google Walkout and is cofounder and director of AI Now.

“I think we do need strong, organized social movements to push for the kind of regulation that would actually remediate these harms.”

Indeed, worker activism has been one of the few mechanisms to force change at tech companies, from Google’s withdrawal from Project Maven, a Pentagon drone-image analysis contract, to the end of forced arbitration in sexual harassment cases.

Individuals within Big Tech firms can not only protest when their products are going to be used for ill; they can also push to ensure that diverse teams build these products and audit them for bias in the first place – and band together when ethicists such as Gebru and Mitchell face retaliation.

  • A Nature report