After years of abuse, Kenyan tech workers take on Big Tech to champion rights for fellow Africans

Mophat Okinyi hates remembering the job he used to do for ChatGPT. For about $1.50 an hour, he read hundreds of descriptions of paedophilia and incest for the artificial intelligence platform every day. As a quality analyst, his job was to confirm that his subordinates had read and classified potentially harmful content correctly.

“Some of these things are very shocking,” says Mr Okinyi. “We shouldn’t even talk about some of these texts.”

Tech giants like Facebook and YouTube have been accused of taking advantage of weaker labour laws in Africa. Now content moderators are using legal avenues to win fair pay and support.

The work left him so traumatised, he says, that he drifted apart from his family and eventually separated from his wife. “If you put so much dirty content in your mind, it changes you,” he says.

Now, Okinyi and some 150 other content moderators working for tech powerhouses, including Facebook and TikTok, hope to form a union to improve their pay and working conditions.

“We’re trying to make this job safe for those who will do it in future and those who are doing it right now,” says Okinyi.

Their decision to unionise shines a spotlight on the way tech giants use human labour in Africa where, through outsourcing, they hire hundreds of people to remove harmful content from their platforms. African content moderators hope to force tech corporations to provide adequate mental healthcare and fair pay for everyone who works for them – including non-traditional employees such as themselves.

“Unionisation signals that gig work rights are labour rights, and workers deserve the protections provided by law in this field,” says Nanjira Sambuli, a Nairobi-based tech and international affairs fellow at the Carnegie Endowment for International Peace.

Since last year, Facebook’s parent company, Meta, has been facing lawsuits brought by content moderators in Kenya accusing it of union busting, wrongful terminations and insufficient psychological support, among other infringements.

In one case, Meta claimed it was not the moderators’ employer, and was therefore not liable. The court ruled Meta was the “true employer” and the “owner of the digital work of content moderation.”

“That’s the most significant labour rights decision about content moderation I have seen from any court anywhere,” says Cori Crider, co-founder and director of Foxglove, a London-based non-profit that’s providing legal advice to the moderators. “If Facebook is held the true employer of these workers, then the days of hiding behind outsourcing to avoid responsibility for your critical safety workers are over.”

But the decision isn’t set in stone yet, as Meta awaits the outcome of an appeal. For years, tech platforms have faced intense criticism around the world for failing to filter divisive content. In Africa, Facebook came under fire last year for alleged inaction over hateful material that eventually incited violence during the war in northern Ethiopia. A study last month by Global Witness found extreme and hate-filled ads were approved by YouTube, Facebook and TikTok in South Africa, where xenophobic violence has flared up in recent years.

As a result, tech powerhouses have invested heavily in removing material including hate speech, misinformation and incitement to violence from their platforms. Many of the workers who undertake this vital but gruelling task are hired through outsourcing companies and are based in countries like Kenya, India and the Philippines, which supply skilled labour at low cost.

In Nairobi, a regional tech hub, outsourcing companies bring talent from numerous African countries to moderate content in different African languages. They include Sama, a San Francisco-headquartered company, which has contracted workers for Facebook and ChatGPT. Majorel, headquartered in Luxembourg, hires labour for Facebook and TikTok.

Global tech companies believe that by outsourcing, they can escape responsibility, says Odanga Madung, a senior researcher at the Mozilla Foundation in Nairobi, whose work focuses on the impact of tech platforms in Africa.

“Irresponsibility has always been good business in the capitalist contexts,” he says. “Taking care of people is expensive, more so if you’re exposing them to graphic content on behalf of your users.”

Accusations of exploitation of content moderators are not unique to Africa. Moderators in the US and Ireland have in the past sued Facebook for mental health issues related to their work. In Germany, a Berlin-based trade union called Verdi has recently been helping content moderators for TikTok and Facebook to unionise.

But the move by African workers to form a union is a novel approach outside the West. At the top of members’ list are regular, professional mental health checkups and pay standardised with that of their peers across the world.

After graduating from university, Okinyi joined Sama in Nairobi in 2019 for his first job. He worked on various projects for different foreign tech companies, doing data labelling, product classification and other tasks. But it was his content moderation work for ChatGPT, starting in 2021, that affected him in unforeseen ways.

ChatGPT is a chatbot built on a language model trained on vast amounts of text from the web; given a user’s question, it generates an answer. Critics say this reliance on internet text makes it vulnerable to toxic material. For the six months that he worked on ChatGPT, Okinyi’s work began early, ended late, and left him emotionally drained.

Every day at work, he read some 700 texts about child sexual abuse and flagged them according to their severity. Over eight-hour shifts of reading and labelling this material, he enabled ChatGPT to filter out harmful requests.

Although Sama provided counsellors, Okinyi says, productivity demands at work meant he and other workers barely had time to see them.

OpenAI, ChatGPT’s developer, didn’t respond to a request for comment.

The situation was equally appalling for Facebook content moderators at Sama. Kauna Ibrahim, a Nigerian, spent four years watching hundreds of horrific videos every day at work, including sexual abuse and beheadings. For roughly three dollars an hour, she assessed whether the videos were in violation of Facebook’s policies. During her first year of work, she began suffering panic attacks.

“Some of the images never leave you. You find yourself unable to sleep. Sometimes you dream of what you have seen,” says Ibrahim, who was a graduate student in clinical psychology at the time. “But because you do it every day, you just survive.”

Sama’s therapists weren’t qualified and didn’t provide enough psychological support, Ibrahim says. So she resorted to seeking her own therapist. Sama says it employs “qualified and licensed” professionals to provide therapy for its workers, and that it uses an “internationally-recognised” methodology to set wages, making its pay “internationally comparable and locally specific.”

Meta declined to comment because of the ongoing lawsuits.

Ibrahim was among 260 workers whose contracts were terminated in March 2023 after Sama stopped doing work for Facebook. Sama’s work for ChatGPT ended in March 2022, and Okinyi later moved to Majorel to do customer service work for a European e-commerce company.

On Labour Day this year, both Okinyi and Ibrahim sat alongside about 150 other content moderators for Facebook, TikTok and ChatGPT in a Nairobi hotel, and voted to unionise. Okinyi understands the fight may not be over, but he’s willing to keep pushing with his colleagues to ensure their voices are heard.

“We want to be united because if we’re united, we become strong,” he says.

  • A Christian Science Monitor report