
Kenya’s High Court has ruled that Meta, Facebook’s parent company, can be sued in Kenya for allegedly promoting content that contributed to ethnic violence in neighbouring Ethiopia.
The case, which stems from the 2020-2022 Tigray conflict in Ethiopia, could set a crucial precedent for global companies and their responsibility for content moderation.
Why It Matters
The plaintiffs argue that Facebook’s recommendation algorithms played a significant role in amplifying violent posts, which heightened tensions during the Tigray conflict.
They aim to hold Meta accountable for its part in the violence and demand changes in the platform’s moderation practices.
This case challenges Meta’s previous stance that local courts have no jurisdiction over global platforms like Facebook, highlighting the growing need for accountability in the digital age.
What’s Next
The plaintiffs are calling for Meta to create a restitution fund for victims of violence linked to the platform and to alter Facebook’s algorithm to curb the promotion of hate speech.
This case could have far-reaching effects, potentially forcing Meta to reconsider its content moderation policies, especially in conflict zones. It may also prompt other countries to demand greater responsibility from tech companies regarding harmful content.
Taking You Back
This case is not the first instance of controversy surrounding Meta in Kenya.
In 2023, 260 content moderators employed by a Meta contractor were laid off, allegedly in retaliation for their efforts to unionize for better working conditions.
A Kenyan court subsequently ruled that Meta could be sued over the dismissals.
The Way Forward
Looking ahead, Emmanuel Adinkra, President of the Ghana Internet Safety Foundation, emphasized the need for tech companies to adopt a hybrid system for content moderation.
Speaking on BBC News, he noted the limitations of automated systems in detecting inappropriate content and argued that human moderators who understand local contexts remain essential.
“The move away from human fact-checking by major tech companies has exposed gaps in content governance,” he said.
“I advocate for a hybrid model combining trained human moderators who can understand the local context with community-based tools to enhance reach, speed, and public trust,” he added.