On February 18, 2025, the Dutch College for Human Rights (College voor de Rechten van de Mens) issued Opinion 2025-17, finding that Meta Platforms Ireland Ltd. engaged in indirect discrimination based on gender when displaying job advertisements to Facebook users in the Netherlands. The assessment was sought by the Clara Wichmann Foundation ("CWF"), a foundation advocating for women's rights and gender equality in the Netherlands, in collaboration with Global Witness, a research and campaign organization based in England. The finding follows a similar outcome in the United States, where the Justice Department (DOJ) sued Facebook over allegations that the platform's delivery of housing ads discriminated on the basis of race, sex, and disability. Facebook agreed to a 2022 settlement that required the company to develop a new system to address the discrimination in its algorithm for delivering housing ads to users. In a move likely to create more complexity and conflict in rectifying the findings of both the DOJ and the Dutch College for Human Rights, Meta has since announced it would end its internal Diversity, Equity, and Inclusion program and relax its rules on what counts as hate speech and discrimination on the platform, including expressions related to gender.
Opinion 2025-17 Under the Lens
The overarching question posed by the CWF and Global Witness was whether Meta Ireland, the entity responsible for offering Facebook and its advertising services in Europe, engaged in indirect discrimination based on gender and age when displaying job advertisements to Facebook users in the Netherlands, with a specific focus on whether the advertising algorithm discriminated unlawfully. To test this, between 2022 and 2023, Global Witness posted a range of real-life job ads on Facebook in several countries, including France and the Netherlands. Across these countries, Facebook's algorithm showed 90.9% of ads for roles stereotypically filled by men, such as mechanic, to men, while 78.6% of ads for roles typically filled by women, such as preschool teacher, were shown to women.
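The skew Global Witness reported can be expressed as a simple delivery-share calculation over per-ad impression breakdowns. The sketch below is illustrative only: the impression counts are hypothetical stand-ins chosen to reproduce the reported percentages, not the audit's actual data.

```python
# Illustrative audit arithmetic: gender skew in ad delivery.
# The impression counts below are hypothetical; the real audit worked
# from the per-ad delivery breakdowns Facebook reports to advertisers.

def delivery_skew(impressions_men: int, impressions_women: int) -> float:
    """Share of impressions shown to men, as a percentage."""
    total = impressions_men + impressions_women
    return 100 * impressions_men / total

# Hypothetical per-ad impression data: (role, shown to men, shown to women)
ads = [
    ("mechanic",          9090, 910),   # stereotypically male role
    ("preschool teacher", 2140, 7860),  # stereotypically female role
]

for role, men, women in ads:
    pct_men = delivery_skew(men, women)
    print(f"{role:>18}: {pct_men:.1f}% of impressions to men, "
          f"{100 - pct_men:.1f}% to women")
```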
This research revealed a clear gender bias in Meta's advertising algorithm, which determines to whom advertisements are shown, and Meta was unable to rebut the resulting presumption of discrimination. Because the algorithm learns from users' click behavior, a biased user profile can emerge and, if left unmonitored, reinforce stereotyping. Meta Ireland acknowledged that gender data can be part of the algorithm and did not refute that this data point could promote stereotyping. The College determined that this constitutes prohibited indirect gender-based discrimination: a seemingly neutral practice that disproportionately affects individuals of a particular gender.
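The reinforcement dynamic the College describes can be illustrated with a toy simulation. The code below is a deliberate simplification, not Meta's actual system: a greedy delivery loop allocates each impression to whichever gender group has the higher estimated click-through rate, then updates its estimate from the observed outcome, so a small initial difference in clicks compounds into a lopsided delivery split.

```python
import random

random.seed(0)

# Toy click-feedback loop (not Meta's system): each impression goes to
# the group with the higher estimated click-through rate (CTR), and the
# estimate is updated from what the algorithm observes.

true_ctr = {"men": 0.050, "women": 0.045}   # small underlying difference
clicks = {"men": 1, "women": 1}              # smoothed click counters
shown  = {"men": 1, "women": 1}              # smoothed impression counters

for _ in range(10_000):
    # Greedy allocation: show the ad to the group that looks better so far.
    group = max(shown, key=lambda g: clicks[g] / shown[g])
    shown[group] += 1
    if random.random() < true_ctr[group]:
        clicks[group] += 1

total = shown["men"] + shown["women"]
print(f"impressions to men:   {shown['men'] / total:.1%}")
print(f"impressions to women: {shown['women'] / total:.1%}")
# With no monitoring or exploration, whichever group happens to look
# better early on absorbs most impressions: the feedback loop turns a
# half-point CTR gap into a heavily skewed delivery split.
```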
Meta Policy Changes
Both the CWF and Global Witness are considering potential next steps and are open to collaborating with Meta on developing better, fairer, and less biased processes and systems. That kind of dialogue between Meta Ireland, the CWF, and Global Witness would be consistent with the commitment Facebook made in its 2022 settlement with the DOJ in the United States, referenced above. When approached for comment after the DOJ settlement, a Meta spokesperson said: "We have applied targeting restrictions to advertisers when setting up campaigns for employment, as well as housing and credit ads, and we offer transparency about these ads in our Ad Library. We do not allow advertisers to target these ads based on gender. We continue to work with stakeholders and experts across academia, human rights groups and other disciplines on the best ways to study and address algorithmic fairness."
Recently, however, Meta took a far more sweeping, and arguably contradictory, approach. On Tuesday, January 7, 2025, Meta announced changes to how it moderates content, including doing away with professional fact-checking and quietly updating its hateful conduct policy to permit new types of content on the platform, effective immediately. Joel Kaplan, Meta's Chief Global Affairs Officer, published an extensive blog post in the Meta newsroom giving more context for these substantial policy changes. Kaplan's rationale hinges on expanding and protecting free speech, in a similar vein to arguments and assertions made by Elon Musk, explaining that "as well-intentioned as Meta's efforts have been to fact-check and curb hate speech, they have expanded over time to the point where we are making too many mistakes, frustrating our users and too often getting in the way of the free expression we set out to enable." Kaplan further outlined that Meta is removing a number of restrictions on topics like immigration, gender identity and gender that are the subject of frequent political discourse and debate. His rationale for these changes is that "it is not right that things can be said on TV or the floor of Congress, but not on our platforms."
Implications/Possible Outcomes
Meta's move to loosen the reins (or let go of them entirely, depending on one's perspective) comes at a politically charged time, when some are concerned that too much deregulation could expose citizens to the discrimination and hate speech that agencies, politicians, academics, and activists have spent years advocating against and working to eradicate. Meta maintains that the protections and processes that remain are sufficient to guard against high-risk violations while allowing users more free expression. Kaplan explained that Meta will still use automated systems to scan for "illegal and high-severity violations, like terrorism, child sexual exploitation, drugs, fraud and scams," that it has added extra staff to this work, and that in more cases it now requires multiple reviewers to reach a determination before taking something down. In addition to more staff, Meta has started using AI large language models (LLMs) to provide a second opinion on some content before taking enforcement action. While an interesting strategy, it could defeat its own purpose: if the LLM, like the algorithms at issue in the cases outlined above, absorbs certain biases, it may reinforce content it should be taking down and merely mirror the judgments it is supposed to check. Some might even argue that the ad-delivery algorithms are simply doing their job, showing users the ads they are most likely to want to see.
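The failure mode described above, a second opinion that merely echoes the first, can be sketched in a few lines. Everything here is hypothetical: the classifier and LLM functions are stand-ins, not real Meta systems or vendor APIs, and they deliberately share the same decision rule to show why correlated reviewers add little independent scrutiny.

```python
# Hypothetical sketch of a two-step enforcement pipeline of the kind
# Kaplan describes: an automated scanner flags content, and an LLM
# "second opinion" must agree before anything is taken down.

from dataclasses import dataclass

@dataclass
class Verdict:
    take_down: bool
    reason: str

def classifier_flags(post: str) -> bool:
    """Stand-in for an automated high-severity violation scanner."""
    return any(term in post.lower() for term in ("fraud", "scam"))

def llm_second_opinion(post: str) -> bool:
    """Stand-in for an LLM reviewer. Because it was given the same
    decision rule (i.e., the same biases) as the classifier, it echoes
    the first verdict instead of independently checking it."""
    return any(term in post.lower() for term in ("fraud", "scam"))

def review(post: str) -> Verdict:
    if not classifier_flags(post):
        return Verdict(False, "not flagged")
    if not llm_second_opinion(post):
        return Verdict(False, "flagged, but second opinion disagreed")
    return Verdict(True, "both reviewers agreed")

print(review("limited-time crypto scam, send funds now"))
print(review("job ad: mechanic wanted"))
```

Because both stand-ins apply the same rule, the second opinion never overturns the first; a genuinely useful check would require a reviewer trained on different signals from the model it is auditing.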
The overarching concern of groups like the CWF and Global Witness, however, is that these changes will exacerbate the biases we see in society, narrow opportunities for users, and frustrate progress toward equity in the workplace and society at large. The efforts of NGOs and activists in cases like this one from the Netherlands are aimed at countering the sometimes insidious effects of unregulated Big Tech. The hope is that these algorithms can be leveraged to bring progress and equity to historically marginalized groups rather than entrenching racial stereotypes and regressive beliefs about people's professional and personal capabilities based on gender.