Meta’s Oversight Board has always been an experiment, an example of how external, independent oversight of social platform moderation decisions could provide a more equitable way forward for social media apps.
Yet, four years on, it doesn't seem like anyone else is going to take up the cause, despite the Oversight Board having influenced various Meta policies and outcomes, improving the company's systems for dealing with common issues and concerns.
Which again underlines why social platform moderation is so hard: without uniform rules in place, to which all platforms must adhere, the process will remain a mishmash of approaches, with varying levels of effect.
Today, under the cloud of recent funding cuts, the Oversight Board has published its annual report, which shows how its decisions have impacted Meta policies, and what it’s been able to achieve, on a small scale, in the social moderation space.
As per the Board:
“2023 was a year of impact and innovation for the Board. Our recommendations continued to improve how people experience Meta’s platforms and, by publishing more decisions in new formats, we tackled more hard questions of content moderation than ever before. From protest slogans in Iran to criticism of gender-based violence, our decisions continued to protect important voices on Facebook and Instagram.”
Indeed, according to the Oversight Board, it issued more than 50 decisions in 2023, and it overturned Meta's original ruling in around 90% of the cases it heard.
Which, at Meta’s scale, really isn’t that much. But still, it’s something, and those decisions have had an impact on Meta’s broader policies.
Even so, the Board can only operate at a small scale, and demand for reviews of Meta's moderation decisions remains high.
As the report details, the Board received almost 400k appeals in 2023, but was only able to issue 53 decisions. That's not a direct measure of impact, as such, because the Board aims to hear cases that have broader relevance, so that any changes made as a result reach beyond the case in isolation. A single policy change, for example, could see thousands of these appeals resolved, or addressed, without the Board having to hear them individually.
But even so, almost 400k appeals, four years in, show that there's clear demand for an umpire or arbitrator of some sort to hear appeals against platform moderation decisions.
Which is the whole point of the Oversight Board project: it's meant to show regulators that an external appeals process is needed, in order to take these decisions out of the hands of Meta's management. Yet no one seems willing to push this case. Lawmakers and regulators continue to hold committee hearings and reviews, but there's been no significant push to create a broader, more universal ruling body for digital platform decisions.
That still seems like the better, more equitable path, though you would also effectively need bodies of this type in every region, in order to cater for differing legal regulations and approaches.
That seems unlikely, so while the Oversight Board has seemingly proven its use case, and the value of independent review of moderation calls and processes, it's unlikely to shift governments toward appointing similar oversight groups of their own.
And with the Board losing funding, and scaling back, it seems that eventually it will be gone as well, leaving these decisions solely in the hands of platform management. Which everyone will complain about, and CEOs will continue to be hauled before Congress every six months or so to answer for their failures.
Yet the solution is seemingly too complex, or too risky, to implement. So we'll just rely on fines and public shaming to keep the platforms in line, which, traditionally, hasn't been effective.
And in the fast-evolving age of AI, this seems like an even less workable situation. But again, despite the Oversight Board showing the way, no one seems to be taking up the mantle as yet.
You can check out the Oversight Board’s full 2023 report here.