X is experimenting with a new way to provide increased transparency around account restrictions: an alert that will let users know if and when their account has been impacted for falling foul of its sensitive content rules.
As you can see in this example, posted by X designer Andrea Conway, X may soon alert users in their Notifications tab when any restriction on their content has been actioned.
As per the wording in the above example (which Conway notes may change before rollout):
“We have found that your account potentially contains sensitive media – such as graphic, violent, nudity, sexual behavior, hateful symbols, or other sensitive content. We allow sensitive media on X, as long as it doesn’t break our sensitive media policy. If you want to share sensitive content, mark your media as sensitive.”
The explainer then outlines what will happen to your account as a result, including potential reach restrictions on all of your posts.
So it’s essentially an alert for potential shadowbans, which have become a key focus for many free speech enthusiasts who feel that they’re not getting the reach that their posts deserve.
Various investigations have found that shadowbans, in varying forms, have been implemented by most social media platforms at some stage, with users generally left unaware that the reach of their content has been restricted due to violations of rules or guidelines. X owner Elon Musk has vowed to provide more transparency on such actions, and with X’s new “Freedom of Speech, Not Reach” approach leaning more on reach restrictions than content removals, this new alert is the next step in facilitating that oversight and keeping users informed of any restrictive actions.
It’s a good move, one that will help keep users aware of any rulings on their content. But it could also lead to more disputes, which the X team will have to manage as a result of letting people know that their content, and speech, has been restricted.
This is an interesting gray area in Musk’s approach, and in his interpretation of ‘free speech’ as a concept. In Musk’s view, X can meet both local requirements and ad industry expectations around moderation by limiting the reach of offensive posts, while also keeping users happy, because it won’t require removing posts outright, in line with his free speech principles.
But restricting the reach of posts will be seen by many free speech advocates as just as oppressive as removing them outright, while those who earn money from their X posts, via its new ad revenue share program, will undoubtedly be upset when any of their posts are restricted.
It’s also interesting to note that this warning prompt, in its current form at least, points to account-based penalties, as opposed to content-specific restrictions, which could also raise the hackles of many users. Being notified of such a penalty would have a big impact on overall post performance, as it’s not a single warning on a single post (which X implemented back in April), but a hit to the whole profile.
It’s a good update, but it could come with further challenges for the X team as users are informed of potential penalties. X may need to create more specific notifiers to streamline the system, which, in fairness, it is likely also working on as part of its broader transparency effort.