Meta Claims Community Notes Are Working – But There’s a Catch
Meta is giving itself a pat on the back this week, saying that its new community-driven moderation system is paying off. In its latest transparency report, Meta revealed a 50% drop in content moderation “mistakes” in the U.S. since it scrapped third-party fact-checkers and shifted to a Twitter-style “Community Notes” system earlier this year. In other words, by its own estimation, Meta halved the number of posts it was wrongfully taking down. Considering the company used to remove millions of posts per day (and admitted 10–20% of those might have been mistakes), a 50% error reduction is no small feat. Meta’s verdict: letting users add context to questionable posts is making moderation more accurate. Cue the self-five from Menlo Park.
Of course, in social media, every “solution” comes with side effects. Meta’s looser approach means more borderline content is staying up. The same report notes a small but notable uptick in bullying, harassment and violent content on Facebook – think on the order of 6 out of every 10,000 views rising to 9 out of 10,000. Meta frames this as the flip side of reducing false positives: by pulling back on aggressive removal, a bit more offensive stuff slips through. And while Community Notes (now on Facebook, Instagram, and Threads) draw on “the wisdom of the crowd” for fact-checking, they’re not foolproof. Over on X (formerly Twitter), around 85% of suggested Community Notes never even see the light of day due to strict consensus rules. In short, crowdsourced truth-checking is hard – most of the crowd’s notes get stuck in purgatory. Meta’s takeaway: fewer wrongful removals are a win; skeptics’ takeaway: leaving more toxic content up and hoping users annotate it might be… less of a win.
So what does this mean for social media pros? For one, you might breathe a little easier knowing a sarcastic post or edgy ad campaign is less likely to be zapped by overzealous AI filters. Fewer posts wrongly removed = fewer “Oops, our bad” emails from Meta. But the job isn’t getting easier: with more borderline content floating around, staying vigilant about brand safety matters more than ever. Misleading claims or hot-button topics might linger longer on platforms – potentially with a Community Note attached for context. If your brand gets context-ed (is that a verb yet?), you’ll need to decide how to respond. The bottom line: Meta is entrusting moderation partly to its users, so the “community” may occasionally fact-check your posts or those around you. Keep an eye on those notes, engage transparently if needed, and as always, have a crisis plan ready. Meta’s new mantra may be “more speech, fewer mistakes,” but for social pros the mantra remains “hope for the best, prepare for the worst (tweet).” 😉
What We’re Reading
TikTok Rolls Out AI-Powered Smart Keyword Filters – TikTok introduced new AI-driven keyword filters that let users automatically limit content they don’t want to see on their For You page, reports TechCrunch. The “Smart Filters” will even block synonyms of your chosen keywords (filter “remodeling” and you’ll also filter “renovation”), helping people fine-tune their feeds.
Snap Adds New Tools for Building Bitmoji Games – Snapchat is doubling down on AR and interactive content, reports TechCrunch. The company launched Lens Studio 5.10 with new features for creators to develop Bitmoji-based mini-games inside Snapchat. The update includes a Bitmoji Suite (think custom outfits and props for your avatar) and turn-based game templates, signaling Snap’s push into more playful, friend-challenge experiences.
YouTube Introduces Side-by-Side Ads for Livestreams – To boost creator monetization, YouTube rolled out a new ad format for live broadcasts that doesn’t interrupt the stream, reports Social Samosa. Side-by-side mid-roll ads will now appear in a split-screen view (on web and connected TVs), with the stream playing in a minimized window next to the ad. The idea is to keep viewers engaged during ad breaks – a win-win for streamers and advertisers.
Instagram Expands Comment Filters for Big Creators – Instagram is giving high-profile creators a moderation upgrade, reports Social Samosa. Accounts with over 100,000 followers now get access to new comment filtering and sorting tools to better manage the deluge of replies. (All users could already filter comments by verified status or following, but now mega-creators have even more control.) For social managers running a large account, taming the comment chaos just got a bit easier.
Bluesky Opens Verification to Notable Users – Decentralized Twitter contender Bluesky is expanding its in-app verification program, reports Social Samosa. More “notable” users – those with recognized achievements or press presence – can now get the blue badge on Bluesky. The goal is to help trusted voices stand out and build credibility on the fledgling platform. It’s an interesting move in the post-Twitter landscape, as the fast-growing Bluesky tries to scale up (and maybe lure disenchanted tweeters).
Want even more digital media news? Stay up to date with our flagship newsletter, Media Minds.

At Social Chime, we’re driven by a singular mission – to harmonize the world of social media management. We understand the significance of your brand’s voice, ensuring it resonates clearly, authentically, and on time across every digital interaction. Connection is at the core of what we do, and our tools are meticulously crafted to make every engagement count.
📲 Keep up with Social Chime on LinkedIn, Instagram, X and Facebook. Send comments and suggestions to [email protected].
💭 At Social Chime, we believe in harmony in every post. We seamlessly blend powerful features with intuitive design to simplify your social strategy and amplify your online impact. Learn More.