Bluesky users mass-block new AI assistant Attie

Open block data shows 125,000 blocks within days; decentralized moderation doubles as a referendum on platform direction

Image: The top 5 most blocked accounts on Bluesky, according to open-source data collected by ClearSky, as of March 30, 2025, at 12 PM ET. Image Credits: ClearSky
Amanda Silberling, techcrunch.com

Bluesky’s new AI assistant account, Attie, has become the second most blocked account on the platform—behind only U.S. Vice President J.D. Vance—within days of its launch, according to open-source block data cited by TechCrunch. Roughly 125,000 users have blocked Attie while the account has around 1,500 followers. The tool was introduced at Bluesky’s Atmosphere conference as a way for users to design their own algorithms and build custom feeds within the AT Protocol ecosystem.

The numbers illustrate a moderation dynamic that differs from the policy-first model of X and Meta. On Bluesky, blocks and shared blocklists function as a kind of crowd-sourced enforcement layer: users can rapidly quarantine an account or class of accounts without waiting for a centralized decision. When the target is a tool account launched by the platform itself, the blocks become a measurable referendum on product direction.
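This works because, on the AT Protocol, blocks are public records of type `app.bsky.graph.block` stored in each user's repository, which is what lets services like ClearSky tally them in the open. A minimal sketch of that aggregation, using simplified sample records and a made-up DID (`did:plc:attie` is illustrative, not a real identifier):

```python
from collections import Counter

# Simplified stand-ins for public app.bsky.graph.block records;
# real records live in each user's AT Protocol repo and carry
# the blocked account's DID in the "subject" field.
sample_records = [
    {"$type": "app.bsky.graph.block", "subject": "did:plc:attie"},
    {"$type": "app.bsky.graph.block", "subject": "did:plc:attie"},
    {"$type": "app.bsky.graph.block", "subject": "did:plc:other"},
]

def block_counts(records):
    """Tally how many times each DID appears as a block subject."""
    return Counter(
        r["subject"]
        for r in records
        if r.get("$type") == "app.bsky.graph.block"
    )

counts = block_counts(sample_records)
print(counts.most_common(1))  # → [('did:plc:attie', 2)]
```

An aggregator crawling the network does essentially this at scale, which is why a figure like 125,000 blocks is observable to anyone rather than a private platform metric.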

Attie also lands in a specific cultural moment. Bluesky grew to tens of millions of accounts by marketing itself as an alternative to X after Elon Musk’s overhaul, and many users came for a quieter social graph with fewer of the AI-driven features that now dominate mainstream platforms. TechCrunch notes that critics argue Bluesky still lacks basic functionality—like sending images via DMs—while shipping an AI product that many users did not ask for.

From Bluesky’s side, the pitch is that AI should help users control what they see rather than push engagement-maximizing ranking systems. But the platform’s own block metrics show the trust problem: even an “assistive” AI account can be treated as a spam vector, a content-laundering tool, or simply an unwanted symbol of the broader AI buildout.

The same mechanism that lets a network scale moderation through user action can also harden into informal censorship. Once blocklists become default infrastructure, they can be used to suppress unpopular speech as efficiently as they suppress bots—without the transparency requirements that come with formal policy.

Attie’s account is now blocked by more people than follow it, which is a data point no product team can A/B test away.