Content Filtering Is Broken. Here's What Has to Change.
March 28, 2026 · @Filter Team
There's a problem with the way browsers handle content today.
Not the content itself — the architecture. The fundamental mechanism by which a browser decides what to show you, and what tools exist to change that decision.
Ad blockers have been the dominant answer for over a decade. They're good at what they do. But they were designed to solve a specific problem — commercial advertising — and that constraint is baked into how they work. The result is a tool that's increasingly mismatched for what people actually need.
What Ad Blockers Actually Do
Most content blockers work at one of two layers:
Network layer: Block requests before they leave the browser. This stops ads from loading at all, which is good for performance. But it requires maintaining enormous blocklists of known ad server domains, and it can't do anything about content that's served from the same domain as the page itself.
CSS layer: Hide elements after the page loads by injecting stylesheets that set display: none on matched elements. This works for layout-injected ads but leaves blank spaces where the content was, and the content still loaded — it's just invisible.
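The CSS-layer approach can be sketched in a few lines. This is an illustrative sketch, not code from any real blocker, and the selectors are made-up examples rather than entries from an actual filter list:

```javascript
// Sketch of CSS-layer ("cosmetic") filtering: turn a list of selectors
// into a stylesheet that hides every match.
function buildHidingStylesheet(selectors) {
  // One rule per selector; !important wins over inline styles set by the page.
  return selectors
    .map((sel) => `${sel} { display: none !important; }`)
    .join("\n");
}

// In an extension content script, the result would be injected roughly as:
//   const style = document.createElement("style");
//   style.textContent = buildHidingStylesheet([".ad-banner"]);
//   document.head.append(style);
```

Note that nothing here stops the network request: the ad still loads, still executes, and still occupies a node in the DOM. Only its paint is suppressed.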
Both approaches share the same limitation: they're reactive. The content has already been fetched, parsed, and partially rendered by the time the filter acts.
The Blank Space Problem
If you've ever used a content blocker and noticed empty boxes where something used to be, you've encountered the worst user-experience artifact in the space. Hiding an element with display: none does collapse its own box, but ads often sit inside fixed-size placeholder containers that the filter rules don't match. The page keeps that space reserved for content that will never arrive, and the blank box stays.
This matters more than it might seem. Visual artifacts signal to the site that blocking is happening. They disrupt reading flow. And they train users to associate content filtering with a degraded experience — not a clean one.
DOM Mutation Interception: A Different Model
The DOM's MutationObserver API was built for a different purpose: tracking changes to the document for accessibility tools and reactive frameworks. But it turns out to be exactly the right primitive for content filtering done correctly.
Here's how it works:
When a browser parses HTML, it builds the DOM tree incrementally. Each time new nodes are added to the document, registered MutationObserver callbacks fire as a microtask: after the mutation itself, but before the browser's next layout and paint. This is the window: before layout, before paint, before the element has ever occupied space on screen.
@Filter operates in this window. When a new element appears in the DOM, we evaluate it against the user's filter rules. If it matches, we remove it before layout begins. The page never allocates space for it. There's no blank box. There's no reflow artifact. The content simply never existed, as far as the rendered page is concerned.
We call this pre-layout suppression.
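A minimal sketch of the idea looks like this. The `matchesFilter` check is a hypothetical stand-in for a real rule engine, not @Filter's actual implementation, and the keyword "sponsored" is just an example:

```javascript
// Decide whether a newly added node matches the user's filter rules.
// Stand-in logic: substring match against the node's text content.
function matchesFilter(node, keywords) {
  if (node.nodeType !== 1) return false; // element nodes only
  const text = node.textContent || "";
  return keywords.some((kw) => text.includes(kw));
}

// Walk the mutation records and detach matching nodes. Because the observer
// callback runs before the next layout pass, a removed node never gets a box.
function suppressAdded(mutations, keywords) {
  const removed = [];
  for (const m of mutations) {
    for (const node of m.addedNodes) {
      if (matchesFilter(node, keywords)) {
        node.remove();
        removed.push(node);
      }
    }
  }
  return removed;
}

// Browser-only wiring; guarded so the pure functions above stay testable.
if (typeof MutationObserver !== "undefined") {
  const observer = new MutationObserver((muts) =>
    suppressAdded(muts, ["sponsored"])
  );
  observer.observe(document.documentElement, {
    childList: true,
    subtree: true,
  });
}
```

The key property is in the timing: the removal happens inside the microtask checkpoint, so the element is gone before the browser ever computes a layout that includes it.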
Layout Reconciliation
Pre-layout suppression handles most cases cleanly. But modern pages are complicated. Some content is injected asynchronously, after initial paint. Some layout is calculated from element dimensions that change when content is removed. Some pages have complex grid or flex containers that respond differently to missing nodes than to hidden ones.
For these cases, we built what we call the Layout Reconciliation Module (LRM). After suppression, the LRM analyzes the surrounding layout context and makes targeted adjustments to ensure the resulting page looks like it was never supposed to have that content in the first place. Not like the content was removed — like it was never there.
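One reconciliation heuristic can be sketched as follows. This is a deliberately simple illustration of the idea, not the actual LRM: after a node is removed, walk up and drop wrapper containers that are now empty, so a grid or flex parent doesn't keep a track reserved for a child that no longer exists:

```javascript
// Collapse now-empty ancestor wrappers after a suppression, so containers
// that existed only to hold the removed content don't leave gaps behind.
function collapseEmptyAncestors(parent) {
  while (
    parent &&
    parent.children.length === 0 &&
    (parent.textContent || "").trim() === ""
  ) {
    const next = parent.parentElement;
    parent.remove(); // wrapper held nothing else; drop it too
    parent = next;   // keep walking up until a non-empty ancestor
  }
}
```

The real module has to handle much more than this (dimensions derived from removed content, asynchronous injection after first paint), but the goal is the same: leave a layout that looks like the content was never there.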
This is the difference between blocking and filtering. Blocking breaks pages. Filtering improves them.
Why "Any Content" Matters
Ad blockers are purpose-built for advertisements. Their blocklists, their community support infrastructure, their entire model is oriented around a specific content type from a specific class of sources.
But users don't only want to filter ads. They want to filter:
- Political content during election cycles
- Sports scores when they're watching the game later
- Financial news that triggers anxiety without being actionable
- Real estate prices in markets they can't afford
- Job postings that don't match their criteria
- Product recommendations from brands they've had bad experiences with
None of this is addressable by an ad blocker, because none of it is an ad. It's just content — content that the user would prefer not to see.
@Filter treats all content as equally filterable. The keyword engine doesn't care whether the matched element is a banner ad or a news headline or a product listing. If the user has expressed a preference to suppress it, it gets suppressed — cleanly, at the DOM level, before layout.
The Data This Generates
Here's the part that wasn't obvious when we started building this.
When you have a large population of users independently deciding what content to suppress — with no social dynamics, no algorithmic pressure, no trending signals influencing their choices — you have something genuinely new: a direct signal of what people, left to their own devices, actually choose to remove from their experience.
This isn't sentiment analysis. Sentiment analysis asks "what do people feel about X?" This is behavioral data: "what are people independently choosing not to see?"
The difference matters. What you say you feel and what you actually filter are different. People are more honest with their browser than with a survey.
@Map™ aggregates this signal at a geographic level, using differential privacy to protect individual behavior while revealing collective patterns. The result is a visualization of content rejection — what topics, keywords, and narratives people are independently choosing to remove from their internet experience, by region, in near-real time.
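As a toy illustration of the privacy step, consider adding Laplace noise to per-region suppression counts before they are aggregated. The epsilon value, region names, and counts below are made-up parameters for the sketch, not @Map's actual mechanism:

```javascript
// Draw one sample from a Laplace distribution with the given scale,
// using inverse-transform sampling on a uniform variate.
function laplaceSample(scale) {
  const u = Math.random() - 0.5;
  return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
}

// Add calibrated noise to each region's count. A counting query has
// sensitivity 1 (one user changes any count by at most 1), so a scale of
// 1/epsilon gives epsilon-differential privacy for that query.
function privatizeCounts(counts, epsilon) {
  const scale = 1 / epsilon;
  const noisy = {};
  for (const [region, n] of Object.entries(counts)) {
    noisy[region] = n + laplaceSample(scale);
  }
  return noisy;
}
```

With noise like this, the aggregate map still shows which topics a region is filtering heavily, but no individual user's choices can be confidently inferred from the published counts.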
What We Built
@Filter is the product of trying to answer a simple question: if you could build a content filtering system from scratch, knowing everything we know now about the DOM, about user behavior, about privacy-preserving data aggregation — what would you build?
The answer we arrived at:
- Intercept at the mutation level, before layout, for clean removal
- Reconcile the layout to eliminate artifacts
- Keep all processing local, so no browsing data ever leaves the device
- Aggregate the signal at a population level, with differential privacy, to create something useful for understanding collective content preferences
Two patent applications cover the core techniques. The extension is free. The data layer is what we're building a business around.
We're not trying to be a better ad blocker. We're trying to build the right foundation for content filtering — one that works on any content, on any site, without the visual artifacts and privacy compromises that have become normalized in this space.
The old model is broken. We think we know what to replace it with.
→ Download @Filter — available for Chrome, Firefox, and Edge.