Where Majority Rule Breaks Down and What Comes Next
Part 1 of a series on Bridging (finding agreement across disagreement) and how decisions get made online.
In recent years, content moderation across platforms has converged on a familiar pattern.
Detect -> Decide -> Enforce.
For clear violations, the loop collapses: AI decides and enforcement is automatic.
But for everything else -- the gray area -- this system struggles. That's where intent, context, and disagreement matter more than rules.
Unsurprisingly, the hardest decisions are the non-obvious ones.
- Is this harmful, or just offensive?
- Is this spam, or just aggressive?
- Is this misleading, or just incomplete?
When there are no clear answers, true judgment calls must be made.
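To make the gap concrete, here's a minimal sketch of that loop in Python. The classifier, labels, and thresholds are hypothetical stand-ins, but the shape is typical: high-confidence scores resolve automatically, and everything in between falls through to judgment.

```python
# A minimal sketch of the detect -> decide -> enforce loop.
# The classifier, labels, and thresholds are hypothetical;
# real systems tune these per policy and per surface.

def route(item, classifier, enforce_at=0.95, allow_at=0.05):
    """Route one piece of content based on model confidence."""
    p_violation = classifier(item)  # probability the item violates policy

    if p_violation >= enforce_at:
        return "enforce"        # clear violation: automatic action
    if p_violation <= allow_at:
        return "allow"          # clearly fine: no action
    return "judgment_call"      # the gray area: needs human judgment

# Toy usage with stand-in scores instead of a real model.
for item, score in [("obvious spam", 0.99), ("edgy joke", 0.55)]:
    print(item, "->", route(item, lambda _: score))
```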
Historically, we’ve handled these close calls in two ways:
- Centralized authority (internal moderation teams, e.g. YouTube)
- Majority voting (community moderation, e.g. Reddit)
One concentrates power; the other distributes it. Both feel reasonable, but break in the same place.
Centralized systems struggle because they don't scale well. Queues grow, costs rise, and decisions start to feel inconsistent. From the outside, they can feel opaque and slow, ultimately eroding trust.
On the flip side, majority systems work well when communities are aligned or questions are objective. But in contested environments, they start to break down. The loudest, or fastest, group dominates -- and agreement gets mistaken for accuracy.
In both cases, the close calls fall into a gap. Too subjective for AI, too expensive for experts, too contested for majority vote.
Over the past few years, a third approach has started to take shape.
X introduced Community Notes, and Meta and TikTok have followed with similar efforts. Not perfect, but meaningfully different.
Instead of asking what most people think, Community Notes asks a harder question: can people who usually disagree still agree on this?
While most systems reward alignment, bridging changes what counts as agreement. In a typical voting system, every vote counts the same. Get enough people to agree -- even if they all think the same way -- and something passes.
Majority Voting: decisions by dominance
- Outcome is determined by the largest group
- Decisions are based on aggregate counts
- Minority views do not affect the final outcome
The decision reflects momentum, not agreement.

Bridging: decisions that hold across disagreement
- Different perspectives coexist within one community
- Outcomes require agreement across groups
- Cross-group support increases confidence
The decision reflects breadth of agreement across groups.
Bridging doesn’t work like that. Agreement only counts if it comes from people who normally don’t agree. So if one group piles on, it doesn’t move the outcome. And if a note only appeals to one side, it’s unlikely to clear the bar.
The system is effectively asking: does this hold up across disagreement -- or just within it?
That constraint changes what gets written: not partisan takes or vague summaries, but context that people on different sides can independently judge to be fair.
Over time, that means the system surfaces the least polarizing, most broadly acceptable context for a given claim or topic -- not just the most popular content overall.
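Here's a toy sketch of that constraint in Python. It is not X's actual algorithm -- Community Notes infers viewpoint from rating history via matrix factorization rather than using explicit group labels -- but it captures the core idea: approval has to clear the bar inside every group, not just overall. The group labels, data, and thresholds are invented for illustration.

```python
# A simplified illustration of bridging, not X's production algorithm.
# Raters are pre-assigned to viewpoint groups; a note passes only if
# every group independently finds it helpful.

def majority_passes(ratings, threshold=0.5):
    """Classic vote: pass if most raters approve, whoever they are."""
    approvals = [r["helpful"] for r in ratings]
    return sum(approvals) / len(approvals) > threshold

def bridged_passes(ratings, threshold=0.5):
    """Bridging: pass only if approval clears the bar within EACH group."""
    by_group = {}
    for r in ratings:
        by_group.setdefault(r["group"], []).append(r["helpful"])
    return all(sum(v) / len(v) > threshold for v in by_group.values())

# One side piling on moves the majority, but not the bridged outcome.
ratings = (
    [{"group": "A", "helpful": True}] * 80     # group A loves the note
    + [{"group": "B", "helpful": False}] * 15  # group B mostly rejects it
    + [{"group": "B", "helpful": True}] * 5
)
print(majority_passes(ratings))  # True  -- volume wins
print(bridged_passes(ratings))   # False -- agreement doesn't bridge
```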
This idea started with misinformation -- where claims can be debated, sources cited, and context added. It worked better than many expected: when notes appear, they have been shown to reduce the spread of misleading posts and even prompt authors to revise or delete them.
But misinformation isn’t the main problem most platforms face. The real challenge is messier: close calls on harmful content, spam, and disputes. Not true or false, just unclear.
What if bridging applied here too?
Not just asking, “Is this claim correct?” But asking, “What is this content and what’s the right call?”
And if that’s the question, the process has to change too. Instead of a moderator deciding, or a majority voting, the goal becomes clear: can people with different perspectives land on the same outcome?
This shifts how decisions are made. Instead of coming from authority or volume, they emerge from convergence across perspectives.
One way to apply this is a model like ours at Open Notes.
AI handles the obvious cases, while gray areas are surfaced, evaluated, and resolved when agreement bridges sufficiently across perspectives.
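As a rough sketch under the same assumptions as the earlier snippets, the two pieces compose like this. `classify` and `bridged_review` are hypothetical stand-ins for illustration, not Open Notes' actual interfaces.

```python
# A sketch of how the pieces compose. `classify` and `bridged_review`
# are hypothetical stand-ins, not Open Notes' actual interfaces.

def moderate(item, classify, bridged_review,
             enforce_at=0.95, allow_at=0.05):
    """Triage one item: AI for the obvious, bridging for the gray."""
    p = classify(item)                 # probability of a violation
    if p >= enforce_at:
        return "remove"                # obvious violation: automatic
    if p <= allow_at:
        return "keep"                  # obviously fine: automatic
    # Gray area: act only if agreement bridges viewpoint groups.
    # Defaulting to "keep" when it doesn't is a policy choice.
    return "remove" if bridged_review(item) else "keep"
```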
In practice, this drives three key effects:
- It makes brigading harder. You can’t just overwhelm the system; you have to convince it.
- It can produce outcomes that feel more legitimate, because different sides arrive at them.
- It should meaningfully reduce costs, replacing expensive expert queues.
It also shifts the tone of moderation -- from censorship to alignment. Moderation isn’t just about enforcement anymore. It’s about governance: how decisions get made, who participates, and what feels legitimate.
We’ve spent years optimizing detection. Now we’re starting to rethink decisions -- not just who decides, but what constitutes agreement in the first place.
Bridging is not a perfect answer, but it feels directionally right because it assumes something most systems ignore.
Disagreement isn’t the problem; it’s a given. On any platform at scale, people will see the same content differently. Most systems treat that as something to suppress or resolve quickly.
But it’s actually the raw material for better decisions -- if you know how to structure it.
Moderation shouldn’t just remove content. It should help communities resolve their disagreements.
That’s certainly harder, but it’s also the point.
In future posts, we’ll go deeper on how bridging works in practice: how contributors are selected, how disagreement is measured, and how these systems behave in real-world moderation scenarios.