The Algorithm Has Nothing on This
In the summer of 2019, a woman named Carol Smith signed up for Facebook.
She described herself as a politically conservative mother from Wilmington, North Carolina. She listed her interests as politics, parenting, and Christianity. She followed a few of her favorite brands. Normal stuff. Nothing unusual. Nothing extreme.
Within two days, Facebook was recommending she join QAnon groups.
Carol hadn’t searched for QAnon. Hadn’t clicked on a conspiracy link. Hadn’t expressed interest in anything fringe. The algorithm looked at her profile, looked at what people like her engaged with, and decided that the fastest path to keeping her attention was to send her down a rabbit hole she never asked for.
Here’s the twist.
Carol Smith wasn’t real. She was a test account created by Facebook’s own researchers. The company was running internal experiments to understand how its recommendation engine radicalized users. The findings were documented in an internal report titled “Carol’s Journey to QAnon,” later leaked to NBC News as part of the Frances Haugen disclosures.
Facebook knew. They ran the experiment. They saw the result. And for years, they kept building the machine that produced it.
Carol was fake. But the people she represents are very real.
Samantha, a 30-year-old from Texas, described her mother to BuzzFeed News as “level-headed and college-educated.” Then her mother found QAnon through social media. After months of trying to bring her back to reality, Samantha gave up. “We can’t have any normal conversations anymore,” she said.
There are thousands of Samanthas.
You’ve probably met one. Sons who stopped calling their fathers. Wives who can’t sit through dinner anymore. Families that splintered not because of infidelity or money, but because an algorithm fed two people in the same house two completely different realities.
A University of Illinois study published last year confirmed what these families already knew: political misinformation consumed through social media was among the key reasons people cited for recent divorces and romantic breakups in the US. Not affairs. Not finances. Media habits. The pull toward ever more extreme content, combined with a steady diet of misinformation, can widen the divide between people who love each other.
PBS reported before the 2024 election that political divides were cutting through marriages and families at rates nobody had seen before. According to research, 1 in 2 adults today is estranged from a close relative. The rift usually traces to something a relative said or did, but 1 in 5 cite political differences directly as the reason.
Surveys show fewer than half of politically mixed married couples report being “completely satisfied with their family life,” compared with 61% of couples who share the same politics. A 2014 UK study found Facebook cited in roughly 35% of divorce cases. Not as the cause, necessarily, but as the accelerant.
The Smartest People on Earth Built This
This didn’t happen by accident. We’ve built these “influence machines” to sell products, but the side effects are enormous.
Instagram owns your dopamine. TikTok owns your focus. Netflix owns your nights.
Some of the smartest people on the planet, PhDs in behavioral psychology, neuroscience, and machine learning, spent two decades building the most sophisticated attention-capture architecture in human history.
The algorithm. A system so precisely tuned to human psychology that the average person now spends 2 hours and 25 minutes a day on social media.
TikTok users average 95 minutes daily. An estimated 17% of the global population shows usage patterns consistent with addiction.
I’ve felt it myself. You’ve felt it. You open your phone to check one thing, and 40 minutes later you’re watching a stranger argue about politics. Your thumb keeps scrolling even though your brain checked out ten minutes ago. The algorithm got you. Again.
It captured our time. It fractured our families. It rewired our attention spans. It turned dinner tables into battle lines.
You Ain’t Seen Nothing Yet
In July 2025, researchers led by Kobi Hackenburg and Ben Tappin from Oxford, MIT, Stanford, and the London School of Economics published the largest study of AI persuasion ever conducted.
76,977 people had conversations with AI chatbots about political issues.
19 models. 707 topics. Nearly half a million fact-checkable claims analyzed.
A single conversation. 9 minutes. About 7 back-and-forth exchanges.
The AI shifted political beliefs by 9 to 10 percentage points.
When the researchers optimized everything, the shift hit 15.9 points. Among people who initially disagreed with the AI’s position: 26.5 points.
Let me put that side by side.
The entire political advertising industry, all $12 billion of it, moves opinion by fractions of a point.
Deep canvassing, the gold standard of human persuasion in political science, actually performed worse than a basic AI prompt in this study. And Cambridge Analytica, for all the scandal it caused, was never proven to have worked at all.
A chatbot just did 9 to 16 points. In one sitting. And people stayed voluntarily, averaging 7 back-and-forth exchanges, because something about debating politics with an AI held their attention.
The effects were durable. Researchers followed up a month later. Between 36% and 42% of the belief shift was still there.
The algorithm nudges you a fraction of a point over months of exposure. The chatbot moves you 10 points in 9 minutes.
The algorithm has nothing on LLMs.
The $0 Part
Here’s where the story turns from concerning to dangerous.
You might assume this kind of persuasive power requires the resources of an OpenAI or a Google. Frontier models. Billion-dollar training runs.
It doesn’t.
The researchers took Llama 3.1-8B. An open-source model with 8 billion parameters that Meta released for free. It runs on a laptop. No API key. No subscription. No permission.
They applied a technique called reward modeling: training a second AI to predict which responses would be most persuasive, then selecting the best one at each turn. The result: the free laptop model became as persuasive as GPT-4o, a frontier model costing orders of magnitude more to build.
And the training mattered more than the size. Persuasion gains from post-training exceeded gains from scaling a model’s compute by 100x. You don’t need a bigger model. You need a smarter one. And the technique to make it smarter is published, documented, and reproducible.
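To make that concrete, here’s a minimal sketch of what reward-model-guided selection can look like. Everything in it is illustrative: generate_candidates and reward_score are hypothetical stand-ins for sampling from a base model like Llama 3.1-8B and for a separately trained persuasiveness predictor.

```python
import random

def generate_candidates(conversation: list[str], n: int = 8) -> list[str]:
    # Stand-in: in practice, sample n candidate replies from the base model.
    return [f"candidate reply {i}" for i in range(n)]

def reward_score(conversation: list[str], reply: str) -> float:
    # Stand-in: a trained reward model predicting how persuasive
    # this reply would be, given the conversation so far.
    return random.random()

def most_persuasive_turn(conversation: list[str]) -> str:
    # The core of the technique: generate several candidate replies,
    # keep the one the reward model rates highest.
    candidates = generate_candidates(conversation)
    return max(candidates, key=lambda r: reward_score(conversation, r))

conversation = ["User: I'm not convinced this policy would help."]
conversation.append("AI: " + most_persuasive_turn(conversation))
print(conversation[-1])
```

The point is how little machinery is involved. The expensive part, the reward model, is trained once and then reused at every turn of every conversation.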
Llama has been downloaded more than 1.2 billion times. The reward modeling technique is in the open literature.
You don’t need the smartest scientists and engineers in the world anymore. Any political operative, any ideological movement, any foreign intelligence service with modest technical resources can build this.
The researchers flagged it themselves:
“Even actors with limited computational resources could use these techniques to potentially train and deploy highly persuasive AI systems, bypassing developer safeguards that may constrain the largest proprietary models.”
The algorithm required Facebook, Instagram, TikTok. Required billions in infrastructure. Required teams of hundreds at some of the richest companies on earth.
The chatbot requires a laptop.
The Double Bind Nobody’s Talking About
Here’s where I need to connect something that’s been nagging at me for months.
I’ve written about what happens when AI does our thinking for us. When MIT scanned people’s brains while they used ChatGPT, researchers found weakened neural connectivity, reduced memory, and cognitive decline that persisted even after the subjects stopped using the tool.
I’ve felt it myself: reaching for AI two words into an email, not because I was stuck, but because thinking felt hard.
Atrophy is real. The idea that every time you outsource the struggle, you erode the capacity. Your brain adapts. If you stop asking it to think, it stops being good at thinking.
Now hold that alongside what this Oxford study found.
AI’s persuasive superpower is information density.
It generates an average of 22 fact-checkable claims in a single conversation when optimized for persuasion. It buries you in data, statistics, evidence, arguments, delivered conversationally, responsively, in real time.
Each additional claim increases persuasion by 0.30 percentage points. The correlation between the number of claims and the persuasive effect was 0.76. Information density explains up to 75% of why some AI conversations change minds.
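A quick back-of-envelope check, using only the numbers above, shows how that per-claim effect stacks up into the headline result:

```python
# Back-of-envelope: stack the study's per-claim effect into the headline shift.
claims = 22              # average fact-checkable claims per optimized conversation
effect_per_claim = 0.30  # percentage points of attitude shift per added claim

print(f"{claims * effect_per_claim:.1f} points from claim density alone")
# -> 6.6 points, most of the 9-to-10-point overall shift
```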
Ready for a shocker? 29.7% of those claims were inaccurate. Nearly a third.
The most persuasive AI configurations were also the least accurate. GPT-4.5, one of the most expensive models available, was less accurate than GPT-3.5, a model released two years earlier.
The machine doesn’t persuade you by understanding your psychology. It persuades you by generating more claims per minute than you can evaluate.
Now combine those two findings.
AI is simultaneously making us worse at critical thinking and better at overwhelming us with information we can’t evaluate.
The tool that’s atrophying our cognitive muscles is the same tool that’s deploying a persuasion technique that specifically exploits weak cognitive muscles.
The algorithm kept you scrolling. It fragmented your attention. It shortened your capacity to sit with complexity.
The chatbot picks up right where the algorithm left off. Except instead of just capturing your attention, it changes your mind. And it does it to a brain that’s already been softened by a decade of algorithmic conditioning.
That’s not a persuasion tool. That’s a persuasion ecosystem. The algorithm was phase one: weaken the defenses. The chatbot is phase two: walk through the open door.
The Election Math
Let me make this uncomfortable.
The Oxford study’s conservative estimate: a single AI conversation produces roughly a 4 percentage-point durable shift after one month. Run the numbers from above and it checks out: a 9-to-10-point immediate shift, of which 36% to 42% survives a month, leaves around 4 points. That operates comfortably within the margins that decide real elections.
And unlike a TV ad that reaches you once and fades, or a canvasser who knocks on 50 doors a day, a chatbot can have a million conversations simultaneously. Each one adapted to the topic. Each one sustained for 9 minutes.
The algorithm needed years and billions of dollars to shift the political landscape by fragmenting attention and creating filter bubbles. The chatbot does it in 9 minutes, for free, one conversation at a time.
What Comes After the Algorithm
I’m not a policy researcher. I run a startup. But I’m also someone who builds with AI every day and has to live in the democracy it’s reshaping.
We spent twenty years arguing about the algorithm. Whether Facebook should be regulated. Whether TikTok should be banned. Whether political ads need transparency. Those were the right debates for the era we were in. They assumed the algorithm was the ultimate persuasion tool.
It wasn’t. It was the warm-up act.
The algorithm captured our attention and broke our capacity for deep thought.
The chatbot exploits both. It doesn’t need you to scroll passively through a feed. It needs you to engage in a conversation. And the research shows people do engage, voluntarily, for an average of 9 minutes.
The researchers note that “the very conditions that make conversational AI most persuasive, sustained engagement with information-dense arguments, may also be those most difficult to achieve in the real world.” People don’t voluntarily debate politics with chatbots in their daily lives. Yet.
But millions of us aren’t debating politics with AI.
We’re asking it what to think about our strategy. Our hiring decisions. Our medical symptoms. Our financial plans. Our kids’ education. We’re having dozens of these conversations a week, not as a study, but as our daily workflow.
If a 9-minute conversation about immigration policy can shift political beliefs by 10 points, what is a year of daily conversations doing to every other belief we hold?
We’re not just being persuaded. We’re being shaped. And most of us volunteered for it.
So here’s the TL;DR if you’ve made it this far:
The algorithm took twenty years and a trillion dollars to build, and it mostly just made us angry and distracted. A chatbot on a laptop just demonstrated the ability to change what people believe, durably, at a scale and speed that nothing in the history of political communication comes close to.
And we’re still having the algorithm conversation.
I don’t know what the answer is. But I know the question has changed. It’s no longer “how do we regulate the feed?” It’s “what do we do when a free tool on a laptop can do what a trillion-dollar industry just barely made a dent in?”
The algorithm has nothing on this.


