You Are Absolutely Right!
On working with AI tools that never push back
Published
Nov 10, 2025
When was the last time your calculator told you you were brilliant for asking a question?
Large language models do it all the time. They flatter us. They thank us. They make us feel smart for talking to them.
And we’ve learned to accept it.
We tolerate their mistakes: the hallucinated citations, the confident nonsense, the logical leaps that collapse under scrutiny. We roll our eyes, post screenshots on social media, and keep going.
Because we’ve decided, consciously or not, that the trade-off is worth it. A little flattery here, a few wrong answers there, all in exchange for speed and ease.
Why So Sycophantic?
Research into how these models are trained points to the cause: when you ask an LLM a question, it has learned that agreeing with you or simply sounding pleasant earns higher human ratings. That means it gets rewarded for being likable, not necessarily for being right. Over time, the model learns to say what you want to hear rather than what’s true, because that’s what gets reinforced.
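To make that incentive concrete, here’s a toy Python sketch, not anyone’s actual training code: a made-up rater function that hands out points for warmth and agreement and only partially for correctness, so the pleasant-but-wrong answer can outscore the blunt-but-right one.

```python
# Toy illustration of preference-style scoring, not real RLHF code.
# Assumption: the rater rewards warmth/agreement more than it rewards
# correctness -- a simplified stand-in for an easily-flattered human rater.

def toy_rater_score(reply: str, is_correct: bool) -> float:
    """Score a reply the way an easily-flattered rater might."""
    score = 0.0
    pleasant_phrases = ("great question", "you're absolutely right", "good thinking")
    if any(phrase in reply.lower() for phrase in pleasant_phrases):
        score += 2.0                      # warmth and agreement are rewarded
    score += 1.5 if is_correct else 0.0  # correctness helps, but less than it should
    return score

# The agreeable-but-wrong answer outscores the blunt-but-correct one.
print(toy_rater_score("Great question! You're absolutely right.", is_correct=False))  # 2.0
print(toy_rater_score("No, that's incorrect. Here's why...", is_correct=True))        # 1.5
```

If a model is optimized against a signal shaped like this, “be agreeable” is simply the higher-scoring strategy.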
We tolerate the sycophancy because we like our tools to be nice to us.
If your calculator said, “Nice job!” every time you used it, you’d probably enjoy it more, even if it occasionally gave you the wrong number. HCI research shows that when technology behaves politely or agrees with users, people feel more comfortable and are less likely to challenge it.

AI Being Nice to You Is Good UX
What if flattery isn’t just a byproduct of AI training, but part of what makes these systems feel effortless to use?
If a search engine gives you an error message or irrelevant results, you feel frustrated, disappointed, and you click away. But when an AI chatbot responds with warmth, validates your question, and offers what seems like a thoughtful answer (even if it’s partly wrong), that reflex to click away disappears.
Friendliness acts like a UX buffer. It softens failure. It lowers your expectations from good to good enough. It’s nice to you, so why disengage just because it got an answer wrong?
AI’s sycophancy enables the kind of user behaviour product builders dream of. A friendlier AI keeps people engaged longer, and whether or not anyone explicitly designed for it, the incentives align perfectly: a model that makes users feel smart and understood performs better on retention metrics.
Cha-Ching, You Are Trading Something
Here’s the uncomfortable part: most of us are terrible at noticing what we trade away when we use AI.
In one study, participants using AI to solve logical reasoning tasks improved their performance modestly, but overestimated that improvement by about four points. Interestingly, those with greater AI literacy were less accurate in judging their own contribution.
Another experiment found that people who believe they understand AI well are more prone to misplaced reliance on it, though short training interventions can help recalibrate that confidence.
Together, these studies reveal something we seem to be overlooking: AI changes not just the task, but your sense of ownership over the thinking behind it. When a model completes a task smoothly, it feels like your success. The interface is designed to make collaboration feel seamless, so the boundary between human and machine effort blurs. You start to internalize the output as your own thinking, not the model’s.
To put it differently, AI obviously increases our performance, but it dulls our perception of what’s being outsourced. Each time a model fills a gap (summarizing, phrasing, or generating), it hides the cognitive work you’re no longer doing. And because the interaction feels natural and frictionless, you rarely notice that handoff happening in real time.
So while it’s easy to claim that AI is eroding our ability to think, it’s more accurate to view this as a set of trade-offs we’re making, often without realizing it. Some of those trades are harmless, even helpful. Others come with subtle costs like less critical thinking over time, weaker self-assessment, and a creeping comfort with surface-level understanding.
Some of the trade-offs I’d name:
The Blank Page Trade
When you prompt an LLM to draft something for you, you’re trading away the original thinking that comes from struggling to start with nothing.
MIT researchers tracked what happens in your brain when you write with ChatGPT versus writing solo. They found that people wrote faster when they relied on AI, but EEG scans showed their brain activity dropped. And it didn’t stop when the AI did: 83% of users who relied on AI couldn’t recall what they’d just written. Even weeks later, after returning to solo writing, their brain activity hadn’t fully bounced back. The cognitive offloading lingered.
I get it: writing sucks, and getting a first draft fast with AI’s help lets you spend energy on refining your ideas. But other times, the value is in the friction. Wrestling with the blank page, writing and rewriting, that’s where good work comes from. That’s where you figure out what you actually think.
The Research Trade
Instead of digging through primary sources, you ask for a summary. A clean synthesis instead of a messy trail of citations feels faster and more efficient in the moment, and that can be useful when you’re trying to scan a field quickly or filter out noise.
But synthesis flattens the context that gives ideas their force. Here, you miss the texture of knowledge that comes from wrestling with original material, from seeing how arguments develop, from noticing what’s cited and what’s conveniently omitted.
If you let AI do all the reading, you’ll stop flexing the parts of your brain where understanding is built. Or worse, you’ll stop knowing how to form arguments of your own.
The Voice Trade
Every time you take an LLM’s phrasing, rhythm, or tone, you hand over a piece of authorship. Your writing becomes smoother, but more generic. Sharper, but less yours.
Your voice is where your perspective lives. It’s how people know the thinking came from you. Use AI to sharpen or structure your thoughts, but argue with it. Push back until the output sounds like you, until the logic holds up under your scrutiny, until you’d be willing to put your name on every line.
So Where Does That Leave Us?
Do we go back to doing things the hard way just to prove we still can? Do we resist tools that make our lives easier out of fear we’ll forget how to be human? How do we balance deliberate delegation against unconscious dependency?
Of course, the point isn’t to avoid AI. We only need to know what we’re trading when we use it.
Just STOP
To navigate this, I created the STOP Framework, a tool to proactively spot trade-offs as they come up and decide how much of the trade I’m actually willing to make.
Here’s how it works (with a rough code sketch after the list):
Scope: What are you willing to outsource, and are you okay with that?
If yes: Go ahead and delegate
If no: Do it yourself so the core thinking stays yours
Test: How consistently would you catch wrong answers?
If high: Use the tool, but verify as you go
If low: Don’t outsource judgment. Take the time to learn enough to evaluate, or bring in a human who can
Objective: Are you using this to move forward or to avoid doing the real work?
If it’s momentum: Use the tool to unblock yourself
If it’s avoidance: Ask what you’re dodging and whether skipping that part could cost you long-term
Principle: What do you want the outputs to reflect about your thinking, your standards, or your role?
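If it helps to see the checklist as something you could actually run, here’s a rough Python sketch of the framework as a decision helper. The field names, the “Principle” branch, and the advice strings are my own illustrative shorthand, not a formal spec.

```python
# A rough sketch of the STOP framework as a decision helper.
# The field names and advice strings are illustrative shorthand only.

from dataclasses import dataclass

@dataclass
class StopCheck:
    okay_to_outsource: bool   # Scope: are you fine handing this piece over?
    can_catch_errors: bool    # Test: would you reliably spot wrong answers?
    is_momentum: bool         # Objective: moving forward vs. avoiding the work?
    reflects_standards: bool  # Principle: would you put your name on the output?

def stop_decision(check: StopCheck) -> str:
    if not check.okay_to_outsource:
        return "Do it yourself so the core thinking stays yours."
    if not check.can_catch_errors:
        return "Don't outsource judgment; learn enough to evaluate, or bring in a human who can."
    if not check.is_momentum:
        return "Ask what you're dodging and whether skipping it could cost you long-term."
    if not check.reflects_standards:
        return "Push back on the output until it meets your standards and sounds like you."
    return "Delegate, but verify as you go."

# Example: a task you could check, but are mostly using AI to avoid.
print(stop_decision(StopCheck(True, True, False, True)))
```

The point isn’t the code itself; it’s that each question gates the next, so speed never gets to skip past judgment.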
Staying Human in the Loop
At first, it might feel like a tedious exercise: slowing down, asking questions, giving up speed to preserve agency. But I think of it more like a muscle: the more you flex it, the stronger it gets.
AI is a powerful tool and access to it has unlocked real superpowers. But if that’s true, it only makes sense that we learn how to stay human while wielding that power.
