May 6, 2025

Are We Outsourcing Our Brains to AI?

Look, I'll be the first to admit it—I've asked AI to write an outline for this very podcast. And guess what? I barely glanced at it before hitting record. Why? Because like most of us, I'm trying to figure out where AI fits into my life without turning my brain into complete mush.

The False Promise of Artificial Wisdom

My co-host Jason recently had a fascinating conversation with his soon-to-be son-in-law about AI, which sparked today's discussion. The kid's only 23, but he was immediately skeptical of AI tools—not because they're new and scary, but because he already distrusts the internet as an information source.

And honestly? He's not wrong.

When AI spits back a perfectly formatted paragraph answering my question, I get that little dopamine hit of "problem solved!" But what's actually happening is far more complex. The machine isn't thinking—it's inferring. It's looking at patterns of information and constructing what looks like a competent answer.

Sound familiar? That's because it's exactly what we humans do all the time.

The Speed Problem

The real difference isn't in the process—it's in the speed. As Jason pointed out during our conversation:

"It gets you to that belief point faster. It takes a bunch of data, puts it together for you to form an idea or a belief faster than you would have."

This acceleration isn't inherently evil. We've been using technology to speed up knowledge acquisition forever. From encyclopedias to Google to Blinkist (which I use to digest books in 15 minutes), we're constantly trying to shortcut our way to understanding.

But there's something uniquely persuasive about AI responses that makes them dangerous. They sound authoritative. They feel conversational. They seem trustworthy—even when they're completely full of shit.

Ideas vs. Beliefs: The Critical Distinction

During our podcast, Jason made a point that stuck with me:

"If you can't change your mind about an idea, then it's a belief, period. If you can't reasonably argue the other side, then you are arguing for your belief, not an idea."

This distinction matters enormously in the age of AI. When information comes at us so quickly and convincingly, we risk jumping straight from ignorance to belief, bypassing the crucial stage of "this is just an idea I'm considering."

According to a recent Stanford study, people are 31% more likely to accept information as factual when it comes from an AI system versus a human source—despite knowing the AI has no inherent understanding of truth.

How To Stay Smart in an AI World

So how do we use these tools without becoming intellectual zombies? After speaking with Jason and reflecting on my own experience, I've landed on a few practices that help:

  1. Ask better questions. As Jason put it: "If you think critically before you ask your question, you're going to get a better answer. If you ask a stupid question, you're probably going to get a stupid response."
  2. Treat AI outputs as starting points, not conclusions. When I use Blinkist, I'm not thinking "great, now I know everything about this topic!" I'm asking "did I hear anything here that makes me want to dig deeper?"
  3. Maintain intellectual humility. The most dangerous position is thinking you've got it all figured out. The smartest stance is "I think it's like this, but I don't actually fucking know."
  4. Recognize your confirmation bias. We're hardwired to seek information that supports what we already believe. AI makes it dangerously easy to get exactly the answers we want to hear.

The Real Danger Isn't the AI

The real danger isn't ChatGPT or any other AI tool. It's our own laziness—our unwillingness to engage in the hard work of critical thinking.

As Neil deGrasse Tyson has warned, AI could potentially be "the end of the internet as we know it" because deepfakes and AI-generated content will make it increasingly difficult to discern truth from fiction.

But the solution isn't abandoning these tools—it's getting better at using them.

AI is just another form of intelligence to factor into your worldview—sometimes brilliant, sometimes utterly wrong, always requiring your thoughtful engagement.

What do you think? Are you using AI tools critically, or letting them do your thinking for you? I'd love to hear your thoughts in the comments below or over at www.thefitmess.com.