- Prompt Entrepreneur by Kyle Balmer
Prompt Playbook: Big Questions in AI PART 2
Hey Prompt Entrepreneur,
"My AI chatbot seems to understand me better than my spouse does," a client recently told me.
Half-joking I think…
"So is it actually... intelligent? Or is it just really good at faking it?"
As we interact with systems that can write poetry, solve complex problems, and engage in what feel like meaningful conversations, the line between simulation and genuine intelligence begins to blur. Increasingly so as the models get more sophisticated.
When Claude or ChatGPT responds with apparent empathy or insight, are we experiencing something truly comparable to human intelligence, or simply an elaborate illusion created by pattern matching?
Let’s get started:
Summary
Is AI intelligent?
AI intelligence vs. mimicry
Three distinct perspectives on machine intelligence
Why token prediction creates the appearance of understanding
The emergent properties of scale in large language models
How our definition of intelligence keeps shifting
A practical framework for discussing AI capabilities with clients
The Philosophical Quandary
The question of whether AI is "truly intelligent" or merely mimicking intelligence has been debated since the very beginning of the field.
Turing sketched out the first "computer" (as we would understand it) to solve a mathematical problem, and shortly afterwards speculated about whether such machines could ever be intelligent.
Smart guy. Scary smart.
Whether AI is “intelligent” touches on fundamental questions about consciousness, understanding, and what it means to think—questions that have honestly occupied philosophers for millennia and remain (largely) unresolved.
When fielding this question, there are several distinct perspectives you could take:
Position 1: "AI Systems Possess a Different Kind of Intelligence"
Some argue that AI systems like large language models demonstrate genuine intelligence, just of a different kind than human intelligence. These systems can reason through complex problems, identify patterns humans might miss, and generate novel solutions.
In this view, we should broaden our concept of intelligence beyond human cognition. Stop being so anthropocentric - as we are wont to be! Intelligence should be defined functionally—by what a system can accomplish—rather than by how it accomplishes it or whether it has subjective experiences.
Position 2: "AI Is Advanced Mimicry With No Real Understanding"
The opposing view holds that current AI systems are essentially sophisticated pattern-matching machines with no actual understanding of the content they process. In this perspective, what looks like intelligence is actually just statistical correlation at massive scale.
Which…kinda makes sense considering how LLMs work. They are mass probability engines.
Proponents of this view often cite the Chinese Room thought experiment proposed by philosopher John Searle: a person who doesn't understand Chinese follows instructions to manipulate Chinese symbols, producing appropriate responses without comprehension. The system as a whole appears to understand Chinese, but no actual “understanding” exists anywhere within it. The system is pure mimicry.
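To make the thought experiment concrete, here's a toy sketch in Python (the rulebook and phrases are invented for illustration): a program that returns perfectly appropriate replies by matching symbols against a lookup table, with zero understanding anywhere in the system.

```python
# A toy "Chinese Room": the program maps input symbols to output symbols
# using a rulebook it does not understand. The phrases are invented for
# illustration -- the point is that correct-looking output needs no comprehension.
RULEBOOK = {
    "你好": "你好！很高兴见到你。",        # "Hello" -> "Hello! Nice to meet you."
    "你会说中文吗？": "会，我说得很流利。",  # "Do you speak Chinese?" -> "Yes, fluently."
}

def chinese_room(symbols: str) -> str:
    # Pure symbol manipulation: look up the input, return the matching output.
    return RULEBOOK.get(symbols, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好"))  # an appropriate reply, no comprehension involved
```

From outside, the room "speaks Chinese"; inside, it's just table lookups. Searle's argument is that scaling the table up doesn't add understanding.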
Position 3: "The Distinction Between Real Intelligence and Simulation May Not Matter"
My position is more pragmatic. Surprise surprise!
The question itself might be less important than we think. Or even meaningless. Current AI systems are neither conscious minds nor simple mechanical calculators—they occupy a new and interesting space that challenges our existing categories.
The Token Prediction Explanation
At their core, large language models like GPT-4 or Claude are token prediction engines. When you input text, the model predicts what tokens (roughly, pieces of words) are most likely to follow based on patterns it observed in its training data.
This is fundamentally different from how humans think. These systems don't have intentions, beliefs, or desires. They don't "know" what words mean in the way we do—with connections to lived experience, emotions, and physical sensations. They're processing statistical patterns, not meaning as humans understand it.
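As a minimal sketch of the idea (with a tiny hand-written probability table standing in for the billions of learned parameters in a real model), next-token prediction looks something like this:

```python
import random

# Toy next-token model: a hand-coded probability table stands in for the
# statistics a real LLM learns from its training data. The probabilities
# here are invented for illustration.
NEXT_TOKEN_PROBS = {
    ("the", "cat"): {"sat": 0.6, "ran": 0.3, "slept": 0.1},
    ("cat", "sat"): {"on": 0.8, "down": 0.2},
}

def predict_next(context):
    # Pick the next token according to its probability -- no meaning,
    # no intention, just weighted sampling over observed patterns.
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(predict_next(("the", "cat")))  # e.g. "sat"
```

A real model does this over tens of thousands of tokens with context windows of thousands of words, but the core operation is the same: predict the next token from patterns, one step at a time.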
However, this doesn't mean these systems are simple or unimpressive. The scale at which they operate leads to emergent properties that weren't explicitly programmed.
Just as complex behaviours can emerge in natural systems (like how individual ant behaviours create sophisticated colony structures or a flock of starlings forms a murmuration), complex capabilities emerge from these massive statistical models that weren't directly encoded.
Hell, life itself and evolution thereafter led to complex outcomes from simple mechanical processes. Given enough scale (be it geological time or immense data sets and compute) some pretty wild things emerge unbidden.
It's also important to recognise that LLMs represent just one approach to AI—albeit the dominant one at this moment. We've had many types before, which is why knowing the history is helpful!
We’ll also have many types hereafter. The field continues to evolve, and future breakthroughs may come from entirely different architectures or approaches. Many experts believe that truly intelligent systems will ultimately require different approaches or hybrid methods that go beyond the statistical prediction paradigm. We’ll see!
The Moving Goal Post Problem
From emergence comes behaviour that seems very "intelligent". And it keeps happening again and again in the world of AI.
But we humans keep shifting the goal posts. We'll declare something like "computers are great at calculations, sure, but they'll never beat a grandmaster chess player". We set arbitrary boundaries around what counts as human intelligence. Which then get demolished.
Consider this pattern:
1997: "Chess requires unique human intelligence"... until Deep Blue beat Kasparov.
2016: "OK, chess has a finite move set so of course a computer could brute force it. But Go is too intuitive for computers"... until AlphaGo beat Lee Sedol.
2020: “OK but that’s just games in a controlled environment. AI can’t deal with real world tasks like driving which require real-world perception and split-second judgment”… until Waymo launched driverless taxis in Phoenix.
2023: “Driving is mainly mechanical. AI can’t handle complex reasoning or professional tasks”… until GPT-4 started passing bar exams, acing medical boards, and writing better code than junior devs.
This perfectly illustrates what comedian Louis C.K. observed in his famous bit about "Everything is amazing and nobody's happy." He describes being on a plane with Wi-Fi: "I'm watching YouTube clips. It's amazing. I'm on an airplane! And then it breaks down. And they apologise, the Internet's not working. And the guy next to me goes, 'This is bullshit!'" His response captures our relationship with technology perfectly: "How quickly the world owes him something he knew existed only 10 seconds ago."
This one sentence sums up humanity's reaction to AI. How quickly we feel we are owed these superpowers!
We've become so accustomed to technological advances that even the most extraordinary achievements—like passing the Turing test—are quickly taken for granted.
Whenever AI masters a skill previously thought to require human intelligence, we tend to say, "Well, that's not really intelligence." The goal posts for what constitutes "real intelligence" keep moving.
The REAL problem here is that we, as humans, don’t even have an agreed upon definition of “intelligence”.
We do not fully know what's going on in that squishy mass of tissue inside our skulls. How is it capable of such wondrous things? How does a lumpy collection of cells firing electric signals create the Mona Lisa?
We don’t really know.
So how on earth could we have a definition of what the artificial version looks like?
The Turing Test Perspective
Alan Turing anticipated this philosophical quagmire before AI actually existed.
Again, smart lad.
Rather than getting bogged down in definitions of "intelligence," he proposed the Imitation Game (now known as the Turing Test) in 1950 as a practical approach.
The test focuses on functionality rather than internal mechanisms: if a machine can consistently fool humans into thinking it's human through conversation, does it matter if its internal processes differ from human cognition?
Turing wasn't concerned with whether machines were "truly thinking" in the human sense. Because, duh, we don’t know what truly thinking is in a human.
Turing was instead interested in whether they could produce outputs indistinguishable from human outputs—a much more practical question.
Fun fact: in early 2025, researchers from UC San Diego conducted a proper Turing test with GPT-4.5. When prompted to adopt a humanlike persona, GPT-4.5 was judged to be human 73% of the time—significantly more often than the real human participants. After 75 years, the Turing test was finally passed.
And the public reaction? A collective shrug.
We moved the goalposts. Yet again!
A Practical Approach for AI Experts
So how should you respond when clients or audiences ask whether AI is "really intelligent"?
I find it most useful to reframe the question entirely.
Rather than diving into philosophical waters about the nature of consciousness or understanding, focus on what these systems can and cannot do, how they differ from human intelligence, and what that means for practical applications.
What matters isn't whether these systems "think" in the human sense, but how they can complement human thinking, what new capabilities they enable, and what risks or limitations we need to be aware of.
The history of technology shows that the most transformative tools aren't those that perfectly replicate human abilities, but those that extend them in new directions.
The calculator doesn't think like a mathematician, but it dramatically enhances our mathematical capabilities.
Engines don't replicate our muscular movements, but they make physical labour much easier.
Similarly, AI systems don't need to replicate human cognition to be revolutionary.
Argument Summary
When addressing whether AI is truly intelligent or merely mimicking intelligence:
If intelligence is defined by observable capabilities and outputs rather than internal processes or consciousness,
Then the distinction between "real" intelligence and "simulated" intelligence becomes largely philosophical rather than practical.
If large language models are fundamentally prediction engines working with statistical patterns,
Then they lack human-like understanding but can still produce results that require intelligence when humans perform them.
If history shows we continually redefine intelligence once machines master previously "intelligent" tasks,
Then focusing on specific capabilities and limitations is more productive than debating whether AI is "really" intelligent.
Again, this is just one perspective—as AI continues to evolve, our understanding of intelligence itself may transform. It will be argued about by philosophers, cognitive scientists and AI researchers. Fantastic! Let them get to it.
But for the rest of us - the only practical answer here is functional. What can AI do and what can’t it do? How can we use this technology to improve our lives? Focusing here will be far more rewarding.
What's Next?
Tomorrow, we'll tackle another common challenge: "Is AI Overhyped or Truly Transformative?" This question tests your ability to navigate between techno-optimism and realistic assessment of AI's current capabilities and limitations. I'll share how to provide a balanced perspective that acknowledges both the revolutionary potential and the very real constraints of today's AI technologies.
Keep Prompting,
Kyle

When you are ready
AI Entrepreneurship programmes to get you started in AI:
70+ AI Business Courses
✓ Instantly unlock 70+ AI Business courses ✓ Get FUTURE courses for Free ✓ Kyle’s personal Prompt Library ✓ AI Business Starter Pack Course ✓ AI Niche Navigator Course → Get Premium
AI Workshop Kit
Deliver AI Workshops and Presentations to Businesses with my Field Tested AI Workshop Kit → Learn More
AI Authority Accelerator
Do you want to become THE trusted AI Voice in your industry in 30-days? → Learn More
AI Automation Accelerator
Do you want to build your first AI Automation product in 30-days? → Enrol Now
Anything else? Hit reply to this email and let’s chat.
If you feel this — learning how to use AI in entrepreneurship and work — is not for you → Unsubscribe here.