Prompt Playbook: Big Questions in AI PART 5


Hey Prompt Entrepreneur,

Should we slow down AI’s development?

Now, obviously, I’m a little biased. I am pro-AI as you can guess.

That said, there are real concerns. We’ve covered a number of them in this Playbook. It’s inevitable that any technology of this import will have negative side effects.

These very real problem areas lead some people to suggest we should slow down AI’s development.

Ultimately though, I’ll argue, this is a non-problem.

As Stephen Hawking said all the way back in 2017: “The genie is out of the bottle. We need to move forward on artificial‑intelligence development, but we also need to be mindful of its very real dangers.”

We’ve opened Pandora’s Box and there’s no way to close it now. So we had better get serious about living with the demons within.

Let’s get started:

Summary

Should we slow down AI?

  • Why "slowing down" AI is like trying to slow down the internet

  • The geopolitical reality: US vs China in the AI race

  • How competitive pressures cascade from nations to companies to individuals

  • The real risks: economic disruption, social deterioration, and wealth concentration

  • Why education and democratisation matter more than deceleration

Three Perspectives on AI Speed

AI is moving fast. REAL fast.

I cover it pretty much full time and I can’t keep up.

People rightfully ask me whether this is a problem. Is the speed of change a risk?

I typically encounter three distinct viewpoints:

Position 1: "We Need to Pump the Brakes Now"

The safety-first view argues that we're racing toward potential catastrophe:

"We're building increasingly powerful systems we don't fully understand. Without proper safety measures and alignment research, we risk creating AI that could harm humanity—whether through accidents, misuse, or eventually systems that pursue goals misaligned with human values. We need international treaties, research moratoriums, and strict regulations before it's too late."

Sometimes this slips into the Terminator narrative. Which, with all the new robots being developed, isn’t terribly surprising.

Proponents point to nuclear weapons as a precedent—we developed international frameworks to prevent their spread. More or less…Why not do the same for potentially transformative AI?

Position 2: "Full Speed Ahead"

The (Western) acceleration camp dismisses calls for caution:

"Slowing down is both impossible and counterproductive. China won't pause their AI development just because we're nervous. The benefits of AI—in medicine, science, education—are too important to delay. Plus, competition drives innovation. Artificial slowdowns would only ensure authoritarian regimes get to AGI first."

Basically full steam ahead and we’ll work it out as we go along. This view often emphasises that we can't predict risks accurately anyway, so we might as well capture the benefits while maintaining our competitive edge.

And if “we” don’t then someone else will!

Position 3: "The Race Is Already On"

In my opinion the question "Should we slow down AI development?" fundamentally misunderstands our current situation. The technological cat is already out of the bag. The knowledge exists, the papers are published, and multiple nations are racing ahead.

Debating a slowdown isn’t practical anymore.

The Cat Is Out of the Bag

Let me be blunt about something that might be unpopular: the debate about slowing down AI development is largely academic.

Various global bodies may spend the next few years discussing this stuff. And put out some very serious sounding papers at very serious conferences. Probably in Switzerland.

But…it doesn’t matter.

The fundamental technologies are already public. The research papers are published. Open-source models are freely available. The knowledge exists.

As the “Godfather of AI” Geoffrey Hinton said of open sourcing large models: “It’s too late now — the cat’s out of the bag, but it was a crazy move”. He also compared it to making nuclear material freely available. And this is one of the top AI researchers of all time!

Basically … it’s done. Fait accompli.

Asking if we should slow down AI development is like asking if we should slow down the internet in 1995. It's not that the question is wrong—it's that it misunderstands the nature of general-purpose technologies.

These technologies don't have a central off switch. Once you work out how electricity works (and tell other people), that’s it: humanity has electricity. These sorts of technologies permeate everything, transform everything, and ultimately become part of everything.

You can't unring this bell. Sorry!

The Geopolitical Reality

There are two players on the AI game board right now: the US and China.

Sorry, Europe (especially Mistral!!). You’re outta here. Despite all the handwringing in the EU, the reality is that this has become a two-player game at the national level.

For context: I used to live in China, speak (passable!) Mandarin, and studied Chinese history at Oxford. I have also lived in America for 6+ years, including getting my MBA in New York.

Oh and I’m British for those who didn’t know. Or, technically, Bri’ish, because I’m from South London…

I was in Beijing recently, browsing a local bookstore, and something struck me: the bestsellers weren't business books or self-help guides.

They were books about DeepSeek, one of China's (very good!) homegrown AI systems. Not tucked away in the technology section, but prominently displayed as essential reading for everyone from students to business leaders.

These were the first books you saw when you came in. And the books at the counters. They are being pushed. HARD.

China isn't debating whether to slow down. They're accelerating AI adoption across education, business, and daily life with the intensity of a national mission. School kids are getting mandatory lessons on the use of AI.

This creates a prisoner's dilemma for the US. Even if the United States wanted to slow down AI development for safety reasons, doing so would simply cede technological leadership to China.

Any unilateral slowdown by one nation becomes a strategic advantage for the other.
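If you want to see why that logic bites, here’s a minimal sketch of the prisoner’s dilemma framing in Python. The payoff numbers are purely illustrative assumptions of mine, not estimates from any study: whatever the other side does, “race” comes out as the better individual choice, even though mutual caution would leave both sides better off.

```python
# A toy prisoner's dilemma for the AI race. Payoff numbers are illustrative
# assumptions only (higher = better outcome for that nation).
payoffs = {
    # (US choice, China choice): (US payoff, China payoff)
    ("slow", "slow"): (3, 3),   # coordinated caution
    ("slow", "race"): (0, 5),   # unilateral slowdown cedes leadership
    ("race", "slow"): (5, 0),   # the mirror image
    ("race", "race"): (1, 1),   # the race we actually get
}

def us_best_response(china_choice):
    """Return the US choice that maximises its payoff, given China's choice."""
    return max(["slow", "race"], key=lambda us: payoffs[(us, china_choice)][0])

for china_choice in ["slow", "race"]:
    print(f"If China chooses '{china_choice}', US best response: '{us_best_response(china_choice)}'")
# Prints 'race' both times: racing dominates, so nobody slows down unilaterally.
```

The same calculation holds with the roles reversed, which is why a unilateral pause never looks rational to either player.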

That means at the top level there will not be slowdown. It’s out of the question.

The Cascade Effect

This same dynamic repeats itself at every level of society:

At the Corporate Level: Imagine a bank decides to slow down its AI adoption for ethical reasons. Noble intention, I respect it. But while they're being cautious, their competitors are using AI to offer instant loan approvals, personalised financial advice, and fraud detection that actually works. Guess which bank loses market share?

Companies that hesitate don't just fall behind—they risk becoming irrelevant.

And that’s not even mentioning the new AI-first entrants who will come and eat the incumbents’ lunch.

At the Individual Level: The same pressure exists for professionals. I have colleagues who boast about not using AI tools, preferring to "do things the old-fashioned way."

That's sorta fine for now. But when their competitors are producing higher quality work in a fraction of the time, how long can that position last? Yeah…my guess is that they’ll quietly just start using AI (without the same fanfare as not using it!).

This isn't about replacing human creativity or judgment. It's about augmenting human capabilities. Those who refuse to adapt risk becoming the equivalent of accountants who insisted on using paper ledgers after spreadsheets were invented.

Or trying to do (checks notes) anything without the internet. Is it doable? Yes, absolutely! But it’s very hard to stay competitive by following such an artisanal route.

The Real Dangers We Face

Now, am I saying we should proceed without caution? Absolutely not.

The risks are real and substantial:

Economic Disruption: We discussed earlier in this series how AI won't just take jobs—it will fundamentally restructure entire industries. We're looking at economic transformation on a scale not seen since the Industrial Revolution. Maybe more? We don’t know - this is new.

Wealth Concentration: AI development is currently controlled by a handful of massive corporations with government backing. We're watching the greatest consolidation of power and wealth in human history unfold in real time. The whole concern about wealth gaps and the 1% over the last decade may look quaint compared to what we’re about to see. Imagine what happens when the first company hits AGI. That is what the AI companies are racing for right now, because the power, control, and wealth generated from that point on would be unprecedented.

Social Deterioration: I've already seen concerning trends—people preferring AI companions to human relationships, skills atrophying as AI handles more cognitive tasks, attention spans shrinking as AI delivers instant answers to every question. I get several requests for sponsored posts each week from AI girlfriend companies - they have good budgets because (no surprise) people are paying for this. I politely decline.

Geopolitical Instability: AI-powered warfare could allow technologically advanced nations to project force with minimal human cost (on their side). This asymmetry could destabilise international relations in unprecedented ways. If I can send in drones and robots rather than my citizens (it’s losing citizens that gets me booted from office), then the calculus for warmongering changes dramatically. The barriers to entry for war decrease.

These are civilisation-level challenges. Quite a few of them. All at once. We should absolutely be concerned!

The Path Forward: Democratisation Over Deceleration

But here's where my perspective might surprise you: the answer isn't to slow down. It's to speed up—but in a different direction.

Right now, AI development is concentrated among a few giant corporations and nation-states. Every breakthrough increases their power. Every advancement widens the gap between the AI-haves and AI-have-nots.

The antidote isn't deceleration—it's democratisation.

This is why I spend my time teaching entrepreneurs and small business owners how to leverage AI. Not because I think everyone needs to become a machine learning engineer, but because distributed knowledge is our best defence against concentrated power.

When millions of people understand AI, can build with AI, and can create value with AI, we create a counterbalance to corporate and state control.

Is this possible? I don’t know. It’s a big ask. But we need to at least try. We can’t just cede total control of these technologies to billionaires.

This is ultimately what I tell clients and audience members who have these concerns. Basically, it’s too late I’m afraid. So all we can do is carve out our own space in this new world.

Wrapping Up

OK! We covered some big topics in this Playbook.

Let's recap the journey we've taken:

Part 1: "Will AI Take Our Jobs?" - We explored how AI will transform employment through gradual erosion rather than sudden replacement, particularly affecting entry-level positions.

Part 2: "Is AI Really Intelligent or Just Mimicry?" - We tackled the philosophical debate about machine consciousness, concluding that the practical capabilities matter more than abstract definitions of intelligence.

Part 3: "Is AI Overhyped or Truly Transformative?" - We examined how AI can be both overhyped in specific applications and genuinely transformative as a general-purpose technology, much like the internet revolution.

Part 4: "What About AI's Environmental Impact?" - We addressed environmental concerns, finding that while energy consumption is real, it needs context alongside other digital activities and ongoing efficiency improvements.

Part 5: "Should We Slow Down AI Development?" - Today we've confronted the governance challenge, recognising that the path forward lies in democratisation and education rather than futile attempts at deceleration.

As AI experts, consultants, and educators, our role isn't just to understand these issues but to help others navigate them thoughtfully.

This is (I hope) what I’ve been able to do for you with this Playbook.

You of course don’t need to agree with me. In fact, I’d be surprised if you did! Instead, the purpose of this series was to give you starting points for thinking about these topics and articulating your own arguments. If yours are completely counter to mine, then all the better!

Keep Prompting,

Kyle

When you are ready

AI Entrepreneurship programmes to get you started in AI:

70+ AI Business Courses
✓ Instantly unlock 70+ AI Business courses
✓ Get FUTURE courses for Free
✓ Kyle’s personal Prompt Library
✓ AI Business Starter Pack Course
✓ AI Niche Navigator Course
Get Premium

AI Workshop Kit
Deliver AI Workshops and Presentations to Businesses with my Field Tested AI Workshop Kit  Learn More

AI Authority Accelerator 
Do you want to become THE trusted AI Voice in your industry in 30-days?  Learn More

AI Automation Accelerator
Do you want to build your first AI Automation product in 30-days?  Enrol Now

Anything else? Hit reply to this email and let’s chat.

If you feel this — learning how to use AI in entrepreneurship and work — is not for you → Unsubscribe here.