AI with Kyle Daily Update 014
Why AI is devastating to education + Google's fatal misstep
The skinny on what's happening in AI - straight from the previous live session:
Highlights
🏆 Google Hits Gold Medal Week After OpenAI
Google's advanced Gemini with DeepThink mode officially achieved gold medal standard at the International Mathematical Olympiad, scoring 35 out of 42 points.
This comes just one week after an internal OpenAI model did the same. Like OpenAI's attempt, this was done without external tools - no Python, no calculators, pure reasoning through natural language.
Kyle's take: This is a four-minute mile moment. Once one person breaks through, everyone floods in behind them.
What's telling is Google were probably ready to pull the trigger weeks ago, but OpenAI beat them to the announcement.
You can almost picture the weekend rush at Google: "We need to publish our results NOW!" The fascinating bit is we're seeing language models push into mathematics through reasoning alone - something that would have been mental to imagine just a few years back.
Source: Google DeepMind blog
💥 Google Knew LaMDA Would Kill Their Business
Mustafa Suleyman, Google's former VP of AI, revealed in a recent podcast interview that Google had LaMDA - essentially ChatGPT before ChatGPT - but didn't release it because they knew it would be an existential threat to their search business.
Eighty percent of Google's revenue (at 75% profit margin) comes from search advertising, and they weren't willing to cannibalise that cash cow.
Kyle's take: This is the classic Kodak situation all over again. They invented digital photography but sat on it because it threatened their film business. Google makes £200 billion from search with 75% profit margins.
Even if OpenAI's £10 billion revenue doubled, they'd still be a tenth the size of Google's search empire.
The problem is website owners are already seeing traffic drops from AI overviews, and we're moving towards agents talking to agents rather than humans browsing websites. Google's trying to have their cake and eat it, but I reckon we'll be asking young people "What's Google?" in twenty years.
Source: Semiconductors Insight coverage
📝 The Education Crisis: Why AI Is Stealing Our Ability to Think
A brilliant tweet thread highlighted the central problem with higher education in the AI age: "Writing is not a second thing that happens after thinking. The act of writing IS thinking." Universities can't set take-home assignments any more because students will just use ChatGPT. But critical thinking can't be taught without proper writing assignments - researching, drafting, editing, and revising over weeks.
"This is the central problem with higher education in the age of AI. We can't require students to do take-home writing assignments (e.g. term papers) any more, because most will cheat and have ChatGPT or Claude or Grok do the writing. But we can't teach critical thinking…"
— Geoffrey Miller (@primalpoly), Jul 21, 2025
Kyle's take: This absolutely nails the difference between using AI efficiently versus using it to be lazy.
Those of us who learned to think and write manually are in a brilliant position - we can leverage AI to amplify skills we already have.
But if you're eighteen right now and going into university, you could rob yourself of ever learning to think properly. The process IS the point - not the final essay. I wrote 6,000 words every week at Oxford for three years, and that's what taught me to think critically. AI can make lazy people extremely lazy, or efficient people extremely efficient. The choice is yours…
Member Question from Uzi: "How would websites handle AI agents and bots?"
Kyle's response: Right now they're mainly blocking them by updating their robots.txt files to disallow agent access. But this is just a stopgap measure. Eventually we'll have AIs talking to AIs without needing websites as an interface. Look at China - loads of businesses don't have websites, they have mini apps on WeChat instead. That's probably where we're heading - skipping over websites to whatever comes next.
This question was discussed at [16:13] during the live session.
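The robots.txt approach Kyle mentions can be sketched in a few lines. The user-agent tokens below (GPTBot for OpenAI, ClaudeBot for Anthropic, Google-Extended for Google's AI training) are ones the vendors document publicly, but the list of crawlers changes constantly, so treat this as illustrative rather than exhaustive:

```text
# Block documented AI crawlers site-wide
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

# Regular crawlers remain allowed
User-agent: *
Allow: /
```

Worth noting: robots.txt is purely advisory. Well-behaved crawlers honour it, but nothing technically enforces it, which is part of why this is only a stopgap.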
Member Question from Jack: "I can't comprehend the societal change once AGI is achieved. Struggle to imagine it going well."
Kyle's response: By definition, we can't comprehend artificial superintelligence - it's like a chicken trying to understand humans. The problem with slowing down is it needs to be unilateral. If you slow down individually, you get left behind. If companies slow down, competitors won't. If countries slow down, other nations won't. We're locked in a new arms race between America and China, and neither will blink first. My pragmatic view? We can't stop it, so we need to stay on top of it and work out governance structures fast.
This question was discussed at [32:16] during the live session.
Want to submit a question? Drop it below this video and I'll cover it in a future live.
Want the full unfiltered discussion? Join me for the daily AI news live stream where we dig into the stories and you can ask questions directly.