AI is Under-Hyped
If you think this is just another tech cycle, think again. Most people are missing the magnitude of this moment. We’re crossing into a new era shaped by non-human intelligence.
Eric Schmidt’s recent TED2025 talk, “The Arrival of Non-Human Intelligence Is a Very Big Deal,” left me both inspired and unsettled. In his conversation with Bilawal Sidhu, Eric makes a compelling case: AI is not just hyped, it is actually underhyped. We are witnessing systems that can perform high-order reasoning, create original content, code autonomously, and navigate complex toolchains all on their own.
When I saw Devin code and deploy an app with zero intervention, I felt my stomach drop. Not because I was afraid, but because I knew this was the moment everything changed. It reminded me of my first exchange with ChatGPT. That day, the hairs on my body stood on end.
This isn’t some far-off sci-fi scenario: it’s happening now. The implications for how we work, live, and relate to the world around us are enormous.
“We are entering a world built by non-human intelligence. We don’t know what that world looks like yet.” — Eric Schmidt
This is a nonlinear leap into an entirely new space. One that will reshape economics, power, and even our understanding of what it means to be human.
And yet, we still reach for metaphors that are far too small: “It’s like the internet,” “It’s like the printing press,” “It’s like Henry Ford’s horses vs cars.” These analogies are comforting but dangerously naive. This isn’t about one industry or domain. This is general-purpose intelligence, capable of interacting across language, vision, tools, code, and strategy.
What makes AI different is that it’s not an invention, it’s an inventor. It’s not a tool, it’s a collaborator (and sometimes a competitor). We are entering an age of non-human intelligence: systems that can generate, reason, optimise, and execute with a degree of independence.
Mustafa Suleyman, co-founder of DeepMind and author of The Coming Wave (I loved this book, btw), argues that this represents a “phase shift”: a moment where we pass into a qualitatively new kind of civilisation. Not just digitised, but autonomised. Not just faster, but fundamentally different in its structure and capabilities.
“AI has been climbing the ladder of cognitive abilities for decades, and it now looks set to reach human-level performance across a very wide range of tasks within the next three years.”
― Mustafa Suleyman, The Coming Wave: AI, Power, and Our Future
We are entering a new system. This is the phase shift.
A Cultural Conversation Waiting to Begin
The technical conversation around AI is booming. But the cultural, civic, and ethical conversation is trailing far behind.
Thinkers like Tea Uglow offer essential insight. Her blog, Enough About That, asks deeper questions about what it means to be human in a world shaped by algorithms. Tea reminds us that we can’t outsource meaning-making to machines. We need art, language, rituals, and shared values to help us understand this strange new world.
And yet, most public discourse is stuck on a binary track: AI is either the next productivity miracle, or an existential threat to humanity. Both extremes miss the point. What’s most at stake right now is how we live alongside these systems and what we choose to preserve, let go of, redesign, or reclaim.
This is the time for conversation not only about the staggering capabilities of AI, but about intention and purpose: the kind of world we want to live in.
Some Grounding Facts
It’s easy to get swept up in abstract discussion, so here are a few stark signals from the data:
📉 Job impact: According to Goldman Sachs (2023), generative AI could automate tasks equivalent to 300 million full-time jobs globally. In the U.S. and Europe, two-thirds of jobs are exposed to some degree of automation, and roughly 19% of jobs are considered highly exposed.
🧠 Human-level performance: The Stanford AI Index (2024) shows that large language models are now surpassing average human performance in tasks like bar exams, medical diagnostics, and legal analysis. GPT-4 and Claude 3 Opus outperform most junior professionals in knowledge-based tasks.
🛠️ Agentic software: Tools like Devin by Cognition, Adept, and Cognosys are not just passively generating responses. They are planning, executing, and collaborating across tools. They can browse the internet, manipulate data, and write and deploy code autonomously.
💸 Investment velocity: AI funding remains white-hot. In 2022, global private investment in AI hit $91.9 billion, more than doubling in two years (Stanford AI Index). This isn’t hype; this is an industry reshaping itself in real time.
All this suggests we’re moving from assisted intelligence to autonomous systems, at speeds most institutions are structurally unprepared for.
Opportunity and Urgency
But this isn’t a post about panic. You know I’m not anti-AI. I’m excited by the endless possibilities: more personalised learning, distributed healthcare, new forms of creativity, climate science acceleration, citizen empowerment, and even potential shifts in global equity.
But we can’t realise those outcomes without a shared awareness of what’s at stake.
The question is no longer whether AI will transform our world, but how, for whom, and with what consequences. That is the conversation we should be having right now.
“The next 10 years will be defined by how we choose to respond to the arrival of non-human intelligence.” — Eric Schmidt
We need to be part of shaping that response.
So Where Do We Go From Here?
We are living in a time of radical transition, one that is technological, cultural, and ecological. It’s tempting to turn away, to stay busy, or to let the experts handle it.
But we can’t afford to.
If we want a future shaped by human values, we need to engage now. That means:
Inviting artists, ethicists, educators, policymakers, and young people into the AI conversation.
Creating design frameworks that prioritise context, emotion, narrative, and systems thinking.
Embedding slowness, reflection, and care into how we design with intelligence. We need to design not only for speed or scale, but for meaning, purpose, and wellbeing.
Asking not just what can AI do, but what should it do, and what should we protect from it.
For founders and product leaders, this isn’t the time for incrementalism. AI-native tools don’t only accelerate workflows, they rewrite the assumptions behind them. If your product assumes humans are the primary actors in the loop, that assumption may not hold true for long.
“Over time, then, the implications of these technologies will push humanity to navigate a path between the poles of catastrophe and dystopia. This is the essential dilemma of our age.”
― Mustafa Suleyman, The Coming Wave: AI, Power, and Our Future
I believe something extraordinary is possible. But only if we recognise the magnitude of what’s unfolding. This is not a niche moment for technologists; it’s a turning point for humanity. And no, it’s not like when we got the internet.