Let’s be honest. We didn’t see it coming. Or worse—we did, and we ignored it. Not because we lacked intelligence or insight, but because convenience and novelty were far more seductive. There we were, marvelling at the magical rectangles in our pockets, even as the warning lights started to blink red.
From 2007 to 2015, we handed our kids iPhones and congratulated ourselves on being tech-savvy parents. We live-tweeted revolutions, filtered our lives through Instagram, and nodded along to the gospel of “permissionless innovation.” Meanwhile, sociologists, psychologists, and a few lone technologists were shouting from the sidelines like Cassandras at a rave. “This’ll end badly,” they said. We muttered something about Wikipedia and kept scrolling.
Fast forward to 2025, and we’re now in full existential freakout mode over AI. Think: not just the end of dinner table conversation—the end of humanity. So how did we go from “Facebook might be bad for teens” to “ChatGPT might kill us all”? And more importantly, what does it say about our capacity to learn from past mistakes?
Welcome to the boiling frog versus the asteroid. One slow, simmering demise. The other, a cataclysmic bang.
Boiling Frog Tech: Social Media (2007–2015)
Back in the golden age of avocado toast and millennial optimism, we got hooked on smartphones. They were sleek, sexy, and offered dopamine on tap. Instagram gave us a filtered version of life, Twitter made us pundits in 140 characters, and Facebook made us believe we were all still friends. But according to people like Sherry Turkle, Nicholas Carr, and Jaron Lanier, these tools weren’t just changing how we communicated—they were quietly rewiring our brains.
Empathy was eroding. Attention spans were fragmenting. The ability to sit still with our thoughts—or with each other—was withering away. Yet despite these early warnings, we stayed enamoured with our new toys. We didn’t listen. Why? Because the internet was cool. It was revolutionary. The Obama administration saw tech as a force for democracy. Silicon Valley was promising free stuff, innovation, and frictionless disruption. And regulation? That was for dinosaurs.
The party was roaring, and no one wanted to be the one to turn off the music.
The Great Rewiring
Jonathan Haidt coined the term “The Great Rewiring” for the period from roughly 2012 to 2015, when everything subtly shifted. Adolescent mental health nosedived just as smartphone adoption hit critical mass. Teens went from playgrounds to “like” buttons. From face-to-face friendships to filtered selfies. From unsupervised play to curated feeds.
Depression, anxiety, and loneliness skyrocketed across the Anglosphere. But our main concern at the time? Whether autoplay should be on by default. While academics waved red flags, we debated how many likes were too many to be considered “thirsty.”
And as it turned out, the damage wasn’t limited to mood. The rewiring ran deeper. Attention spans shortened. Sleep patterns fractured. Children were growing up in digital sandboxes, where connection was measured in streaks and self-worth by engagement stats.
The Asteroid Panic: AI (2022–2025)
Now the mood’s shifted. Dramatically. We’re not just worried AI will steal jobs. We’re worried it’ll end us. Elon Musk, Nick Bostrom, and Eliezer Yudkowsky have all made headlines warning of extinction-level events. Suddenly, “p(doom)” is a thing. And it’s trending.
The rhetoric has gone from dystopia-lite to full-blown existential dread. This isn’t about being ghosted by your AI girlfriend—it’s about being vaporised by your synthetic overlord. One minute you’re asking ChatGPT to write your CV; the next, you’re wondering if it’s rewriting the laws of physics behind your back.
So, why the leap from mental health meltdown to species extinction?
Because we blew it last time. And we know it.
Trauma Response: Fool Me Once…
Tristan Harris, the conscience of Silicon Valley, sums it up best: social media was humanity’s first contact with AI—and we lost. Algorithms trained to keep us scrolling turned into engines of outrage, misinformation, and teen anxiety. We watched it happen and did nothing.
So now that ChatGPT can write poems, code apps, and (almost) pass the Turing test, we’re panicking. And maybe, just maybe, we should. Because this time, the stakes feel far less metaphorical.
AI isn’t just hijacking our attention—it’s potentially hijacking civilisation. The same failure to align tech with human values that broke democracy in the social media age could break reality itself. If social platforms taught machines to manipulate us, generative AI may teach them to replace us.
Regulation: Still Lagging
Despite early attempts—Do Not Track, the Privacy Bill of Rights—nothing stuck. Lobbyists won. The internet stayed wild. Meanwhile, data-driven platforms consolidated power, profited from attention, and turned privacy into a quaint memory.
The result? We missed our chance to steer the first tech tidal wave. Now, faced with a second—Generative AI—we’re scrambling to slap on the life jackets. The difference is, this one’s not just going to disrupt the ad industry. It might rewrite civilisation.
Final Thought: From Shallow Waters to Deep Trouble
The critics of the smartphone era weren’t wrong—they were just early. Their warnings about the erosion of thought, attention, and empathy weren’t scaremongering. They were prologues.
Today’s AI panic isn’t just about AGI gone rogue. It’s about guilt. It’s about the neural wreckage of the last decade. And it’s about time we start listening.
Because if the boiling frog taught us anything, it’s this: by the time you realise you’re cooked, it’s already too late.
But what if we’re not too late this time? What if the very panic we’re feeling now, however messy, emotional, or alarmist, is the immune response we lacked before? What if this moment of existential anxiety is finally the catalyst for building tech with foresight, ethics, and human dignity baked in from the start?
Maybe, just maybe, this time we jump out of the pot before it boils, or better yet, we stop building the fire altogether.
If you’re curious what that future might look like, should we get it right and build responsibly, ethically, and with intention, then read the follow-up article.