AI Is Moving Fast. What Matters Most Now Is How We Handle the Shift.

In just a couple of years since ChatGPT launched on GPT-3.5, AI has gone from something new to something everywhere. And now it’s moving even faster... because AI is helping build better AI.
It’s hard to keep up. Companies are racing. Countries are racing. Everyone’s trying to automate, get more done, and stay ahead.
Some people are excited. Some are scared. And honestly, both make sense. There’s real potential here... to improve healthcare, education, even reduce poverty. But there are also real risks... jobs changing, bad actors, global tensions.
We’ve never had a technology move this fast, or touch this many parts of life. The big question isn’t just “what can AI do?” It’s: how do we handle the shift? Because if AI really is going to change everything… then the transition is what matters most.
Some tech leaders are brushing off the risks. Others are being more direct. I think we need more of that... more honesty, more responsibility. This isn’t just about building cool things. It’s about making sure we’re moving toward something better, together.
The opportunity is real... so are the risks
There’s so much potential in AI. We’re already seeing signs of how it can help... faster medical breakthroughs, better tools in education, and new ways to boost productivity across nearly every industry.
Some leaders see this as the start of a new golden era. Like Demis Hassabis, CEO of DeepMind, who said:
“If everything goes well, then we should be in an era of radical abundance, a kind of golden era.”
Wired, 2025
Marc Andreessen goes even further:
“AI will not destroy the world, and in fact may save it.”
a16z, 2023
And Sam Altman, CEO of OpenAI, said this at Davos:
“We believe that we can manage through it… step by step… build these systems that deliver tremendous value while meeting safety requirements.”
WEF, 2024
Altman has also framed AI as a massive force for economic progress:
“AI will be net-positive for society, significantly accelerating economic growth.” —Sam Altman, Lex Fridman podcast, 2023
If handled well, he believes AI could increase consumer surplus, drive innovation, and free people up to focus on more meaningful work.
That kind of hope is real. And honestly, I share it.
But we also need to be honest about the risks.
AI could reshape the job market in ways we’re not ready for.
Dario Amodei, CEO of Anthropic, recently warned:
“AI could wipe out half of all entry-level white-collar jobs—and spike unemployment to 10–20% in the next one to five years.”
Axios, May 2025
He also said that many companies and governments are “sugar-coating” what’s coming... and called for policies to help people through the transition.
That stuck with me. Because while the breakthroughs are exciting, this shift is going to be hard for a lot of people. Especially if we don’t plan ahead.
I think we need to hold both truths at once.
AI has the power to improve the world... and it also has the power to leave a lot of people behind, fast.
The opportunity is real.
But so is the risk.
And if we only talk about one side, we’re not being honest.
The real risk is the transition
I don’t think AI is good or bad. I think it’s powerful. And what happens in the next few years... how we manage the shift... is what matters most.
But it’s not just about avoiding the worst-case scenarios. If we guide this shift well... there’s also a real chance we build something better than anything we’ve had before.
We’re not just talking about better search or faster emails anymore. We’re talking about something that could reshape how people work, learn, live, and earn.
Some, like Elon Musk, see this not as an end... but as a new beginning.
“There will come a point where no job is needed... AI and robots handle the work.” —Elon Musk, World Government Summit, 2023
He envisions a future where automation leads to an age of abundance... one where people receive a universal income and are free to spend more time on creativity, exploration, and connection.
It sounds liberating… but most of us aren’t even close to ready for it.
Satya Nadella, CEO of Microsoft, put it this way:
“We have to take the unintended consequences of any new technology along with all the benefits - and think about them simultaneously.”
WEF, Jan 2024
That balance is what’s missing right now. Too much of the conversation is focused on what AI can do, and not enough on what happens when it does.
Mo Gawdat, former Google X executive, added this:
“There’s no pause button, no ‘off switch’ once AI surpasses human intelligence.”
It’s a powerful reminder that once we cross certain thresholds, there’s no going back. And that’s why the transition... not just the tech... is the real challenge.
Gary Marcus, one of the more skeptical voices in the field, said:
“In the long term, we have no idea how to make safe and reliable AI - and that just can’t be good.”
Forbes, 2024
It’s a blunt take, but one that’s hard to ignore. Especially as models become more powerful and start to make decisions we can’t always trace or control.
Dario Amodei... who’s normally pretty balanced... has also said he doesn’t believe in slowing down innovation, but he’s pushed for something simple: transparency.
In a recent op-ed, he argued that AI labs should be required to share their testing methods and risk mitigation strategies before releasing new models... because as models grow more powerful, transparency and accountability will matter more than ever.
I agree with that. Because whether you’re optimistic or cautious, what matters most is how we guide this transition. Not just for founders and companies, but for everyone this will impact.
What If We Get It Right?
It’s easy to focus on the risks right now. And they’re real.
But it’s worth remembering that some of the most credible voices in tech are also imagining a radically better future... if we handle this transition wisely.
Bill Gates has wondered out loud if AI might help us reach a point where we only work two or three days a week. Jamie Dimon, CEO of JPMorgan, recently said our kids might grow up working just 3.5 days a week. Eric Yuan, CEO of Zoom, sees us moving toward a 32-hour workweek sooner than we think. These aren’t sci-fi predictions. These are the co-founder of Microsoft and the CEOs of JPMorgan and Zoom telling us a lighter, smarter workweek is within reach.
Not because we’re doing less…
But because AI is doing more.
Demis Hassabis, CEO of DeepMind, calls this a potential “golden era”... a time of radical abundance, where we eliminate scarcity and solve the world’s biggest problems. He believes AI could help cure diseases, discover new energy sources, and extend human lifespan.
Sam Altman sees AI as a “force multiplier” for creativity. Not replacing us… but freeing us. Freeing us from drudge work. Giving us more time for what makes us human.
Satya Nadella has talked about democratizing access to healthcare and education... giving every student a personal tutor, every patient a smarter diagnosis, every small business a competitive edge.
Reid Hoffman calls it superagency: the ability for individuals to do what used to take entire teams.
Even Elon Musk, despite his warnings, has said AI could usher in an age where no job is needed for survival. Where people receive a universal high income and spend their time on creative or meaningful pursuits.
That’s the possibility we’re holding.
Not just a future of faster emails or automated customer support...
But a future where work is lighter, knowledge is freer, and opportunity is broader.
Of course, none of that happens by default.
We won’t arrive at abundance just by building faster models.
We’ll only get there if we guide this shift... ethically, carefully, and together.
Because AI is powerful.
And how we use it will shape not just our work.. but our lives.
We need honesty from leaders... not just hype
As I said up top: some leaders are brushing off the risks, while others are being more direct about them. We need more of the latter... more honesty, more responsibility.
This technology is moving too fast, and the stakes are too high, for vague answers or PR-friendly soundbites. Dario Amodei called it out clearly:
“Companies and governments are sugar-coating the coming disruption.”
Business Insider, May 2025
He’s not saying we should slow down innovation. He’s saying we need to be real about what’s coming... and start preparing people now.
Jensen Huang, CEO of Nvidia, offered a reminder that feels important right now:
“You’re not going to lose your job to AI. You’ll lose it to someone using AI.”
—Jensen Huang, Stanford University, 2023
In other words, AI itself won’t simply replace people; it will amplify the people who know how to work with it. But that shift still needs support, guidance, and access so that everyone can benefit.
Elon Musk, who’s been sounding alarms about AI for years, put it this way:
“There’s a non-zero chance of it going Terminator. It’s not zero percent. It’s a small likelihood of annihilating humanity, but it’s not zero. We want that probability to be as close to zero as possible.”
Wall Street Journal, 2023
You don’t have to agree with everything Musk says to understand the point.
We can’t afford to ignore worst-case scenarios just because they’re uncomfortable to think about.
Leaders in this space... especially the ones building the most powerful models... have a responsibility to be honest. To talk openly about tradeoffs. And to help guide the world through this shift, not just drop tools into it and hope for the best.
Because if we don’t talk about the hard stuff now… we’ll be forced to deal with it later, when it’s even harder.
What I’m doing now... and what I hope others will do too
Right now, I’m building an AI platform called Surfn AI... designed to help businesses and creators grow faster with agents that actually get things done.
Not just answer questions or generate text… but take meaningful action.
And while I’m excited about what we’re building, I also believe that how we build matters just as much.
Satya Nadella said something that stuck with me:
“I don't think the world will put up anymore with any of us (in the tech industry) coming up with something that has not thought through safety, trust, equity.”
WEF, Jan 2024
I’ve been thinking a lot about this transition we’re in. What it means for work, for trust, for the future. And I don’t think anyone has it all figured out.
But I do think this:
If more of us... especially those building AI... stay honest, stay human, and think long-term… we might just make this shift a good one.
As Demis Hassabis has suggested, we’ll need to radically change how we think and behave if we want to fully benefit from AI and avoid its downsides.