Over the past couple of years, advocates of artificial intelligence, or AI, have dreamt of an abundant future. On his blog, Sam Altman, the CEO of OpenAI, predicted, “In the 2030s, intelligence and energy — ideas, and the ability to make ideas happen — are going to become wildly abundant.” This abundance, though short on specifics, connotes a future of collective prosperity.
Recently, a counternarrative has been gaining momentum.
Last April, a team of AI researchers and futurists released AI 2027, a strangely compelling and largely pessimistic forecast of the immediate future of AI. Spearheaded by former OpenAI safety researcher Daniel Kokotajlo, the report envisions 2027 as the year a dangerous and highly disruptive superintelligence emerges, one with the potential to systematically eliminate the human race in the name of resource extraction and economic expansion.
Since its release, the report has garnered outsized attention. Sensational headlines like “AI Forecast Predicts Humanity’s End by 2027” and direct references from J.D. Vance have opened the floodgates for a new wave of anxiety over AI’s role in the future. And while the apocalyptic timeline of AI 2027 has been contested (in response, two Princeton computer scientists released “AI as Normal Technology”), it nonetheless predicted 2025’s newest AI buzzword: agency.
In the last six months, American AI titans such as OpenAI and Google have rolled out “agentic” AI models for consumer use. Unlike generative AI, which exists in a mostly closed loop with its users, agentic AI models can interface with real-world systems and require minimal oversight. Where a traditional model could provide a list of the best hotels in Cabo, an AI agent could navigate a booking website and reserve you a room.
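The mechanics are less mystical than the hype suggests. What follows is a minimal, purely illustrative sketch in Python of the loop at an agent’s core; every name in it (fake_model, search_hotels, book_room) is a hypothetical stand-in for a real language model and real booking APIs, not any vendor’s actual interface.

```python
# Illustrative only: a toy agent loop with stubbed components.
# A real agent would replace fake_model with a language model and
# the tool functions with live API calls; the control flow is the point.

def search_hotels(city: str) -> list[dict]:
    """Stand-in for a real travel-search API call."""
    return [{"name": "Hotel Playa", "city": city, "rate": 180}]

def book_room(hotel: str, nights: int) -> str:
    """Stand-in for a real booking API call."""
    return f"Confirmed {nights} nights at {hotel}."

TOOLS = {"search_hotels": search_hotels, "book_room": book_room}

def fake_model(goal: str, history: list) -> dict:
    """Stubbed 'model': picks the next action based on what has happened.

    Hard-coded here so the example runs end to end; in practice this
    decision comes from an LLM reading the goal and the history.
    """
    if not history:
        return {"tool": "search_hotels", "args": {"city": "Cabo"}}
    if len(history) == 1:
        best = history[0]["result"][0]["name"]
        return {"tool": "book_room", "args": {"hotel": best, "nights": 3}}
    return {"done": True, "answer": history[-1]["result"]}

def run_agent(goal: str) -> str:
    """The agentic part: the model's output drives real actions in a loop."""
    history = []
    while True:
        decision = fake_model(goal, history)
        if decision.get("done"):           # model judges the goal met
            return decision["answer"]
        tool = TOOLS[decision["tool"]]     # dispatch to an external system
        result = tool(**decision["args"])  # act, then feed the result back
        history.append({"tool": decision["tool"], "result": result})

print(run_agent("Book me a good hotel in Cabo"))
```

The design point is the loop itself: the model’s output is not just text for a human to read but an action the software executes, with the result fed back in until the model decides the goal is met.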
In the past couple of months, hundreds of articles (including dozens from Forbes in the last two weeks alone) have been published about agentic AI, heralding it as a massively disruptive force for white-collar wage labor. Apocalypse aside, it seems highly plausible that these models will generate unprecedented amounts of surplus value for early adopters.
On Aug. 2, in the midst of this fervor, UC Berkeley’s Center for Responsible, Decentralized Intelligence (RDI) hosted a summit on agentic AI. Riding the momentum of an online curriculum with over 23,000 participants, the summit sold out days in advance.
I couldn’t miss an opportunity like this. A year ago, I picked up a minor in data science under the vague premise that it was an important career move. Since then, the post-grad employment rate for UC Berkeley students with data science degrees has dropped nearly 10%. With rumors of an apocalypse, or at the very least mass economic disruption, I had to believe there was a reason to remain hopeful.
...
I arrived at 10 a.m. and was met with a line spanning two floors of the Martin Luther King Jr. Student Union. Most attendees were not UC Berkeley students but industry professionals, all well-dressed and over 30. They tried their best to ignore an ominous heckler.
“Does anybody have a sense of humanity left? The least you could do is not ignore me!”
I made my way up to the third floor. Posterboards and demo booths lined every inch of open wall space, and attendees were squeezed shoulder to shoulder. It took minutes to get close enough to a researcher to hear a pitch; by the time I had, he had been asked to relocate because he was blocking the doors.
I stuck to the wall, shimmying close enough to booths to catch brief glimpses of possible agentic innovations: improving depression screening; optimizing data science pipelines; creating inhumanly talented Pokémon-playing bots. Every presentation ended with its speaker surrounded by outstretched hands holding LinkedIn QR codes, attendees rushing to make the necessary inroads.
Eventually, I was drawn into a small booth in the corner after reading the flyer: “Sure, you’re busy. But look closer. In three years, missing out on us could leave you playing catch-up. In ten, the future might feel like it passed you by.”
Reading that the organization — NobelEra — was funded by the Nobel Foundation, I couldn’t ignore this offer. Did they really have the keys to the future? I asked David, an AI researcher manning the booth, to give me the rundown.
Agreeing to speak with me outside, he told me that NobelEra was “a high IQ club for the best young people.” He clarified that entry was only permitted to those who “have IQs above 140, or above 120 but with super good experiences or projects.”
Those who made the cut would have their startups indefinitely bankrolled and would be housed in a San Francisco mansion (with the added bonus of a backhouse for family members with lower IQs).
My heart sank. Worrying that none of my family members were smart enough to get me into the shed, I feared I would have to find another way to secure my spot in the future. David clearly sensed this: as we spoke, his eyes darted back to the table. Before I lost him completely, I asked how he imagined the future. He told me that we were on the precipice of another industrial revolution.
A line had formed at the booth, and David told me he had to leave. Minutes later, a crowd of 40-somethings surrounded the table, apparently just as worried that the future would pass them by.
I made my way outside and texted Akshay Madhani, whom I had met briefly at the tail end of a presentation. He was here to promote his company, Scrollmark, which analyzes social media metrics and makes content recommendations for aspiring creators. After a brief chat about his product, my growing anxiety got the better of me. I told him about NobelEra and asked if he believed the future could really be so dark.
“There’s a very high likelihood that AI will increase the disparities between the haves and the have-nots,” Madhani said. “A lot of the jobs that exist right now are going to go away pretty quickly. Especially in the white collar.”
I pushed him for advice on how to survive this impending crisis.
“Your job is not going to be taken by AI. It’s going to be taken by somebody who knows how to implement, control, maintain and use AI,” Madhani said. “So be that person.”
So that was it. For all the rhetoric of abundance that has characterized the early optimism of AI proponents, the internal reality looked more like a race against obsolescence.
Desperate for another perspective, I made my way back up to the third floor. A lull between speakers gave way to another feeding frenzy as attendees, ranging from industry veterans to startup hopefuls and entire families, crowded anyone with a veneer of insider knowledge. They all asked questions breathlessly, so as to fit in as many words as possible. At one point, a high school student shared concerns that his startup hadn’t yet taken off.
On the patio outside, a mob had gathered around Ed Chi, vice president of research at Google DeepMind. Chi was hosting an impromptu Q&A, and the crowd clamored to get in as many questions as possible. Someone asked what the best AI-driven future would look like, and Chi had an answer ready.
“Everyone is going to walk around with a strap-on augmented brain, and they’re all going to be two times smarter than I am.”
Here was the abundance.
Fuck
...
I left shortly after.