The Synthetic Singularity

My attempt at a strategic letter to business leaders on the generative AI revolution (2022 to 2025) and the agentic future (2026 to 2028)

Let me start personally. In the last three years, I have had this weird experience of watching reality catch up to things I used to file under science fiction. Not someday science fiction. I mean, I watched it land in my calendar and my budget cycles.

Between November 2022 and November 2025, we went from "have you tried this fun chatbot" to "this agent just refactored a legacy codebase I have been afraid to touch for ten years" and "that robot over there is on its fourth ten-hour shift this week."

I have led technology teams for a long time. I have run significant transformations. Nothing has come close to this in raw speed or depth of change.

I am genuinely excited about it. It lights up the builder part of my brain. It answers a lot of the questions I have wrestled with for two decades about waste, friction, and the gap between what systems should do and what they actually do.

At the same time, I would be lying if I said there was not a knot in my stomach. Because underneath the cool demos, this wave is chewing up entry-level careers, shifting power between countries, and pushing us right up against physical limits like power, water, and grid capacity.

So I want to walk you through what just happened, what is happening now, what is coming, and where leaders like us need to stop being polite and start being very intentional.

A Three-Year Metamorphosis

Awakening

I remember the first week after ChatGPT launched in November 2022. My inbox filled with "you have to see this" links: friends, developers, executives, my own kids. Everyone was suddenly talking to this thing.

Before that, large language models were something you heard about in research papers. Overnight, they were in the browser of every curious human on earth.

Yes, it hallucinated. Yes, it got things wrong. It had no real sense of time. But it did something new. You could ask it, in plain language, to explain a complex topic, draft a policy, sketch a piece of code, or write a poem about your dog, and it responded in a way that felt uncannily fluent.

GPT-4, in early 2023, raised the bar again. Meta pushed Llama 2 into the world, and suddenly the open-source community had serious fuel.

Inside enterprises, though, the vibe was very different. I sat in more than one room where a senior leader said something like, "This is amazing, and also, there is no way I am letting this thing touch production."

They were right to be cautious. Models hallucinated. Evaluations were immature. The gap between a mind-blowing demo and a trustworthy workflow was massive. We got stuck in pilot purgatory. It felt like the technology sprinted ahead while our governance, processes, and people were jogging behind, slightly out of breath.

Integration

Then 2024 hit, and the models started seeing, hearing, and remembering.

We moved from text-only novelty to multimodal utility. These systems could watch hours of video, read huge legal archives, traverse messy codebases, and talk back to you in voice.

Context windows exploded. We went from "please paste no more than thirty-two thousand tokens" to "go ahead and give me your entire knowledge base." I remember the first time I pushed a truly disgusting legacy document set into a model with a million-token context and got back a coherent summary. That felt like a line-crossing moment.

At the same time, prices fell. When the cost per million tokens drops into the single digits, this stops being exotic and becomes infrastructure.
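
To make that concrete, here is a back-of-envelope calculation. Every number in it is invented for illustration, but the shape of the arithmetic is why the economics flipped: at a few dollars per million tokens, running a model across an entire document base costs about as much as a team lunch.

```python
# Illustrative back-of-envelope: what "single-digit dollars per million
# tokens" means at enterprise scale. All numbers are made up for scale.

docs = 10_000                 # documents in a knowledge base
tokens_per_doc = 5_000        # rough size of each document
price_per_million = 3.00      # dollars per million input tokens (illustrative)

total_tokens = docs * tokens_per_doc
cost = total_tokens / 1_000_000 * price_per_million
print(f"{total_tokens:,} tokens -> ${cost:,.2f}")  # 50,000,000 tokens -> $150.00
```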

That is when my excitement and my worry both spiked.

Excitement because I could finally see how you might let an AI system sit across a whole process, not just one little step. Worry because very few organizations were ready for what it means to point something that powerful at their real data, policies, and customer flows.

Agency

Late 2025 is where I mentally draw a heavy line.

Up to that point, we had very clever assistants. They waited for us. We asked. They answered.

Agentic AI is different. These systems plan. They break problems into steps. They call tools and APIs. They loop and correct themselves. They decide what to do next within the guardrails and goals you set. In other words, they act like junior colleagues who are eager, fast, occasionally wrong, and unbelievably persistent.
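
If you want a feel for what that loop actually looks like, here is a deliberately minimal sketch. Everything in it is hypothetical: `call_model` is a stand-in for whatever model API you use, and the single tool is a toy. Real agent frameworks differ in the details, but they all share this plan, act, observe, repeat shape, with a step budget as the crudest guardrail.

```python
# A toy agent loop: plan, call a tool, observe, repeat.
# call_model() is a stand-in for a real model API; the tool is a toy.

import json

def call_model(history):
    """Hypothetical model call. A real agent would send `history` to an
    LLM API and get back either a tool invocation or a final answer.
    Here we fake a two-step run so the loop is runnable end to end."""
    tool_calls_so_far = sum(1 for m in history if m["role"] == "tool")
    if tool_calls_so_far == 0:
        return {"action": "lookup_ticket", "args": {"ticket_id": "T-123"}}
    return {"action": "finish", "args": {"summary": "Ticket T-123 resolved."}}

TOOLS = {
    "lookup_ticket": lambda ticket_id: {"ticket_id": ticket_id, "status": "open"},
}

def run_agent(goal, max_steps=10):
    history = [{"role": "user", "content": goal}]
    for _ in range(max_steps):  # hard step budget: the simplest guardrail
        decision = call_model(history)
        if decision["action"] == "finish":
            return decision["args"]["summary"]
        result = TOOLS[decision["action"]](**decision["args"])
        # Feed the observation back so the next plan can self-correct.
        history.append({"role": "tool", "content": json.dumps(result)})
    return "Stopped: step budget exhausted."

print(run_agent("Resolve ticket T-123"))
```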

Claude Opus 4.5, GPT-5.1, and Gemini 3 Pro defined that new frontier.

Claude punched through 80% on SWE-bench. That put it firmly into "take this ticket and come back with a pull request" territory. GPT-5.1 leaned hard into reliability, careful refactors, and security-aware migrations, precisely the work that keeps enterprise leaders up at night. Gemini leaned into huge context and multimodal search, which is what you need when your problem is "I have ten years of messy video, documents, and logs, please find the signal."

Watching this unfold, I had this weird mix of awe and seriousness. Awe because the level of capability is objectively insane. Seriousness because this is the first time I have looked at software and thought, "this is no longer just something people use; it is now something you have to manage as if it were a worker with real responsibilities."

From Tool to Worker

Here is where the GenAI divide is opening up.

Most companies are still treating AI like an upgraded autocomplete. They drop it into chat panels and ask it to draft emails or summarize meetings. It is helpful. It saves some time. But it is still fundamentally "you prompt, it responds."

The small group that is quietly pulling away has already crossed into "AI as worker" territory.

In healthcare revenue cycle, I have watched agents log in to payer portals, click through screens, read policy language, assemble documentation, submit prior authorizations, and appeal denials. They are not suggesting what a human might do. They are doing the thing.

In engineering, agents read entire repositories, map out dependencies, update code, write tests, and open pull requests that senior engineers review and merge. The human role is shifting toward architecture, system design, and quality control.

In customer operations, I have seen agents run diagnostics, change account settings, credit accounts, and close tickets end-to-end with no human in the loop until there is an edge case.

As someone who has been chasing efficiency and automation for years, this is the stuff I have been waiting for. But it comes with a catch. Once you thread autonomous workers into the middle of your value stream, you are no longer dabbling. You are changing the anatomy of your company. Org charts, risk models, compliance structures, and culture all have to adjust.

That is exciting. It is also very easy to underestimate.

Why I Am So Energized

On the positive side, I am more optimistic about what an enterprise can become than at any point in my career.

I can finally see an operating model where:

  • Humans spend more of their time on judgment, relationships, creativity, and strategy.
  • Digital agents handle the grinding work of reconciliation, data entry, compliance checks, integration glue, and endless status updates.
  • Small and mid-sized organizations get access to capabilities that used to belong only to global giants.

That last one really matters to me. Renting intelligence changes the power dynamics. If you know your business deeply and you know how to design good workflows, you can now compete with companies that have ten times your headcount.

On a more personal level, there is something almost joyful about watching a system finally do the tedious, painful work that has frustrated you for years. The world has a lot of brilliant people doing very dumb repetitive tasks. This wave is a chance to fix that.

But the very forces that make me this optimistic also make me cautious.

The Risks We Have To Name Out Loud

Here are some of the things that keep me from turning this into a pure cheerleading note.

1. The hollowing of early careers.

You can already see it in hiring data and in job boards. Demand for seniors is strong. Demand for juniors is falling. It makes short-term financial sense to let agents handle "junior work" and keep a small number of seniors to supervise. The long-term consequence is that you destroy your own bench. No juniors means no future seniors. That is a slow-burning risk that will show up years from now when you suddenly realize there are not enough experienced architects in the market.

2. Loss of control and invisible failures.

Agents are not static tools. They are dynamic systems that take action. A misconfigured agent can quietly replicate a destructive pattern across thousands of accounts, misapply a policy, or leak sensitive data through a forgotten integration. The fear is not just rogue superintelligence. The near-term fear is boring, ugly, expensive mistakes that no one notices for months because everyone assumes "the system" handled it.

3. Fragility in the stack.

We are increasingly concentrating our critical work on a small set of models, chips, and clouds. When you centralize that much on such a narrow base, you create new single points of failure. A regulatory change, a supply chain shock, or a major outage at one vendor is no longer an inconvenience; it is a full-body hit to your operations.

4. Geopolitical fracture.

The split between the United States, the European Union, and China is real. Model behavior, available features, and legal obligations are diverging. If you are running a global business and your architecture assumes a single global AI fabric, you are going to have some hard conversations with your lawyers and regulators in the next few years.

5. Energy and environment.

We are building an intelligence layer that eats electricity. Data center power demand is climbing fast. In many regions, the limiting factor for AI is not model design but years-long timelines to secure grid capacity. That is why you suddenly see hyperscalers talking seriously about nuclear and advanced energy.

If you build strategies that assume infinite cheap compute, you are building on sand.

I do not share any of this to scare you. I share it because the only grown-up way to engage with this wave is to hold both the upside and the downside in your hands at the same time.

The New Geopolitics Of Intelligence

We now live in a world where AI policy is a core part of national strategy.

The United States has leaned into acceleration and global dominance. The European Union has leaned into protection and rights. China is pushing innovation under hardware constraints.

For a global firm, that means you will end up with sovereign AI clouds whether you like it or not. Data generated in Europe will stay in Europe and be processed by models that comply with the AI Act. Data in the United States will flow through less constrained models. Some features you roll out in one market will be illegal in another.

This can be healthy if you face it early. It forces real architectural thinking. You have to decide where your data lives, which models touch what, and how agents are governed in each region.
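
To show what "decide which models touch what" can mean in practice, here is a minimal sketch of region-pinned routing. The region names, endpoints, and regime labels are invented for illustration; the design point is that the routing table fails closed instead of quietly falling back to another jurisdiction.

```python
# A minimal sketch of region-pinned model routing. Region names and
# endpoint URLs are illustrative, not real services.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModelRoute:
    endpoint: str          # where inference runs
    data_residency: str    # where the data is allowed to live
    regime: str            # which legal regime governs this traffic

ROUTES = {
    "eu": ModelRoute("https://eu.example-ai.internal", "eu", "EU AI Act"),
    "us": ModelRoute("https://us.example-ai.internal", "us", "US federal/state"),
}

def route_request(data_origin: str) -> ModelRoute:
    """Pin workloads to the region their data came from; fail closed
    rather than silently routing to another jurisdiction."""
    route = ROUTES.get(data_origin)
    if route is None:
        raise ValueError(f"No approved AI route for region {data_origin!r}")
    return route

print(route_request("eu"))
```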

The risk is pretending we are still in the old world, where you build once in one cloud and serve everyone the same way. That world is fading.

The Physical Limits Of The Cloud

The "cloud" sounds airy. In reality, it is power-hungry buildings full of hot silicon.

We are now at the point where building new AI capacity is as much a civil engineering and energy planning problem as it is a software problem. Nvidia keeps pushing performance with new architectures. Governments and cloud providers are building clusters that look like national infrastructure.

But the gate is the grid. In many locations, you can build a data center faster than you can secure the power lines for it.

For leaders, this matters because it means your AI road map is now entangled with power markets, permitting timelines, and energy policy. That is a strange sentence to write, but it is true.

The Robots Have Clocked In

2025 will be remembered as the year AI got legs and walked into work.

Tesla Optimus, Figure 02, Agility Robotics' Digit, and others have gone from slick demo videos to actual, repetitive, reliable work in factories and warehouses. They move totes. They carry parts. They take on jobs that people do not want and that employers cannot fill fast enough.

I am more optimistic than fearful here, at least in the near term. A lot of these roles have been chronically understaffed. If a robot can safely do a 12-hour repetition-heavy shift and let a human move into higher-level work, that can be a win for everyone.

Where it gets serious is the integration work. Safety. Procedures. Training. Culture. When your team members share a floor with humanoid robots, your leadership responsibilities change.

The Next Three Years

So, where does that leave us looking out to 2026 through 2028?

Here is my honest view.

AGI-level capability is likely to appear within this window. I do not get hung up on the label. What matters is that we are on track to see systems that can handle most economically valuable cognitive tasks at or above human level, across multiple domains, with minimal oversight. That will force every board to confront questions about control, liability, and purpose.

Sovereign AI will become the default. Your AI stack in Europe will not look like your AI stack in the United States. You will have to manage that complexity.

Agentic economies will start to form. Buyer and seller agents will negotiate contracts. Supply chain agents will coordinate orders. Compliance agents will watch other agents. Contract language will evolve to address the failures of autonomous digital workers. Insurance products will emerge to cover "AI incidents."

That sounds wild. The truth is, we are already seeing early prototypes of all of this.

The Call For Leaders

The three years from 2022 to 2025 were the awakening. We discovered that synthetic intelligence is real, accessible, and incredibly potent.

The next three years are about integration and autonomy.

We are not just building chatbots. We are building a synthetic labor force that can think, read, plan, and act across your entire enterprise stack, and even in the physical world.

Here is my challenge to you.

Stop treating AI like a side quest.

Stop hiding it in innovation labs.

Start redesigning your operating model around agent-centric workflows.

Start investing in the data, energy, and governance foundations that will let you scale without blowing yourself up.

Start having honest conversations with your teams about how roles will change, who will be displaced, and how you will build new talent pipelines in a world where the lowest rung on the ladder is moving.

I am excited about this moment. Very excited. In a world that now moves at the speed of algorithms, indecision is not neutral. It is a decision to fall behind.

You do not have to sprint blindly. But you do have to move.
