Six Decades of AI: So What Happens Now?

Ryan Flanagan
Nov 28, 2025

TLDR: Ray Kurzweil’s TED talk is a fast tour through 60 years of AI progress. Some of his predictions stretch into science fiction, but the underlying pattern he highlights is hard to ignore. Compute keeps compounding, capability follows, and organisations that treat AI as a long-term capability, not a novelty, avoid the most painful missteps. This piece translates his history and claims into the parts that matter for anyone responsible for making decisions in the next five years.

Why does half a century of AI progress matter today?

Ray Kurzweil, who joined the AI field in 1962, is one of the OGs of AI. In his TED talk he opens with a line that should unsettle anyone making operational decisions: “I’ve been involved with AI for 61 years.”

In the early 1960s, mentioning AI triggered the response: “What’s that?”

Today it triggers vendor demonstrations, promises of productivity, and a rush to deploy models most teams barely understand. The disconnect is obvious. The history is deep; the organisational maturity is shallow.

The past matters because it forces a correction. AI has always advanced gradually, then appeared sudden. The last two years only felt like a shock because most organisations weren’t watching the underlying trend. The next shock will be worse if this context is ignored.

What is Kurzweil’s strongest point?

The future predictions are colourful. The immediate lesson is not. He shows a single chart he has tracked for decades. It maps calculations per second per dollar, from 0.000007 in 1939 to half a trillion today. He calls it a 75-quadrillion-fold increase.
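
A quick back-of-the-envelope check on those two endpoints (a minimal sketch using the figures as quoted in the talk, not independently sourced data) confirms the order of magnitude:

```python
# Rough sanity check on the compute-per-dollar endpoints quoted above.
# Both values are the ones cited in the talk, not independent measurements.
calcs_per_dollar_1939 = 0.000007            # calculations/sec per dollar, 1939
calcs_per_dollar_today = 500_000_000_000    # "half a trillion" today

fold_increase = calcs_per_dollar_today / calcs_per_dollar_1939
print(f"{fold_increase:.1e}")  # ~7.1e16
```

That works out to roughly 7 × 10^16, the same order of magnitude as the 75-quadrillion figure he cites.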

The point is simple.

We didn’t get large language models because someone had a bright idea.
We got them because compute, scale, and cost finally crossed a threshold. This distinction matters. It explains why:

  • models failed three years ago
  • models worked two years ago
  • models are accelerating now

So now, capability follows scale, not optimism.

Organisations that still treat AI as a “wait and see” topic are making a category error. The curve won’t pause while they tidy their SharePoint folders and run a few PowerPoint Copilot designs.

Before Kurzweil drifts into speculation, he lists areas where AI is already accelerating real work:

  • Digital drug simulation.
  • Protein modelling.
  • Compressed discovery cycles.

These rest on real infrastructure and measurable progress. Then the talk veers into longer-term claims: nanobots, immortality, “11 dimensions”, expanded consciousness. Interesting, but irrelevant to your next budget cycle.

The danger isn’t Kurzweil’s futurism.

It’s the way leaders sometimes mix grounded progress with speculative visions and then design strategy around both. That produces confused priorities, unrealistic expectations, and a queue of stalled pilots.

Separating what is real from what is performative is now a core leadership skill.

What does the last 60 years tell us about the next five?

Three patterns matter more than any prediction date.

First, AI progress is uneven but directional.
It moves in jumps when compute unlocks new capability. The rhythm is unpredictable, but the trend is not.

Second, the bottleneck is no longer the model.
It is your data, your processes, your talent, and your governance. Kurzweil makes this clear indirectly. AI accelerates when the underlying infrastructure exists. Organisations lag when theirs does not.

Third, judgement rises in value as tools automate knowledge work.
Kurzweil talks about intelligence increasing. What increases first is the need for people who can tell good outputs from bad ones, manage risk, and make decisions under uncertainty.

AI compresses knowledge.
It does not compress responsibility.

What breaks if organisations (or you) ignore the curve?

Kurzweil’s timelines don’t need to be right for this question to matter. If capability keeps climbing, even at the conservative end:

  • existing workflows will become misaligned
  • oversight will struggle to keep pace
  • shadow AI will become a governance problem
  • teams will rely on models they don’t understand
  • vendors will exploit the capability gap
  • critical decisions will be made on unverified outputs

This is not an existential threat. It is a practical one. Most failures in AI programmes come from rushed adoption layered on top of unclear processes and unstructured data. The fix starts long before a model is deployed.

So what should teams focus on now?

Kurzweil’s history gives a clear signal: capability will keep shifting, even if predictions vary. The only sustainable response is preparing the environment the capability lands in. That means:

  • clarifying intent
  • mapping where AI can improve throughput, accuracy or service
  • understanding data quality and availability
  • addressing skills gaps early
  • building governance that can survive fast-moving tools

None of this depends on AGI dates or futurist claims. It depends on taking the long view seriously enough to act in the short view. 

One irritation here.

A surprising number of organisations still treat AI planning like optional homework, then wonder why pilots collapse. The work isn’t glamorous, but it prevents rework later, just as it has with every other new technology.

If you want to turn long-term AI signals into practical decisions today, the AI Strategy Blueprint gives you a structured path to align intent, data, governance and delivery without overreach.

FAQs

Q: If capability jumps again in the next 12 months, how do we stop our current AI plans becoming obsolete before implementation even finishes?
A: Treat plans as living documents. Build review cycles into governance. Most AI programmes fail because the strategy freezes while the tools evolve.

Q: How do we distinguish genuine capability signals from exaggerated vendor claims when both arrive packaged in the same language?
A: Force vendors to show the workflow impact, not the feature list. If they can’t map the model to a task, decision, or measurable outcome, the capability is unproven.

Q: What does “judgement rises in value” mean in practical terms?
A: You will inherit more outputs than you can easily verify. The skill is not using the tool. The skill is knowing when not to trust it, and how to escalate doubt inside your organisation.

Q: How do we prepare staff when AI tools change faster than internal training cycles can keep up?
A: Stop training on tools. Train on behaviours, oversight steps, verification habits, and escalation paths. Those survive version changes.

Q: If compute is the real driver, how do we plan financially when hardware cycles outpace budget cycles?
A: Assume costs will drop but model sizes will grow. Budget for usage variability, not static prices. Forecast by workload, not model.
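
As a rough illustration of forecasting by workload rather than by model, here is a minimal sketch; the task names, volumes, token counts and per-token prices are hypothetical placeholders, not vendor quotes:

```python
# Hypothetical workload-based cost forecast: estimate monthly spend from
# task volumes and token usage rather than from a single model's list price.
workloads = {
    # name: (tasks per month, avg input tokens, avg output tokens)
    "support_triage": (12_000, 800, 300),
    "contract_summaries": (900, 6_000, 1_200),
}

# Assumed blended prices per 1,000 tokens (placeholders, not real pricing).
price_in_per_1k = 0.0025
price_out_per_1k = 0.01

monthly_cost = sum(
    n * (tok_in / 1000 * price_in_per_1k + tok_out / 1000 * price_out_per_1k)
    for n, tok_in, tok_out in workloads.values()
)
print(f"Estimated monthly spend: ${monthly_cost:,.2f}")
```

Re-running the same forecast as prices fall and usage grows is what budgeting for usage variability looks like in practice.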

Q: Where should boards intervene first if they’re late to this conversation?
A: Start with clarity of intent and risk appetite. Without both, teams will adopt tools ad hoc and governance becomes performative.

Q: How do we prevent teams from quietly adopting AI before guardrails are ready?
A: You won’t prevent it. You surface it. Create safe disclosure pathways. Punishing early adopters only drives usage underground.

Q: What is the single biggest blind spot organisations carry into AI planning?
A: They underestimate the gap between “tool available” and “tool embedded”. The distance between those two stages is where most rework lives.

Q: Is there any credible way to forecast which AI capabilities land next?
A: Yes, but not through futurism. Watch compute announcements, research scaling papers, and inference cost curves. Those tell you more than timelines in talks.

Q: What happens if we wait another year?
A: You’ll still need data readiness, governance, and role clarity. Waiting doesn’t remove the work. It only compresses it into a smaller window with less margin for error.