What the ‘OpenAI State of Enterprise AI 2025’ Report Means
TL;DR: OpenAI’s State of Enterprise AI 2025 report shows three clear patterns:
Usage is scaling fast, AI is saving people real time across common tasks, and a gap is opening between organisations that integrate AI into daily work and those that stay at surface level. If you are planning AI over the next 12–24 months, treat this report as a reality check on where value is already showing up and where you are likely behind.
What this report covers
The report draws on:
- De-identified usage data from more than 1 million business customers
- A survey of about 9,000 workers across almost 100 organisations
- Case studies from finance, healthcare, technology, retail, and manufacturing
So this is current behaviour, not a forecast slide. It shows where AI is already embedded in work, and where it is still an experiment.
How AI use is changing inside organisations
The first pattern is scale and intensity.
Over the last year:
- ChatGPT enterprise message volume grew around 8x
- API reasoning token use per organisation increased about 320x
- Use of Custom GPTs and Projects grew about 19x, and now handle around 20% of all enterprise messages
In plain terms, more people are using AI more often, and more of that use is tied to repeatable tasks rather than one-off queries.
AI is moving from “something staff try” to “infrastructure that supports recurring work”.
Where organisations are seeing value
The second pattern is impact on day-to-day work.
The survey data shows:
- Around 75% of workers say AI improves the speed or quality of their output
- Typical time saved is 40–60 minutes per active day
- Data, engineering, and communications roles report 60–80 minutes saved
- Accounting, finance, analytics, communications, and engineering see the strongest gains
Workers also report they can now do tasks that were previously out of reach, including coding, spreadsheet automation, and technical troubleshooting.
For an executive reading this, the practical point is that AI is lifting capability at task level, not only speeding up existing work. That shows up as fewer handoffs to specialists and faster cycle times.
Where the gap is opening
The report makes the maturity gap very clear.
When OpenAI compares “frontier” workers (95th percentile of usage) with the median worker, it finds:
- Frontier workers send about 6x more messages
- They use data-analysis tools about 16x more
- Coding messages show the biggest gap, with frontier users sending 17x more messages than the median
- At the organisational level, frontier firms generate roughly twice as many messages per seat as the median firm, and about seven times more messages to Custom GPTs
The more task types a worker uses AI for, the more time they report saving. People who use AI across roughly seven task categories report around five times more time saved than those who use it across four.
From a strategy point of view, this matters. Buying access to a tool is not the differentiator.
Depth of use across real tasks is.
The case studies in the report give some grounded examples:
- A support platform uses AI to cut phone latency by almost half and resolve over half of calls without escalation
- A home-improvement retailer uses AI assistants online and in-store, doubling conversion when customers use the assistant and lifting satisfaction scores
- A bank automates thousands of legal authority checks a year, freeing scarce legal capacity for higher-value work
- A health insurer answers most benefits questions instantly and automates a large share of messages
- A biotech company reduces a critical evidence-review step from weeks to hours when drafting product profiles
None of these use cases rely on exotic ideas. They all attack slow, repetitive, information-heavy parts of the work.
The lesson is:
Start with processes that are text-heavy, rules-based, and already documented, then apply AI where humans are mostly reading, summarising, or drafting.
How you should use these findings
If you are responsible for planning AI, this report is a benchmark.
Three useful questions to ask against it:
- Usage: How often are people actually using AI during the week, and across how many task types? If it is only for writing or summarising, you are behind the frontier group. (A simple way to estimate this from usage logs is sketched after this list.)
- Workflows: Where have you embedded AI into a defined process, rather than leaving it to personal experimentation?
- Readiness: Have you enabled secure data access, clear guidance, evaluation, and basic training, or are people working around gaps with ad-hoc use?
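If you want a rough version of that usage check without waiting for a BI dashboard, the sketch below counts distinct task categories and active days per worker from an exported interaction log. The log schema, column names, and task categories are assumptions for illustration, not the report’s methodology; adapt them to whatever your AI platform actually exports.

```python
# Minimal sketch of a usage-breadth check, assuming a de-identified export
# of AI interactions with a user id, a timestamp, and a task category.
# All field names and categories below are illustrative assumptions.
from collections import defaultdict
from datetime import datetime

# Hypothetical export: one row per AI interaction.
usage_log = [
    {"user": "u1", "ts": "2025-05-12T09:30", "task": "writing"},
    {"user": "u1", "ts": "2025-05-12T14:10", "task": "data-analysis"},
    {"user": "u1", "ts": "2025-05-14T11:05", "task": "coding"},
    {"user": "u2", "ts": "2025-05-13T10:00", "task": "writing"},
]

tasks_per_user = defaultdict(set)        # distinct task categories per worker
active_days_per_user = defaultdict(set)  # distinct days with any AI use

for row in usage_log:
    tasks_per_user[row["user"]].add(row["task"])
    active_days_per_user[row["user"]].add(datetime.fromisoformat(row["ts"]).date())

for user, tasks in tasks_per_user.items():
    print(f"{user}: {len(tasks)} task types across "
          f"{len(active_days_per_user[user])} active days")
```

Workers clustered at one or two task types are the surface-level group the report describes; the frontier group shows up as high breadth and high frequency together.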
In our client work, the organisations that move fastest are the ones that treat this as operational change, not as a side project. They pick a small number of priority workflows, define the guardrails, and measure outcomes in minutes saved, error rates, and throughput.
FAQs
Q: If depth of use is the differentiator, how do I work out which tasks should expand first?
Map where staff already use AI even occasionally. Productivity rises when more tasks are included, so expand from existing behaviour, not abstract use cases.
Q: How do I judge whether our usage numbers are healthy or behind?
Compare how many recurring workflows (not tasks) involve AI. The report’s frontier firms use AI inside defined processes. If you only see ad-hoc use, you’re behind.
Q: What’s the simplest evidence to show an executive team that AI is delivering value?
Measure cycle time. The report links value to minutes saved and faster task completion. If your key processes aren’t measurably faster, your deployment isn’t mature.
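As a concrete starting point, here is a minimal sketch of a before/after cycle-time comparison, assuming you log start and end timestamps for each run of a process. The field names and sample numbers are illustrative, not figures from the report.

```python
# Minimal sketch: median cycle time for a process before and after AI
# was embedded. Assumes each run is logged with start/end timestamps;
# field names and sample data are illustrative assumptions.
from datetime import datetime
from statistics import median

def cycle_minutes(runs):
    """Median minutes from start to end across a list of runs."""
    return median(
        (datetime.fromisoformat(r["end"])
         - datetime.fromisoformat(r["start"])).total_seconds() / 60
        for r in runs
    )

before = [{"start": "2025-03-03T09:00", "end": "2025-03-03T11:10"},
          {"start": "2025-03-04T13:00", "end": "2025-03-04T15:40"}]
after = [{"start": "2025-06-02T09:00", "end": "2025-06-02T09:55"},
         {"start": "2025-06-03T14:00", "end": "2025-06-03T14:45"}]

print(f"before AI: {cycle_minutes(before):.0f} min median")
print(f"after AI: {cycle_minutes(after):.0f} min median")
```

If the median barely moves after rollout, the report’s framing suggests the deployment is still at surface level.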
Q: What should I stop doing based on this report?
Stop treating “experiments” as progress. The data shows value comes from embedding AI into repeatable work. Small pilots without integration don’t move outcomes.
Q: How do we avoid becoming one of the median organisations falling behind?
Assign ownership. Frontier organisations in the report build internal capability, standardise workflows, and monitor usage. Without ownership, depth never develops.
If you want help turning these patterns into a concrete plan, our AI Strategy Blueprint takes this kind of evidence and turns it into a small set of priority workflows, guardrails, and capability steps that fit your organisation’s actual constraints.
