9 leading AI strategy reports scored across 6 dimensions.
Q1 is a time for renewing gym memberships, bemoaning Dry January and hunkering down through the dark winter nights. It is also peak season for AI predictions, year ahead outlooks and wild speculation. This year we have had a bumper crop with significant contributions from OpenAI, Anthropic, McKinsey (twice), Accenture, BCG, Bain, EY and IBM.
These are the organisations at the centre of what is happening in AI right now, which should make them best placed to say where it is heading and what businesses should do about it. Between them they have surveyed tens of thousands of executives and workers across every major industry.
Even so, it's hard to believe even the most die-hard AI fanatic has had the chance to read them all. This is a shame because they do contain real value. Reading only one or two of them, however, could give a distorted picture, as they each have different focuses and inherent biases (more on that to follow).
This article creates a synthesised view of all nine, accompanied by a simple scoring framework to map where they overlap and where they diverge, followed by ten key takeouts from the collection. It finishes with three practical actions any business could apply today.
To compare nine reports with different audiences, different definitions of success, and different commercial incentives, we built a simple framework. Each report is scored from 1 to 10 across six dimensions based on its claims and recommendations. These are not quality ratings. They are a map of positioning.
The six dimensions are: AI Value, Urgency, Tech, Growth, Org Change, and Foundation First.
The first thing these charts reveal is how much agreement there is. Composite scores sit in a tight band between 4.2 and 7.5, a 33-point spread on a 100-point scale. For nine reports written by competing organisations, that is a narrow range. A generous reading is that the industry is converging on a shared understanding. A more critical one is that there is some degree of groupthink at work.
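To make the mechanics concrete, here is a minimal sketch of how the composites could be calculated, assuming the composite is a simple mean of the six 1 to 10 dimension scores (the article does not spell out the aggregation). Only McKinsey State of AI's Value, Urgency, Tech, and Growth scores are quoted in the text; every other number below is illustrative, chosen only so the composites land on the published 4.2 and 7.5.

```python
# Minimal sketch of the scoring framework. Assumption: the composite is a
# simple mean of the six 1-10 dimension scores (the article does not state
# the aggregation). Only McKinsey State of AI's Value/Urgency/Tech/Growth
# scores are quoted in the text; every other number here is illustrative,
# chosen so the composites match the published 4.2 and 7.5.

from statistics import mean

DIMENSIONS = ["AI Value", "Urgency", "Tech", "Growth", "Org Change", "Foundation First"]

scores = {
    "McKinsey State of AI": {"AI Value": 2, "Urgency": 4, "Tech": 3, "Growth": 4,
                             "Org Change": 6, "Foundation First": 6},   # last two illustrative
    "Accenture":            {"AI Value": 8, "Urgency": 8, "Tech": 7, "Growth": 8,
                             "Org Change": 7, "Foundation First": 7},   # all illustrative
}

def composite(report: dict) -> float:
    """Mean of the six dimension scores, on the original 1-10 scale."""
    return round(mean(report[d] for d in DIMENSIONS), 1)

for name, report in scores.items():
    c = composite(report)
    print(f"{name}: {c}/10  ({c * 10:.0f} on a 100-point scale)")
```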
Accenture's radar has the largest area and the highest composite score (7.5). Which makes sense. Accenture offers a broader range of services than any other author in the set, so it has a commercial reason to cover every dimension rather than weighting one over the others.
OpenAI scores 8 on Tech, as you would expect from a technology company. Anthropic scores significantly lower (5). That gap is worth noting. The lower score could reflect a more holistic view of what businesses need in 2026, or simply greater confidence in the technology itself, freeing Anthropic to spend more of its report on adoption, agents, and organisational readiness rather than on selling the capability.
The smallest radar belongs to McKinsey's State of AI, with a composite of 4.2. It is the only report that scores below 5 on four of six dimensions (Value 2, Urgency 4, Tech 3, Growth 4). Where the rest of the set is largely prescriptive — here is what AI can do, here is why you should act now — McKinsey's State of AI is descriptive. It reports what is happening across the broadest enterprise dataset in the set, with limited hyperbole. That restraint makes it the closest thing to a true outlier in this cohort, and arguably the most useful counterpoint to the unbridled optimism elsewhere.
This is both the biggest area of agreement and the biggest area of disagreement in the entire set. All nine reports accept that AI is generating measurable returns somewhere. They just disagree on how widespread those returns are. Anthropic's survey of 500+ technical leaders found 80% reporting measurable economic returns from AI agents. Accenture's data shows the strongest performers achieving 2.2x revenue growth and a 37% EBITDA lift. On the other side, McKinsey found that while 88% of companies are using AI, only 5.5% say more than 5% of their EBIT comes from it. It would take a deeper dive into the underlying data sources to establish who has the most credible claim, but the gap itself highlights a key limitation of these reports in general.
The numbers tell this story clearly. Consultancies give an average score of 4.4 for AI Value whereas platforms average 8.3, nearly double. On Org Change, it flips: platforms average 3.0, consultancies average 6.0. Why? One reason might be that these findings suit the respective business models. OpenAI and Anthropic are in the business of selling AI. A report demonstrating strong adoption and clear returns is a sales asset. It goes to investors, procurement teams, the press. McKinsey and Bain make money when businesses face complex transformation challenges that need expert help. A report showing that AI adoption is messy and incomplete serves their commercial model just as well. Neither incentive makes the data wrong. But knowing this is basic due diligence for anyone reading these reports: they are best taken with a pinch of salt.
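As a small illustration, the platform-versus-consultancy split is just a per-dimension average over each author group. The grouping and the individual scores below are illustrative assumptions (only McKinsey State of AI's AI Value score of 2 is published); the group averages quoted in the article are 4.4 versus 8.3.

```python
# Sketch of the platforms-vs-consultancies comparison: average one dimension
# across each author group. Grouping and per-report numbers are illustrative
# assumptions, apart from McKinsey State of AI's published AI Value score of 2.

from statistics import mean

ai_value = {
    "OpenAI": 8, "Anthropic": 9,                     # platforms (illustrative)
    "McKinsey State of AI": 2, "Bain": 5, "BCG": 6,  # consultancies (illustrative, except McKinsey's 2)
}
platforms = ["OpenAI", "Anthropic"]
consultancies = ["McKinsey State of AI", "Bain", "BCG"]

def group_average(scores: dict, members: list) -> float:
    """Mean of one dimension's scores across the reports in a group."""
    return round(mean(scores[r] for r in members), 1)

print("Platforms:", group_average(ai_value, platforms))          # 8.5 here, 8.3 in the article
print("Consultancies:", group_average(ai_value, consultancies))  # 4.3 here, 4.4 in the article
```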
Every report in the set mentions AI agents or agentic AI. The language varies: OpenAI calls them “core infrastructure,” McKinsey describes “the agentic organisation,” Bain outlines four levels of maturity from basic information retrieval to “multi-agent constellations.” But the direction is consistent. AI is moving from something you ask questions to something you give goals. It searches, writes, analyses, and decides, coming back to a human when it hits a judgment call. Anthropic found that 57% of organisations are already running multi-step agentic workflows, with 81% planning more complex use cases in 2026. BCG found that agents already account for 17% of total AI value in 2025, expected to reach 29% by 2028. If you're talking about AI in 2026, you probably need to be talking about agentic AI.
A year ago, the conversation was about which AI model was best. All nine reports have moved past that. The constraint now is everything around the model: whether your data is accessible, whether your systems can talk to each other, whether your processes are designed to let AI operate within them. Anthropic found that the top barriers to scaling AI agents were integration challenges, data readiness, and change management. Not model capability. Bain found the same: three out of four companies said the hardest part of AI adoption was getting people to change how they work, not getting the technology to function.
This comes through in almost every report. BCG found that only 5% of companies globally qualify as “future-built” for AI, but that this minority is pulling further ahead. Bain found that AI leaders are improving EBITDA by 10% to 25%, while laggards are falling behind. Accenture's data shows the strongest performers achieving 2.2x revenue growth and a 37% EBITDA lift. McKinsey's numbers tell the same story from the other end: only 39% of companies report any bottom-line impact from AI at all. Is this narrative designed to create FOMO? Perhaps. But are you willing to bet that it is only that, and risk being left behind?
Every report acknowledges the time and cost savings AI delivers. OpenAI found workers saving 40 to 60 minutes a day. But the more important finding, repeated across the set, is that the businesses generating the strongest returns have moved beyond efficiency into new capability. EY argues that the companies treating AI as a growth engine rather than a cost-saving exercise are the ones creating distance from competitors. The value is growing fastest in areas that were not possible before, not in areas where existing work is just being done more cheaply.
No report in the set argues that humans become irrelevant. What they all describe is a change in what humans do. A good analogy is the human moving from writer to editor: no longer responsible for production and delivery, but providing oversight and handling the exceptions. McKinsey's Agentic Org paper puts it in concrete terms: a human team of two to five people can already supervise 50 to 100 specialised AI agents running an end-to-end process. They project AI systems could complete four days of work without supervision by 2027, with the length of tasks AI can handle doubling every seven months since 2019. BCG found that 43% of heavy AI adopters expect growing demand for generalists who can manage human-agent teams, while 29% expect fewer traditional entry-level roles. Few roles are likely to be free from AI disruption. The challenge for individuals, leaders, and businesses alike is how well they can adapt.
McKinsey's Agentic Org paper scores 10 on Org Change, the highest single score in the index. OpenAI scores 2. The question is not whether AI changes how organisations work — every report accepts that it does. The question is how much structural change is needed, and how soon. McKinsey argues for fundamental restructuring: small human teams supervising large fleets of specialised agents. Bain takes the opposite approach: fix the foundation first, then let the structure follow. The gap between those two positions is where most businesses will need to place their bet.
Bain scores 9 on Foundation First. Their argument is simple: the difference between companies getting real returns and companies stuck in perpetual pilot mode is almost entirely about groundwork. Data accessibility. System interoperability. Governance embedded into processes. Clear ownership of who is accountable for what. This is not a new finding, but its consistency across the set is notable. Even the platform reports, which score lower on this dimension, acknowledge that poor data and disconnected systems are among the top barriers to adoption. The reports just differ on how much emphasis to place on it versus pushing ahead with the technology that already exists.
Seven of nine reports score 6 or above on Urgency. EY, focused specifically on the technology sector, puts it most bluntly: the pace of AI innovation now makes speed of execution the single most important competitive variable. Accenture and BCG push hard on the same point, with BCG projecting that AI agent value will nearly double between 2025 and 2028. McKinsey's Agentic Org paper notes that the length of tasks AI can handle has been doubling every four months since 2024. Bain estimates that as much as half of overall technology spending could eventually go to AI agents. The collective view is that the window for building AI capability while competitors are still figuring it out is closing fast.
While many of us say that “every day is different,” I think if we were honest with ourselves there is probably more repetition than we let on. Challenge yourself to take a zoomed-out view of your working day, week, or month and identify the end-to-end process that you or your team are doing most often or that takes up the most time. Take this process and strip it back to its core components: what are the inputs, what are the junctures, and what are the outputs. Then rebuild it so it can be executed by AI and overseen by a human.
This is the single action that most consistently separates the businesses in McKinsey's 39% (those reporting bottom-line impact) from the other 61%. The difference is not which tools they bought. It is whether they changed how the work gets done. The early wins cluster in predictable places: data-heavy, high-volume, repeatable work. Software delivery. Knowledge management. Customer onboarding. Internal process automation.
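To make that rebuild concrete, here is a deliberately generic sketch of “executed by AI, overseen by a human”, using the customer onboarding example above. Every function name is a hypothetical placeholder rather than any vendor's agent API; the point is the shape: AI runs each stage, and a person signs off only at the junctures flagged as judgment calls.

```python
# Hypothetical sketch of "executed by AI, overseen by a human": each stage
# runs automatically, and the workflow escalates to a person only at the
# junctures flagged as judgment calls. All names here are placeholders,
# not a real agent framework or vendor API.

from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    needs_human_signoff: bool = False  # a "juncture" in the article's terms

def run_with_ai(stage: Stage, payload: dict) -> dict:
    """Placeholder for the AI execution of one stage (model call, tool use, etc.)."""
    payload[stage.name] = f"output of {stage.name}"
    return payload

def human_review(stage: Stage, payload: dict) -> bool:
    """Placeholder for a human approving or rejecting a stage's output."""
    print(f"[review] {stage.name}: {payload[stage.name]}")
    return True  # approved in this sketch

def run_process(stages: list, payload: dict) -> dict:
    """Run every stage with AI, pausing for human sign-off only where flagged."""
    for stage in stages:
        payload = run_with_ai(stage, payload)
        if stage.needs_human_signoff and not human_review(stage, payload):
            raise RuntimeError(f"Stage '{stage.name}' rejected; process halted")
    return payload

# Example: a customer-onboarding process stripped back to its stages.
onboarding = [
    Stage("collect documents"),
    Stage("verify identity", needs_human_signoff=True),  # the judgment call
    Stage("provision account"),
    Stage("send welcome pack"),
]
result = run_process(onboarding, {"customer": "ACME Ltd"})
```

The structure, not the stub functions, is the takeaway: the human appears only where a juncture has been marked, which is the writer-to-editor shift the reports describe.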
You don't want to hear it. I don't want to have to say it. But I'm afraid the quantity, quality, and interconnectivity of your data are going to be key determinants of whether you can deliver value from AI in 2026. This means data quality governance, approval flows, ownership rights, process maps, escalation points: all that fun stuff. The list goes on.
Bain scores 9 on Foundation First for a reason: their data shows the difference between leaders and laggards is almost entirely about operational readiness, not ambition. Anthropic's survey supports the same pattern: the top barriers to scaling AI agents are integration challenges, data readiness, and change management. These are not glamorous problems, but they determine whether subsequent AI investment delivers or stalls.
In action one we talked about an end-to-end process being executed by AI and overseen by a human. This action asks you to think about what your business would look like if all your processes ran this way. What would this mean for the capability, skill sets, and profiles of the people in your organisation? And think about the opportunity this presents: could your business 2x, 10x, or even 100x its productivity by deploying an orchestra of agents?
Bain estimates half of tech spending could eventually go to agents. If that is even directionally right, the question of what people do cannot be deferred. It is better to design that answer deliberately than to have it forced upon you.
These nine reports tell a fairly consistent story about direction and a divided one about pace and current impact. AI is shifting from a tool that helps people work to an operating layer that runs processes, with people supervising rather than executing. That much, they claim, is settled.
What is not settled is how quickly this is happening, how much structural change it requires, and how ready most businesses are. What reading all nine back to back makes clear is that the debate has moved on. The question is no longer whether AI will reshape how businesses operate. It is how quickly, how deeply, and who moves first. The window to build capability while competitors are still figuring it out is still open. It won't stay open long.