Rix Digital | Fast, affordable Marketing Mix Modelling powered by AI

AI Strategy Index 2026

Nine leading AI strategy reports, scored across six dimensions and compared side by side.

By Matt Rix, Founder · Updated

Introduction

Q1 is a time for renewing gym memberships, bemoaning Dry January and hunkering down through the dark winter nights. It is also peak season for AI predictions, year-ahead outlooks and wild speculation. This year has brought a bumper crop, with significant contributions from OpenAI, Anthropic, McKinsey (twice), Accenture, BCG, Bain, EY and IBM.

These are the organisations at the centre of what is happening in AI right now, which should make them best placed to say where it is heading and what businesses should do about it. Between them they have surveyed tens of thousands of executives and workers across every major industry.

Even so, it's hard to believe even the most die-hard AI fanatic has had the chance to read them all. This is a shame, because they do contain real value. Reading only one or two, however, could paint a distorted picture, as each has a different focus and its own inherent biases (more on that below).

This article creates a synthesised view of all nine, accompanied by a simple scoring framework to map where they overlap and where they diverge, followed by ten key takeouts from the collection. It finishes with three practical actions any business could apply today.

The AI Strategy Index 2026

To compare nine reports with different audiences, different definitions of success, and different commercial incentives, we built a simple framework. Each report is scored from 1 to 10 across six dimensions based on its claims and recommendations. These are not quality ratings. They are a map of positioning.

The six dimensions are:

  • Value. How much return does the report claim businesses are already getting from AI?
  • Urgency. How strongly does it push for immediate action?
  • Tech. Does it frame the answer as better tools, platforms, and models?
  • Org. How much does it emphasise workforce and structural transformation?
  • Data. How much emphasis on getting data, systems, and governance right before scaling?
  • Growth. Is it talking about new revenue and differentiation, or primarily about efficiency?

The first thing these charts reveal is how much agreement there is. Composite scores sit in a tight band between 4.2 and 7.5, a 33-point spread on a 100-point scale. For nine reports written by competing organisations, that is a narrow range. A generous reading is that the industry is converging on a shared understanding. A more critical one is that there is some degree of groupthink at work.
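For concreteness, a composite like this can be reproduced as a simple unweighted mean of the six dimension scores. The weighting is an assumption on my part (the framework does not specify one), and the scores below are illustrative rather than taken from any single report:

```python
from statistics import mean

# Illustrative dimension scores for a hypothetical report (1-10 each).
scores = {
    "Value": 7, "Urgency": 8, "Tech": 8,
    "Org": 6, "Data": 7, "Growth": 9,
}

composite = mean(scores.values())  # on the 1-10 scale
print(f"Composite: {composite:.1f}/10 ({composite * 10:.0f}/100)")
# → Composite: 7.5/10 (75/100)
```

The 100-point figure is just the 10-point composite scaled up, which is how the "33-point spread" between 4.2 and 7.5 is expressed in the text.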

Accenture's radar has the largest area and the highest composite score (7.5). Which makes sense. Accenture offers a broader range of services than any other author in the set, so it has a commercial reason to cover every dimension rather than weighting one over the others.

OpenAI scores 8 on Tech, as you would expect from a technology company. Anthropic scores significantly lower (5). That gap is worth noting. It could reflect a more holistic view of what businesses need in 2026, or simply greater confidence in the technology itself, freeing Anthropic to spend more of its report on adoption, agents, and organisational readiness rather than selling the capability.

The smallest radar belongs to McKinsey's State of AI, with a composite of 4.2. It is the only report that scores below 5 on four of six dimensions (Value 2, Urgency 4, Tech 3, Growth 4). Where the rest of the set is largely prescriptive — here is what AI can do, here is why you should act now — McKinsey's State of AI is descriptive. It reports what is happening across the broadest enterprise dataset in the set, with limited hyperbole. That restraint makes it the closest thing to a true outlier in this cohort, and arguably the most useful counterpoint to the unbridled optimism elsewhere.

Ten key takeouts

  1. AI is delivering value to businesses… or is it?

    This is the single biggest area of agreement and disagreement in the entire set. All nine reports accept that AI is generating measurable returns somewhere. They just disagree on how widespread those returns are. Anthropic's survey of 500+ technical leaders found 80% reporting measurable economic returns from AI agents. Accenture's data shows the strongest performers achieving 2.2x revenue growth and a 37% EBITDA lift. On the other side, McKinsey found that while 88% of companies are using AI, only 5.5% say more than 5% of their EBIT comes from it. It would take a deeper dive into data sources to establish who has the most credible claim, but the gap itself highlights a key limitation of these reports in general.

  2. Platforms say there is a lot of value being delivered by AI, consultancies say otherwise

    The numbers tell this story clearly. Consultancies give an average score of 4.4 for AI Value, whereas platforms average 8.3, nearly double. On Org Change, it flips: platforms average 3.0, consultancies average 6.0. Why? One reason might be that these findings suit the respective business models. OpenAI and Anthropic are in the business of selling AI. A report demonstrating strong adoption and clear returns is a sales asset. It goes to investors, procurement teams, the press. McKinsey and Bain make money when businesses face complex transformation challenges that need expert help. A report showing that AI adoption is messy and incomplete serves their commercial model just as well. Neither incentive makes the data wrong. But knowing this is basic due diligence for anyone reading these reports: always best taken with a pinch of salt.

  3. Agents, agents, agents. Everyone is talking about agents

    Every report in the set mentions AI agents or agentic AI. The language varies: OpenAI calls them “core infrastructure,” McKinsey describes “the agentic organisation,” Bain outlines four levels of maturity from basic information retrieval to “multi-agent constellations.” But the direction is consistent. AI is moving from something you ask questions to something you give goals. It searches, writes, analyses, and decides, coming back to a human when it hits a judgment call. Anthropic found that 57% of organisations are already running multi-step agentic workflows, with 81% planning more complex use cases in 2026. BCG found that agents already account for 17% of total AI value in 2025, expected to reach 29% by 2028. If you're talking about AI in 2026, you probably need to be talking about agentic AI.

  4. The model is no longer the bottleneck

    A year ago, the conversation was about which AI model is best. All nine reports have moved past that. The constraint now is everything around the model: whether your data is accessible, whether your systems can talk to each other, whether your processes are designed to let AI operate within them. Anthropic found that the top barriers to scaling AI agents were integration challenges, data readiness, and change management. Not model capability. Bain found the same: three out of four companies said the hardest part of AI adoption was getting people to change how they work, not getting the technology to function.

  5. The gap between leaders and laggards is widening fast

    This comes through in almost every report. BCG found that only 5% of companies globally qualify as “future-built” for AI, but that this minority is pulling further ahead. Bain found that AI leaders are improving EBITDA by 10% to 25%, while laggards are falling behind. Accenture's data shows the strongest performers achieving 2.2x revenue growth and a 37% EBITDA lift. McKinsey's numbers tell the same story from the other end: only 39% of companies report any bottom-line impact from AI at all. Is this narrative designed to create FOMO? Maybe. But are you willing to bet that it is, and risk being left behind?

  6. AI can make your business faster, but can it make it better?

    Every report acknowledges the time and cost savings AI delivers. OpenAI found workers saving 40 to 60 minutes a day. But the more important finding, repeated across the set, is that the businesses generating the strongest returns have moved beyond efficiency into new capability. EY argues that the companies treating AI as a growth engine rather than a cost-saving exercise are the ones creating distance from competitors. The value is growing fastest in areas that were not possible before, not in areas where existing work is just being done more cheaply.

  7. Humans as the conductor, not the orchestra

    No report in the set argues that humans become irrelevant. What they all describe is a change in what humans do. A good analogy is humans moving from writer to editor: no longer responsible for production and delivery, but providing oversight and handling the exceptions. McKinsey's Agentic Org paper puts it in concrete terms: a human team of two to five people can already supervise 50 to 100 specialised AI agents running an end-to-end process. They project AI systems could complete four days of work without supervision by 2027, with the length of tasks AI can handle doubling every seven months since 2019. BCG found that 43% of heavy AI adopters expect growing demand for generalists who can manage human-agent teams, while 29% expect fewer traditional entry-level roles. Few roles are likely to be free from AI disruption; the challenge for individuals, leaders, and businesses alike is how well they adapt.

  8. Organisational design is a big hill to climb

    McKinsey's Agentic Org paper scores 10 on Org Change, the highest single score in the index. OpenAI scores 2. The question is not whether AI changes how organisations work — every report accepts that it does. The question is how much structural change is needed, and how soon. McKinsey argues for fundamental restructuring: small human teams supervising large fleets of specialised agents. Bain takes the opposite approach: fix the foundation first, then let the structure follow. The gap between those two positions is where most businesses will need to place their bet.

  9. Data and infrastructure are key enablers

    Bain scores 9 on Foundation First. Their argument is simple: the difference between companies getting real returns and companies stuck in perpetual pilot mode is almost entirely about groundwork. Data accessibility. System interoperability. Governance embedded into processes. Clear ownership of who is accountable for what. This is not a new finding, but its consistency across the set is notable. Even the platform reports, which score lower on this dimension, acknowledge that poor data and disconnected systems are among the top barriers to adoption. The reports just differ on how much emphasis to place on it versus pushing ahead with the technology that already exists.

  10. Speed is the competitive advantage

    Seven of nine reports score 6 or above on Urgency. EY, focused specifically on the technology sector, puts it most bluntly: the pace of AI innovation now makes speed of execution the single most important competitive variable. Accenture and BCG push hard on the same point, with BCG projecting that AI agent value will nearly double between 2025 and 2028. McKinsey's Agentic Org paper notes that the length of tasks AI can handle has been doubling every four months since 2024. Bain estimates that as much as half of overall technology spending could eventually go to AI agents. The collective view is that the window for building AI capability while competitors are still figuring it out is closing fast.
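The shift described in takeout 3, from asking questions to giving goals, can be sketched as a minimal loop: the agent works through steps on its own and hands judgment calls back to a human. Everything below is a hypothetical illustration, not any vendor's API; the step names and escalation rule are invented for the example.

```python
# Minimal sketch of a goal-directed agent loop with a human escalation
# point. Step names and the escalation rule are hypothetical.

def run_agent(goal, steps, needs_human):
    log = []
    for step in steps:
        if needs_human(step):
            # Judgment call: hand back to a human rather than act.
            log.append(f"escalated to human: {step}")
        else:
            log.append(f"agent completed: {step}")
    return log

log = run_agent(
    goal="produce a market brief",
    steps=["search sources", "draft summary", "approve for publication"],
    needs_human=lambda step: step.startswith("approve"),
)
print("\n".join(log))
```

The point of the sketch is the division of labour: the agent executes by default, and the human owns only the steps the escalation rule flags.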

Three actions every business should take

1. Identify a workflow and redesign it around AI

While many of us say that “every day is different,” if we are honest with ourselves there is probably more repetition than we let on. Challenge yourself to take a zoomed-out view of your working day, week, or month and identify the end-to-end process that you or your team run most often or that takes up the most time. Strip that process back to its core components: its inputs, its junctures (the decision points), and its outputs. Then rebuild it so it can be executed by AI and overseen by a human.

This is the single action that most consistently separates the businesses in McKinsey's 39% (those reporting bottom-line impact) from the other 61%. The difference is not which tools they bought. It is whether they changed how the work gets done. The early wins cluster in predictable places: data-heavy, high-volume, repeatable work. Software delivery. Knowledge management. Customer onboarding. Internal process automation.
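One way to force that decomposition is to write the process down as data before touching any tools. The workflow and field names below are hypothetical, chosen only to show the idea: separate what AI executes from the junctures a human must own.

```python
from dataclasses import dataclass

# Hypothetical decomposition of one end-to-end process into the
# components named above: inputs, junctures, outputs.

@dataclass
class Workflow:
    name: str
    inputs: list      # what the process consumes
    junctures: list   # decision points that stay with a human
    outputs: list     # what the process produces

onboarding = Workflow(
    name="customer onboarding",
    inputs=["signed contract", "customer details"],
    junctures=["credit-check exception", "non-standard terms"],
    outputs=["provisioned account", "welcome pack"],
)

# AI runs the flow end to end; humans are pulled in only at junctures.
print(f"{onboarding.name}: {len(onboarding.junctures)} human checkpoints")
# → customer onboarding: 2 human checkpoints
```

If the junctures list is long, that is usually a sign the process has not yet been redesigned around AI, merely documented.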

2. Make sure you have good foundations before you build your house

You don't want to hear it. I don't want to have to say it. But I'm afraid the quantity, quality, and interconnectivity of your data is going to be a key determinant of whether you can deliver value from AI in 2026. This means data quality governance, approval flows, ownership rights, process maps, escalation points — all that fun stuff. The list goes on.

Bain scores 9 on Foundation First for a reason: their data shows the difference between leaders and laggards is almost entirely about operational readiness, not ambition. Anthropic's survey supports the same pattern: the top barriers to scaling AI agents are integration challenges, data readiness, and change management. These are not glamorous problems, but they determine whether subsequent AI investment delivers or stalls.

3. Think seriously about what the workforce of the future looks like

In action one we talked about an end-to-end process being executed by AI and overseen by a human. This action asks you to think about what your business would look like if all your processes ran this way. What would it mean for the capability, skill sets, and profiles of the people in your organisation? And think about the opportunity this presents: could your business 2x, 10x, or 100x productivity by deploying an orchestra of agents?

Bain estimates half of tech spending could eventually go to agents. If that is even directionally right, the question of what people do cannot be deferred. It is better to design that answer deliberately than to have it forced upon you.

Closing remarks

These nine reports tell a fairly consistent story about direction and a divided one about pace and current impact. AI is shifting from a tool that helps people work to an operating layer that runs processes, with people supervising rather than executing. That much, they claim, is settled.

What is not settled is how quickly this is happening, how much structural change it requires, and how ready most businesses are. What reading all nine back to back makes clear is that the debate has moved on. The question is no longer whether AI will reshape how businesses operate. It is how quickly, how deeply, and who moves first. The window to build capability while competitors are still figuring it out is still open. It won't stay open long.

Matt Rix is a freelance AI, Digital & Data consultant based in the UK. He is the founder of Rix Digital, a consultancy helping brands deliver their commercial agendas using AI with a specialisation in marketing measurement & MMM. Visit rix-digital.com for more info.

Glossary

AI agent. An AI system that is given a goal and works toward it autonomously, making decisions and taking actions across tools and systems without needing to be prompted at every step. The key difference from a standard AI assistant is initiative: an agent acts, not just responds.
Agentic workflow. A business process rebuilt around AI agents handling execution, with humans overseeing the process rather than performing it. An agentic sales workflow, for example, might have an agent researching prospects, drafting outreach, logging responses, and flagging the ones that need a human conversation.
Multi-agent system. Multiple AI agents working in sequence or in parallel, each handling a different part of a process and handing off to the next. One agent researches. One writes. One reviews. One sends. Humans sit above the system, not inside it.
Foundation / infrastructure. The technical and operational layer that AI runs on: clean, accessible data; connected systems; clear governance; defined ownership. Consistently the difference between businesses getting results and businesses stuck in pilots.
Operating model. How an organisation structures itself to deliver value. Includes who does what, how decisions are made, how information flows, and what people are accountable for. An operating model change is not a technology change. It is a structural change to how the business works.
Governance (AI). The rules, processes, and accountability structures that ensure AI systems operate within acceptable boundaries. In practice: who approves AI decisions, what gets logged, what triggers a human review, and who is responsible when something goes wrong. Effective AI governance is embedded in workflows, not managed by committee.
Platform strategy (agentic). How enterprise software platforms need to evolve so that AI agents can operate across them, rather than being siloed within individual applications. The strategic implication: your choice of enterprise platforms increasingly determines the ceiling on what AI can do inside your business.
AI-native. A business or product designed from the ground up around AI as a core operational component, rather than AI added to an existing model. AI-native firms in BCG's data show materially higher revenue per employee than peers using AI as an add-on.
Foundation model. The large underlying AI model (GPT-4, Claude, Gemini, etc.) that specific AI applications are built on top of. Foundation model quality matters less than it did twelve months ago. What matters more is what is built on top of it.
Change management (AI). The organisational work of helping people adapt to AI-driven changes in their roles, processes, and ways of working. Cited by Anthropic as one of the top three barriers to AI adoption, alongside integration and data quality. Consistently underestimated.
