
Whose Problem Is AI, Anyway?

Tim Baker
Director, AlchemAI Consulting Ltd


A converted sceptic with 40 years of scar tissue

TCB Consulting Ltd | April 2026


When a mature organisation first confronts the reality of Artificial Intelligence, the reaction is often a kind of paralysis. It is not a paralysis born of ignorance, but of abundance. There are too many questions, too many stakeholders, and too many perceived risks. The result is a stalemate. Everyone agrees something must be done, but nobody can agree on who should do it, how they should do it, or what “it” even is.

This is the great stumbling block of AI adoption in grown-up companies and governments. Unlike a startup, which can build itself around a new technology from day one, a mature organisation has to graft it onto a living body of existing processes, legacy systems, and established departmental structures. The result is a corporate version of an organ transplant, and the risk of rejection is high.

The Ownership Vacuum

The first and most fundamental question is the simplest: whose problem is AI?

Ask the IT department, and they will tell you it is a technology problem. It is about data, infrastructure, and security. It belongs to them.

Ask the HR department, and they will say it is a people problem. It is about skills, training, and the future of work. It belongs to them.

Ask the Operations department, and they will argue it is a process problem. It is about efficiency, workflow, and quality control. It belongs to them.

Ask the Strategy department, and they will insist it is a competitive opportunity. It is about market positioning and future revenue streams. It belongs to them.

Ask the Finance department, and they will tell you it is a budget problem, and that nobody owns anything until they can show a clear return on investment.

They are all right, and they are all wrong. AI is not an IT, HR, or strategy problem. It is all of them at once. And because it is everyone’s problem, it becomes no one’s problem. It falls into the cracks between the silos. This is the ownership vacuum, and it is where most AI initiatives go to die.

Top-Down, Bottom-Up, or Stuck in the Middle?

This vacuum leads directly to the second question: how should AI be introduced? Should it be a top-down mandate from the board, or a bottom-up movement from innovative teams on the ground?

The top-down approach has the advantage of strategic alignment and budget authority. The board issues an edict: “We will become an AI-first company.” The problem is that such edicts often fail to connect with the reality of day-to-day work. They become abstract goals, disconnected from the specific problems that AI could actually solve. The result is a strategy that sits on a shelf, while the business carries on as before.

The bottom-up approach has the advantage of genuine enthusiasm and practical application. A small team in marketing starts using an AI tool to write copy. A data analyst in finance builds a predictive model in their spare time. The problem is that these grassroots efforts are often fragmented, inconsistent, and ungoverned. They create a “shadow AI” ecosystem, with multiple tools, no central oversight, and significant security and compliance risks.

Neither approach works on its own. The top-down mandate without bottom-up buy-in is sterile. The bottom-up experiment without top-down strategy is chaotic. The organisation ends up stuck in the middle, with a handful of isolated pilot projects that never scale, and a leadership team that wonders why their grand AI vision has failed to deliver.

The 95% Failure Rate

This is not a theoretical problem. A 2025 study from the MIT Media Lab found that a staggering 95% of corporate generative AI pilots fail to deliver any measurable return on investment. The reason, the study concluded, was not the technology. It was the organisation. The human factors — skills gaps, cultural resistance, and a failure to align the technology with business workflows — were the primary causes of failure.

Consider the case of MD Anderson Cancer Center, which spent $62 million on IBM’s Watson for Oncology. The project was abandoned after it was found that the AI was making unsafe treatment recommendations. It had been trained on hypothetical patient data, not real-world clinical cases. It was a technology in search of a problem, disconnected from the reality of the medical workflow. It was a classic, and very expensive, failure of organisational, not technical, design.

Contrast this with JPMorgan Chase’s COIN (Contract Intelligence) platform. It was designed to solve a single, well-defined problem: reviewing commercial loan agreements. It was a joint effort between the legal and technology departments. The result? A system that reduced 360,000 hours of lawyer and loan officer time per year to a matter of seconds. It succeeded because it was a focused solution to a real-world problem, with clear ownership and a measurable outcome.

The Paralysis of the Blank Page

For most mature organisations, the sheer number of these questions is overwhelming. Should we have an AI policy before we start? Who should write it? How do we manage the data? How do we ensure security and confidentiality? How do we handle the inevitable job losses? How do we compete with the criminals and the bad actors who are not burdened by any of these questions?

The list is endless. And so, faced with a blank page and a thousand questions, the most common response is to do nothing. To wait. To form a committee. To commission another report. The paralysis sets in.

This is the challenge. The questions are real. The risks are real. But the cost of inaction is rising every day. The next article in this series will explore the practical first steps that any organisation can take to break this paralysis and begin to get a grip on the tiger’s tail. But first, you have to decide whose problem it is. And the answer is: it’s yours.
