1 August 2025

How to Start With AI: A Practical Guide for the Perplexed

Tim Baker
Director, AlchemAI Consulting Ltd


A converted sceptic with 40 years of scar tissue

TCB Consulting Ltd | May 2026


In the last article, we diagnosed the state of paralysis that grips most mature organisations when they confront AI. Faced with a dozen valid questions about ownership, strategy, and risk, the safest-seeming option is to do nothing. To wait for clarity. To form a committee.

This is an understandable reaction. It is also, now, a dangerous one. The cost of inaction is rising faster than the risks of well-managed action. The question is no longer whether to start, but how. Not with a grand, multi-year transformation programme, but with a series of deliberate, intelligent steps that break the paralysis and build momentum.

This is not a definitive manual. It is a guide for the perplexed, designed to replace the blank page of endless questions with a practical framework for getting started.

1. Start with the Problem, Not the Technology

The single biggest mistake mature organisations make is to treat AI as a solution in search of a problem. A new, shiny piece of technology arrives, and the hunt begins for something to do with it. This is backwards. It is the corporate equivalent of buying a sledgehammer and then wandering around the office looking for a nail.

The right place to start is with a persistent, well-understood business problem. Ask the simple question: “If we could solve one recurring bottleneck, service gap, or source of inefficiency, what would it be?”

For a government department, that might be the backlog of citizen inquiries or the detection of fraudulent benefit claims. For a private enterprise, it might be predictive maintenance on key assets or the cost of handling routine customer service queries. The key is to anchor the first AI project in a real-world, measurable goal.

Before you even consider AI, it is worth asking if the problem can be solved with simpler automation or better data analytics. Sometimes, the answer is a better spreadsheet, not a neural network. AI should be the right tool for the job, not the default answer to every question.

2. Do the “Unsexy” Work: The Readiness Assessment

Before you can build, you must survey the foundations. For AI, this means a frank assessment of your organisation’s readiness. This is the unsexy, unglamorous work that makes everything else possible.

Data Maturity: This is the elephant in every boardroom. Most data in established organisations is a mess. It is siloed in legacy systems, duplicated across spreadsheets, or simply ROT (Redundant, Obsolete, and Trivial). An honest data audit is a non-negotiable prerequisite. You must know what data you have, where it is, how clean it is, and whether you have the right to use it. For government, this is doubly important, as using citizen data in new ways can breach public trust and privacy standards before a single line of code is written.
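To make the ROT idea concrete, here is a minimal sketch of one small corner of a data audit: profiling a tabular extract for duplicate and incomplete records. The column names and the tiny CSV sample are illustrative assumptions; a real audit would also cover lineage, access rights, and retention policy.

```python
import csv
import io

def profile_rows(rows):
    """Summarise duplicate and incomplete records in a list of dicts."""
    seen = set()
    duplicates = 0
    incomplete = 0
    for row in rows:
        key = tuple(sorted(row.items()))
        if key in seen:
            duplicates += 1  # redundant: exact repeat of an earlier record
        seen.add(key)
        if any(v is None or str(v).strip() == "" for v in row.values()):
            incomplete += 1  # unusable as-is: one or more fields missing
    return {"total": len(rows), "duplicates": duplicates, "incomplete": incomplete}

# Hypothetical extract, as it might come out of a legacy export.
sample = io.StringIO(
    "id,name,email\n"
    "1,Alice,alice@example.com\n"
    "1,Alice,alice@example.com\n"  # duplicate row
    "2,Bob,\n"                     # missing email
)
report = profile_rows(list(csv.DictReader(sample)))
print(report)  # {'total': 3, 'duplicates': 1, 'incomplete': 1}
```

Even a crude report like this turns "our data is a mess" from an anecdote into a number you can track.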

Infrastructure & Security: Can your current systems handle the load? AI can be computationally expensive. You need to know if your infrastructure is ready, whether it is on-premise or in the cloud. For government and regulated industries in Jersey, this also means ensuring that any solution complies with our island’s data sovereignty and security requirements.

Skills & Culture: Do you have the people to do this? A successful AI project requires more than just data scientists. It needs a multi-disciplinary team of IT, legal, compliance, and subject matter experts from the business who understand the problem you are trying to solve. More importantly, is the wider workforce ready for the change?

3. The Execution Framework: Scan > Pilot > Scale

Resist the temptation of a “big bang” rollout. The most successful AI adoptions follow an iterative, three-step process.

Scan: Identify a handful of high-impact, low-risk use cases where a successful pilot could deliver genuine value. A good first project is one that automates a tedious internal process, freeing up skilled staff for higher-value work. The Commonwealth of Pennsylvania’s 2025 pilot of ChatGPT for government employees is a good example. They gave the tool to 175 staff and studied how they used it for day-to-day tasks like creating documentation and improving communications. It was a low-risk way to understand the real-world potential.

Pilot: Build a prototype in a controlled, sandboxed environment. Keep it short – an 8-to-12-week timebox is a good discipline. The goal is not just to prove the technology works, but to see how it integrates into a real human workflow. If it does not make someone’s job easier, it has failed, no matter how clever the algorithm.
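One way to keep a pilot honest is to write the "definition of done" down as an explicit gate before week one. The metric names and thresholds below are illustrative assumptions, not a standard; the point is that every agreed measure must clear its floor, or the pilot has not passed.

```python
def pilot_passes(metrics, thresholds):
    """Return (passed, failures): every agreed metric must clear its floor."""
    failures = [name for name, floor in thresholds.items()
                if metrics.get(name, 0) < floor]
    return (not failures, failures)

# Hypothetical gates a pilot team might agree before starting.
thresholds = {
    "tasks_automated_pct": 30,  # share of the tedious task handled end-to-end
    "user_satisfaction": 4.0,   # staff survey, 1-5 scale
    "accuracy_pct": 95,         # agreement with the human baseline
}
metrics = {"tasks_automated_pct": 42, "user_satisfaction": 4.3, "accuracy_pct": 91}
passed, failures = pilot_passes(metrics, thresholds)
print(passed, failures)  # False ['accuracy_pct']
```

Note that in this example the clever part worked and the pilot still failed, on accuracy. That is the discipline doing its job.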

Scale: Only when a pilot has demonstrated clear, measurable value should you consider scaling it. This is the point where it moves from a project to a product, and it requires robust monitoring and governance to ensure it continues to perform as expected over time.

4. Governance: The Grown-Up Differentiator

For a startup, governance is often an afterthought. For a mature organisation, it must be a forethought. This is your key differentiator. It is how you build trust and manage risk.

Human-in-the-Loop: For any high-stakes decision – be it hiring, lending, or assessing a citizen’s eligibility for a service – the AI should only ever be an advisor. It provides the recommendation; a human makes the final decision. This principle is non-negotiable.
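The human-in-the-loop principle can be sketched in a few lines: the model only ever produces a recommendation, and nothing becomes a decision until a named person records one. The data structures and field names here are illustrative, not a product API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Recommendation:
    case_id: str
    suggestion: str                     # what the model proposes
    confidence: float                   # model's score, shown to the reviewer
    decided_by: Optional[str] = None
    final_decision: Optional[str] = None

    def decide(self, reviewer: str, decision: str) -> str:
        """A human makes the final call; the record shows who decided what."""
        self.decided_by = reviewer
        self.final_decision = decision
        return decision

rec = Recommendation("case-0042", suggestion="approve", confidence=0.83)
# The system must never act on rec.suggestion directly; only on a decision.
outcome = rec.decide(reviewer="j.smith", decision="refer for interview")
print(outcome, rec.decided_by)  # refer for interview j.smith
```

The useful by-product is the audit trail: every high-stakes outcome carries the name of the human who made it.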

Transparency: You must be able to explain how your AI systems work. Citizens have a right to know when and how an automated decision has been made about them. The UK’s Algorithmic Transparency Recording Standard is a useful model here. For regulated businesses, explainability is not just good practice; it is a compliance necessity.

Ethics & Bias: An AI is only as good as the data it is trained on. If your historical data reflects historical biases, your AI will learn and amplify them. Establishing a small, multi-disciplinary ethics committee to review training data and proposed use cases is a critical step before any system goes live.
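The simplest check an ethics committee might run before any training happens is to compare historical outcome rates across groups. The group and field names below are hypothetical, and a real review would use proper fairness metrics, but even this crude comparison can surface a problem early.

```python
from collections import defaultdict

def outcome_rates(records, group_field, outcome_field):
    """Rate of positive outcomes per group in the historical data."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        g = r[group_field]
        totals[g] += 1
        positives[g] += 1 if r[outcome_field] else 0
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical approval history.
history = [
    {"region": "north", "approved": True},
    {"region": "north", "approved": True},
    {"region": "south", "approved": True},
    {"region": "south", "approved": False},
]
print(outcome_rates(history, "region", "approved"))
# {'north': 1.0, 'south': 0.5} -- a gap this size would warrant investigation
```

A disparity in the training data does not prove unfairness on its own, but it is exactly the kind of finding that should reach the committee before a system goes live, not after.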

The First Five Questions

Breaking the paralysis of AI does not require having all the answers. It requires knowing the right first questions to ask. On Monday morning, instead of commissioning another report, you could gather the right people in a room and ask them this:

  1. What is the single most tedious, repetitive, or low-value task our skilled people are forced to do?
  2. What data would we need to automate it, and where is that data right now?
  3. Who are the three people – from IT, the business, and legal – who would need to be in the room to approve a 12-week pilot?
  4. What does success for that pilot look like in measurable terms?
  5. What is the single biggest risk, and how could we mitigate it with a human-in-the-loop?

Answering those five questions is the start. It is how you stop talking about AI and start doing it. It is how you get a grip on the tiger’s tail.
