AI ADOPTION FOR BI AND ANALYTICS

Move from curiosity to a working pilot — without wasting six months on the wrong use case.

  • You lead analytics, product, or strategy at a growth-stage company and know AI should be changing how your team works — but haven't found the right use case yet. Or your leadership is asking why your BI function isn't using AI, and you need a credible plan that isn't a vendor's pitch deck.

  • Conversational analytics (plain-language queries over your warehouse), automated insight generation, churn and retention modelling, sales pipeline intelligence, and dynamic customer segmentation. Not all will be right for you — the discovery phase finds the ones that are.

  • A 6-week sprint in three phases: use case discovery (weeks 1–2), evaluation and tool selection (week 3), and a focused pilot build with readout (weeks 4–6). You end with a working proof of concept, an honest assessment of what production would take, and a 6-month roadmap.

  • They start with the technology — buy a platform, commit to a vendor, tell the team to "explore AI" — without a clear problem to solve. Six months later they have a POC nobody uses. The right starting point is the business question: where are decisions slow, expensive, or low-quality because of how your analytics works?

  • A functional data warehouse (Snowflake, BigQuery, Databricks, or equivalent), at least one analyst or data engineer, and reasonably clean data for the use case you want to pilot. If you're not there yet, a Data & Analytics Audit is the better starting point.

WHO IS THIS FOR

You lead analytics, product, or strategy at a growth-stage company and you know AI should be changing how your team works. You've probably run an LLM demo or two. Maybe you've bought a tool. But you haven't figured out where AI actually creates business value in your specific context — and you don't want to burn six months and a meaningful budget finding out the hard way.

Alternatively: your leadership team is asking why your BI and analytics function isn't using AI yet, and you need a credible plan, not a vendor's pitch deck.

This engagement is for both situations.

USE CASES WE SEE MOST OFTEN

To make this concrete — these are the AI applications in BI and analytics that I see creating real value for companies at your stage:

Conversational analytics. Letting business stakeholders query your data warehouse in plain language, without writing SQL or waiting for an analyst. Reduces ad-hoc request volume significantly and improves analyst leverage.

Automated insight generation. Replacing the weekly analyst narrative with AI-generated commentary on what changed, why it likely changed, and what to watch. Frees analysts for higher-value work.

Churn and retention modelling. Combining LLMs with traditional ML models to identify early behavioural signals of churn, particularly in B2B contexts where the signal is subtle and the cost of getting it wrong is high.

Sales analytics and pipeline intelligence. Connecting CRM data to product usage and external signals to surface which accounts need attention and why — feeding directly back into sales team workflows.

Customer segmentation at scale. Moving from static cohorts to dynamic, behaviour-driven segments that update automatically and feed personalisation and commercial targeting.

Not all of these will be right for your company. The use case discovery phase exists precisely to find the ones that are.

WHAT THIS ENGAGEMENT DELIVERS

A 6-week sprint structured in three phases:

Phase 1 — Use case discovery (Weeks 1–2). I interview your analytics team, your business stakeholders, and your data consumers. I'm looking for the 3–5 places where AI could meaningfully change how decisions get made: not where it's technically interesting, but where it creates business value. At the end of week 2, you have a shortlist with a clear rationale for each.

Phase 2 — Evaluation and selection (Week 3). We size the value of each use case, assess the data readiness required, and select the one to pilot. I evaluate relevant tools and vendors against your specific context — your stack, your team's capability, your budget. You get a recommendation with honest trade-offs, not a feature comparison matrix.

Phase 3 — Pilot build and readout (Weeks 4–6). We build a focused, scoped pilot of the selected use case. Not a full production deployment — a working proof of concept that is good enough to evaluate whether the use case is real and whether the approach is right. At the end of week 6, you have a working POC, a clear assessment of what it would take to get to production, and a roadmap for the next 6 months.

WHAT MOST COMPANIES GET WRONG

They start with the technology. They buy a platform, or commit to a vendor, or tell their team to "explore AI" without a clear problem to solve. Six months later they have a POC nobody uses and a data team that's exhausted and sceptical.

The right starting point is the business question — specifically, which decisions in your company are currently slow, expensive, or low-quality because of how your analytics works? AI is useful precisely where it can change that. Everything else is noise.

WHAT YOU NEED TO HAVE IN PLACE

AI adoption on top of a broken data foundation doesn't work. Before this engagement makes sense, you should have: a reasonably functional data warehouse or lake (Snowflake, BigQuery, Databricks, or equivalent), at least one analyst or data engineer on your team, and clean, documented data for the use case you want to pilot. If you're not there yet, a Data & Analytics Audit is likely the better starting point.

If you're unsure, tell me on the intro call and I'll give you an honest read.

FREQUENTLY ASKED QUESTIONS

Do we need to have a specific AI tool already? No. Tool and vendor selection is part of the engagement. I'm tool-agnostic and I'll recommend based on your stack and context, not on any partnership arrangement.

What if our data isn't clean enough for AI? It's a fair concern and it stops more pilots than anything else. We'll assess data readiness as part of Phase 1. If the data isn't ready for the use case you want, I'll tell you what needs to be fixed first — and that might be a shorter, separate piece of work before we proceed.

We have internal data scientists. Is this still useful? Yes, often more so. The most common gap isn't technical capability — it's knowing which use case to prioritise and having someone senior enough to drive adoption with business stakeholders. I can work alongside your technical team and focus on the product and business layer.

How is this different from what an AI vendor would show us? A vendor shows you what their product can do. I start with your business problems and work backwards to the right tool. I have no financial relationship with any vendor and no incentive to recommend anything other than what's right for you.

Can this be done fully remotely? Yes. Stakeholder interviews and readouts over video call, async communication in between. I've run regional programmes across SEA markets without being on-site for every session.

What does "working POC" mean exactly? A proof of concept that runs on your actual data, answers a real business question, and is good enough to show your leadership team and make a production decision from. It's not polished enough to hand to end users, but it's far more than a demo on synthetic data.

Let's find the use case that's actually worth building.

A free 30-minute call. Tell me what your team has tried and what you're hoping to do, and I'll tell you whether this sprint is the right fit.