Stop Writing Requirements on Stone Tablets

Every large organisation now has an “AI strategy.”

Most still have a 1990s product development process.

That gap is where value goes to die – and where customers end up waiting too long for products that don’t quite do what they needed, built by teams that spent months translating documents nobody fully agreed on.

We don’t iterate requirements. We engrave them.

A product manager captures a customer need. It becomes a ticket. The ticket becomes a page. The page gets handed to a technology team who interpret it, build something, and return it for testing – months later, significantly over budget, and subtly wrong in ways nobody can quite explain.

Stone tablets. Carved once. Nearly impossible to change without starting again.

This isn’t a criticism of the people. Large enterprises have talented business analysts, product managers, and delivery teams. The problem is the methodology – and the assumption, rarely examined, that requirements are a document-creation exercise rather than a living, iterative, testable process. The customer whose need initiated the whole exercise sits at the far end of a chain that was never designed to preserve intent.


Two Speeds, One Organisation

It would be easy to frame this as a battle between old and new – traditional teams clinging to document-centric processes, modern teams embracing AI-assisted delivery. The reality is more nuanced, and more interesting.

Most large organisations today are running both in parallel. Traditional teams – often managing complex legacy products, regulated workflows, or deeply embedded enterprise tooling – operate with structured, document-driven requirements processes that have served them adequately for years. Modern teams, typically building newer digital products or customer-facing capabilities, are beginning to experiment with AI-assisted approaches and discovering that the leverage isn’t where they expected.

The difference in outcomes is becoming hard to ignore.

Traditional teams tend to concentrate AI at the back end of the process – code generation, automated testing, deployment tooling. The productivity gains are real but bounded. The requirements arrive as they always have, carrying the same ambiguities, the same gaps, the same distance between what the business meant and what the technology team heard.

Modern teams are discovering that AI is most valuable when it’s present across the entire value chain – not just in the engine room, but in the hands of those defining the problem in the first place. When AI assists at the planning stage, the portfolio level, and the requirements stage, the downstream benefits compound. Delivery teams spend their energy on the customer rather than on resolving ambiguity. Leadership can focus on strategy rather than managing the consequences of misaligned execution.

This is not a simple transition. Building software at enterprise scale – with multiple stakeholders, governance requirements, legacy dependencies, and genuine regulatory constraints – is genuinely hard. Anyone claiming otherwise hasn’t done it. The point isn’t that modern approaches make this easy. It’s that they address the difficulty at the right point in the process, rather than absorbing it silently in delivery.


The Leverage Point Most Organisations Are Missing

The data is consistent: organisations embedding AI across the entire development lifecycle see double-digit productivity gains, materially higher quality, and faster time to market. Those confining their experiments to coding do not. Top-performing organisations are six to seven times more likely than peers to embed AI across design, requirements, coding, testing, and deployment – and the firms seeing the largest customer experience improvements are precisely the ones that redesigned how work flows end to end.

McKinsey, “Unlocking the Value of AI in Software Development,” November 2025; “How an AI-Enabled Software Product Development Life Cycle Will Fuel Innovation,” February 2025

The leverage point most organisations are missing is the front end of the process.

When a product manager or business analyst works with an AI tool to refine requirements, the AI asks clarifying questions, surfaces edge cases, and flags internal contradictions – before a single line of code is written. The conversation itself becomes the refinement process. What previously took multiple rounds of back-and-forth gets compressed dramatically. More importantly, the customer need that initiated the work stays legible throughout, rather than getting lost in successive layers of interpretation.

The discipline of Behaviour-Driven Development (BDD) has demonstrated the underlying principle for years – and it’s worth being precise about what this means in practice, because it’s often misunderstood. This is not about asking business analysts to learn to code. It’s about changing the output of requirements work – from prose documents to structured, testable specifications. A requirement written as:

  Given a customer has insufficient funds,
  When they attempt a payment,
  Then the transaction is declined and a notification is sent within 30 seconds

– is simultaneously readable by a business stakeholder, usable by a developer, and directly consumable by a test automation framework. No translation layer. No interpretation gap. AI tools can now help teams co-author these specifications through conversation, surfacing the edge cases and constraints that prose documents routinely miss. Practitioners adopting this approach consistently report significant reductions in late-stage defect discovery and the costly rework that follows.
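
To make the "no translation layer" point concrete, here is a minimal sketch of how that specification maps one-to-one onto an automated test. Everything below is illustrative: `Account`, `PaymentResult`, and `attempt_payment` are hypothetical names standing in for a real payments module, and in practice a framework such as Cucumber or pytest-bdd would wire the Given/When/Then text to step definitions rather than plain comments.

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: float

@dataclass
class PaymentResult:
    declined: bool
    notification_sent: bool
    notification_delay_s: float

def attempt_payment(account: Account, amount: float) -> PaymentResult:
    """Toy payment handler: decline and notify when funds are insufficient."""
    if account.balance < amount:
        # Decline the transaction and record that a notification went out.
        return PaymentResult(declined=True, notification_sent=True,
                             notification_delay_s=1.0)
    account.balance -= amount
    return PaymentResult(declined=False, notification_sent=False,
                         notification_delay_s=0.0)

def test_insufficient_funds_declines_and_notifies():
    # Given a customer has insufficient funds
    account = Account(balance=10.00)
    # When they attempt a payment
    result = attempt_payment(account, amount=25.00)
    # Then the transaction is declined and a notification is sent within 30 seconds
    assert result.declined
    assert result.notification_sent
    assert result.notification_delay_s <= 30

test_insufficient_funds_declines_and_notifies()
```

The structure of the test is the structure of the requirement: a business stakeholder can read the three comments, a developer maintains the three statements beneath them, and the test framework executes both as one artefact.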

Research from Boehm, IBM, and Hewlett-Packard consistently found that defects introduced at the requirements stage cost materially more to fix downstream – the multiplier varying by project size and complexity, but the direction never changing. AI doesn’t change that arithmetic. It changes how thoroughly ambiguities are surfaced at the front end, where the cost of correction is lowest and the benefit to the customer is highest.

Boehm & Papaccio, IEEE Transactions on Software Engineering, 1988; Scaled Agile Framework, “Behaviour-Driven Development,” November 2025
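
As a purely illustrative sketch of why that direction matters, consider ballpark stage multipliers. The figures below are hypothetical round numbers chosen for illustration – they are not taken from the cited studies, and real multipliers vary widely by project size and complexity – but any plausible set of values produces the same conclusion:

```python
# Illustrative only: hypothetical relative costs of fixing a
# requirements-stage defect, by the stage at which it is found.
# These multipliers are invented round numbers, not measured data.
STAGE_MULTIPLIER = {
    "requirements": 1,
    "design": 5,
    "coding": 10,
    "testing": 20,
    "production": 100,
}

def fix_cost(base_cost: float, stage_found: str) -> float:
    """Cost of fixing a defect, scaled by where in the lifecycle it surfaces."""
    return base_cost * STAGE_MULTIPLIER[stage_found]

print(fix_cost(1.0, "requirements"))  # prints 1.0
print(fix_cost(1.0, "production"))    # prints 100.0
```

Whatever the exact numbers, the monotonic increase is the point: every ambiguity surfaced during requirements work is a defect bought at the cheapest possible price.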


When Methodology Becomes Competitive Advantage

Klarna is an instructive case – not as a template to copy, but as an illustration of what a different question produces. Their AI assistant handled the equivalent work of 700 full-time agents within its first month, drove an estimated $40 million in profit improvement in 2024, and delivered measurably better customer outcomes: resolution times dropped from eleven minutes to under two, with customer satisfaction scores matching human agents.

OpenAI, “Klarna’s AI Assistant,” 2024

The less-discussed dimension is the question they started with.

It wasn’t “how do we make our developers faster?” It was “what would this look like if we redesigned how work flows, with the customer at the centre?” Ninety-six percent of employees use AI daily – not because it was mandated, but because the organisation redesigned around it.

Many organisations are asking their delivery teams to use AI. The organisations seeing the largest returns are asking a harder question: how does AI change what we should be doing at every stage – from the moment we identify a customer need to the moment we measure whether we solved it?

The answer to that question looks different in a heavily regulated financial institution than it does in a Swedish fintech. But the question itself is universal.


The Real Barriers – Named Honestly

Any serious discussion of this topic has to acknowledge why the transition is difficult, because the barriers are structural, not just cultural.

Governance and compliance constraints are real. In regulated industries, changes to how requirements are documented, tested, and approved touch audit trails, change management frameworks, and risk sign-off processes. Introducing AI-assisted specification work involves legitimate questions about model governance, output auditability, and data handling. These deserve rigorous answers, not dismissal.

Capability frameworks haven’t kept up. Most business analyst and product management roles in large organisations are still assessed on documentation throughput – volume of tickets, completeness of acceptance criteria, formatted user stories. Until performance models reward specification fidelity and customer outcome alignment over document volume, the incentive to change practice doesn’t exist.

Existing tooling reinforces existing behaviour. Document-centric project management tools are excellent at what they were designed for. They were not designed for iterative, AI-assisted specification work. Changing practice without changing – or extending – tooling creates friction that erodes adoption.

The skills gap is genuine. AI collaboration literacy – knowing how to prompt effectively, validate outputs, and use AI as a thinking partner rather than a search engine – is not yet a standard capability in most business or product teams. Building it requires deliberate investment, not just tool access.

None of this argues against change. It argues for investing in change properly – with clear governance frameworks, updated capability models, tooling that supports the new methodology, and training that builds genuine competence rather than surface familiarity.

The cost of not changing is also structural. Slower product delivery. Higher defect rates post-launch. Longer resolution cycles when things go wrong. And customers who quietly shift their engagement toward organisations that ship faster, break less, and respond more quickly when they don’t.


Getting the Problem Right – So You Can Focus on the Customer

There is a simple reframe that clarifies why this matters beyond internal efficiency.

When requirements are ambiguous, teams spend their time managing themselves – resolving disputes between what was meant and what was built, reworking features that passed testing but missed the point, explaining to stakeholders why the product behaves differently from what was specified. The customer waits while the organisation processes its own complexity.

When requirements are precise – when AI has helped the team surface edge cases, resolve contradictions, and structure intent before a line of code is written – teams spend their time on the customer instead. New features that genuinely address unmet needs. Faster iteration on feedback. Less time in the gap between intent and execution, more time creating value at the edge where the organisation meets the people it serves.

This is what AI across the value chain actually unlocks.

Not just faster code. Not just cheaper testing. A fundamentally shorter path from customer insight to customer outcome – with fewer detours through organisational complexity along the way.

That’s the prize. And it’s available to large enterprises as much as to digital natives – if the methodology changes to meet it.


The Uncomfortable Arithmetic

McKinsey’s 2025 research is unambiguous: only 39% of organisations report any measurable enterprise-level impact from their AI investments. The firms seeing real returns are redesigning workflows. The rest are running experiments.

McKinsey, “The State of AI in 2025,” November 2025

The organisations that invest in AI as a genuine end-to-end methodology shift – governing it properly, building the capability, and applying it from the planning stage through to delivery – will find they can move with the responsiveness of a digital native while retaining the trust and scale of an established institution. Customers will notice. They always do.

The organisations that concentrate AI in the engine room while leaving the front end of the process unchanged will keep delivering products that almost do what customers needed.

AI will not save a broken product methodology.

But it will ruthlessly expose one.


This article reflects my personal views on product development methodology and AI adoption in large organisations.

What’s your experience? Are you seeing genuine end-to-end AI adoption in your organisation’s product development process – or is it still concentrated at the delivery layer? I’d be interested in your perspective in the comments below.


References

  1. McKinsey, “Unlocking the Value of AI in Software Development”, November 2025
  2. McKinsey, “The State of AI in 2025: Agents, Innovation, and Transformation”, November 2025
  3. McKinsey, “How an AI-Enabled Software Product Development Life Cycle Will Fuel Innovation”, February 2025
  4. Boehm, B. & Papaccio, P., “Understanding and Controlling Software Costs”, IEEE Transactions on Software Engineering, October 1988
  5. OpenAI, “Klarna’s AI Assistant”, February 2024
  6. Scaled Agile Framework, “Behaviour-Driven Development”, framework.scaledagile.com, November 2025
  7. AI Magazine, “Klarna: Using AI for Growth, Efficiency & Executive Cloning”, October 2025
  8. BusinessToday, “Klarna CEO Embraces AI Vibe Coding to Speed Up Product Development”, September 2025
