Article
    by Tanel Tensing, Group Head of Engineering

    Inside our shift to AI‑first delivery

    Our shift to AI-first delivery didn’t come from a new toolset, but from how we supported teams in practice. A small group of specialists works directly with delivery teams, helping them apply AI-first methods in real projects and making the approach scalable across the organization.

Service: Data and AI Technology and Engineering

Industry: Enterprise

    AI tools are now part of most engineering teams, yet the expected productivity gains often fail to appear. Pilots show promise, but day‑to‑day adoption fades as people return to familiar ways of working. Teams debate whether AI‑first methods apply to their systems, and leaders struggle to find practical direction. 

    This is the gap we set out to close. Software is at the core of our business, so changes in how it gets built matter directly to how we operate. We couldn’t afford to wait for best practices to emerge. We had to figure it out ourselves, on real projects, in our specific context.

     

    Why early wins and failures mattered

     

    The journey began while preparing a proposal for a medical portal for a European healthcare association. It was a typical, feature‑rich application estimated at roughly 300 person‑days of work. Instead of planning a traditional delivery, we asked whether the work could be done in a completely different way.

    We assembled a small group of engineers, gave them two weeks, and removed constraints.

    On day one, they chose their AI stack.

On day two, they had generated part of the solution but had already slipped back into manual coding, simply because it was how they had always worked.

    We set one rule: old methods were off-limits. They had to figure out how to do it AI‑first.

    By the end of two weeks, they had built 80% of the application. The remaining 20% required decisions only the client could make. 

    After that first success, we tried the same approach on a second application with different tools and failed. The output was inconsistent and technically sloppy. A third attempt produced something acceptable but not exceptional.

Those experiments led to two insights.

    The first success relied on a narrow Google stack that is not practical for most real‑world projects. It worked because the stack had strong guardrails that kept the AI on track. If Google could design guardrails for its ecosystem, the pace of innovation suggested other stacks would soon follow. 

    We also learned that training alone is not enough. People revert to familiar methods quickly. To change long‑established habits, teams need someone working beside them, guiding them in their daily work. 

    Building the AI-First Delivery Accelerator

    These lessons made it clear that we needed a structured way to support teams as they adopted new methods. That realization marked the shift we needed to lead: a move from traditional delivery toward a consistent, AI‑first way of working across projects and regions.  

    The result was the AI‑First Delivery Accelerator. At its center is a global team with practical experience in AI‑first delivery, guiding the work in a continuous loop: 

    • They research emerging AI tools and methods.

    • They work alongside delivery teams to put those approaches into practice on real projects.

    • They turn what works into reusable tools and guidance that any team can start from.

    Every sprint moves the methodology forward. Every engagement adapts it to the team’s context. And every result feeds back into the next iteration. 

    Challenges the accelerator solves

Most teams start out open-minded but sceptical. They believe their codebases, compliance rules, or security constraints make AI‑first methods unrealistic. Many have tried on their own and run into limits, so they assume the same will happen again.

    Another challenge is the sheer pace of change: new tools, models, and techniques appear daily. Expecting every individual to keep up while delivering production work is unrealistic. 

This is why a dedicated accelerator team works: it helps teams apply AI‑first methods in their own context, addressing real constraints while absorbing the pace of change, filtering what matters, and putting it into practice.

     

    From early trials to wider adoption

    After an accelerator sprint, teams usually say the same things: it worked better than they expected, and they now believe AI‑first delivery is possible in their environment.  

    But early success is only the beginning. Follow‑up support helps teams avoid sliding back into old habits and ensures they continue to benefit as AI evolves. 

    We started with small and midsize projects and moved to large, complex ones in both greenfield and brownfield settings.

Over the next four months, we completed 16 accelerator sprints across seven regions, covering customer projects, proofs‑of‑concept, and internal initiatives. The formats varied, but the core model stayed consistent.

    AI‑first delivery takes practice, not tools

    The shift to AI‑first delivery is not about tools. At its core, it is a change in how people work. That kind of shift does not come from training alone, and it will not happen by waiting for the perfect moment. Real progress comes from practical experimentation, steady learning, and support as the work evolves.


Every organization will face its own constraints. But the pattern we have seen across 16 sprints is consistent: AI‑first delivery takes root when someone works beside the team through the change. Not just tooling or training, but guided practice on real work.

    Explore our AI-First Software Delivery capability

     

Compress timelines, manage complexity, and get real value with hybrid human–AI teams through AI-first software delivery, whether you're building anew or modernizing legacy systems.


