    by Tanel Tensing, Group Head of Engineering

    What it takes to deliver AI-first at enterprise scale

    Every enterprise technology organization has adopted AI tools for software delivery. But almost all of that adoption is individual — people enhancing their own work, not organizations changing how they deliver. When leaders look at the bigger picture, the gains are far smaller than expected.


    This has led many enterprises into a measurement paradox: they can’t justify further AI investment because the impact of what they’ve already done remains hard to measure, with adoption happening at the individual rather than systemic level.

    At the same time, the early hype around vibe coding set unrealistic expectations. Enterprises experimented, saw promising results on small or isolated use cases, and then hit a wall as complexity increased. For many, that gap has turned initial excitement into skepticism about whether AI can support serious, large-scale delivery at all.

    The skepticism all this has created is understandable, but it no longer reflects the current state of the technology. Large language models, agentic frameworks, and the methods around them have crossed a threshold that makes AI-first software delivery at a professional enterprise scale a practical reality. It’s still not easy, but for the first time, the technology is mature enough to support a fundamentally different delivery model - one where AI agents are core participants in the process, not just tools that individuals happen to use.

    The difference between AI-assisted and AI-first is the difference between giving people better tools and rebuilding the operating model itself.

    What changes when AI becomes a core delivery capability

    The shift from AI-assisted to AI-first touches every part of how software gets delivered. In our work with enterprise clients, three areas consistently require the most rethinking.

    The first is how teams work.

    In an AI-first model, AI agents operate across the full delivery workflow: from discovery and problem framing to implementation and validation. Human team members shift toward directing and validating rather than executing, and that shift reshapes every role on the team: product owners become structured context suppliers, architects become workflow designers, and testers become guardrail designers. It’s not about replacing people but redirecting expertise.

    This changes team composition in two ways: teams get smaller, and the people on them get more senior. When AI handles execution, the remaining human work is problem decomposition, architectural decisions, and quality judgment. A smaller senior team can deliver what previously required a much larger group, but only if the people on it have the experience to direct AI effectively.

    The second is how delivery economics work.

    Traditional models price software delivery by effort: people multiplied by time. When AI agents handle a significant share of execution, the cost structure changes fundamentally. That shift makes previously uneconomical work viable; for example, legacy modernizations that once required years and millions can now be scoped as months-long engagements. This expands not just margins, but the range of work organizations can take on.

    At the same time, the metrics leaders rely on to track delivery - velocity, throughput, cost per feature - were designed around human work patterns and don’t translate cleanly to hybrid human-AI delivery, where work moves through the pipeline completely differently. This makes it harder to benchmark teams, compare vendors, and build the business cases that boards expect. Organizations that figure out how to measure AI-first delivery accurately will have a real advantage in making confident investment decisions.

    The third is how delivery governance and accountability work.

    Traditional quality gates and review cycles don't disappear in an AI-first model. If anything, governance gets harder. Organizations need to maintain existing compliance requirements while adding new ones: traceability of what AI generated versus what humans authored, auditability of AI-driven decisions, and clear ownership when something goes wrong. When an AI agent introduces a security flaw or makes a poor architectural choice, the traditional chain of accountability doesn't map cleanly. Someone needs to own the guardrails, someone needs to own the output, and the lines between those responsibilities are new and unresolved in most organizations.

    These changes are deeply connected. Smaller, more senior teams need new economic models to justify their composition. New economic models need reliable measurement, which requires clear governance. And governance only works when the people making decisions are senior enough to understand what AI is producing. None of these can be solved in isolation.

    Evidence from real enterprise environments

    At Nortal, we started applying AI systematically to enterprise delivery early. And as the technology has matured over the past year, so has our approach. We're now running AI-first delivery on production systems in regulated, high-stakes environments.

    Rebuilding from legacy systems

    A national tax authority used AI to reverse-engineer its legacy tax management system, extracting both technical and business requirements from the existing codebase. This approach — we call it AI Legacy Archaeology — turns the system that runs the business into the specification for the one that replaces it. AI then drives the implementation of the new platform based on those extracted specifications.

    Compressing time to production

    A federal procurement marketplace in a major European economy went from zero to production in four months - unprecedented for a highly regulated public sector platform handling millions of records under strict security standards.

    Turning complexity into delivery

    A defense sector system required navigating thousands of pages of NATO integration standards - a rigorous bottleneck for any human team, but perfect input for an AI-first workflow. AI agents processed the documentation, extracted relevant specifications, and implemented the solutions, turning a mountain of standards into working code.

    The pattern across these projects is consistent: AI didn't just accelerate one phase; it changed the entire delivery workflow. But the human-to-AI effort ratio can vary dramatically by delivery phase. In some phases AI does 90% of the work; in others, humans do. AI-first doesn't mean AI-only - it means knowing where AI leads and where humans lead.

    AI-first delivery isn’t about adoption; it’s about redesign

    The gap that matters isn't in tool adoption; it's between teams where individuals use AI and teams where AI is woven into the delivery process itself. Most enterprises we work with find that gap is wider than they expected. They've invested in tools and training, but the delivery workflow, team structure, and economics haven't followed.

    The organizations pulling ahead aren't the ones with the best AI tools; they're the ones restructuring delivery around what AI makes possible. New team models, new economics, new governance. The tools are increasingly available to everyone; what takes time to build is the institutional muscle to use them as a delivery system.

    And the enterprises doing this now aren't just delivering faster; they're taking on work that traditional delivery models simply can't match.



